CYBERSECURITY
Cybersecurity refers to the practice of protecting computer systems, networks, and sensitive information from
unauthorized access, attacks, theft, damage, or disruption. The CIA triad is a well-known model of
cybersecurity, which stands for confidentiality, integrity, and availability.
Confidentiality refers to the protection of sensitive information from unauthorized access or disclosure.
This can be achieved through encryption, access controls, and other security measures that ensure only
authorized personnel can access sensitive data.
Integrity refers to the assurance that information is accurate, complete, and has not been altered by
unauthorized parties. This can be achieved through hashing, digital signatures, checksums, and other
measures that detect tampering.
Availability refers to the assurance that information and systems are accessible and usable by authorized
personnel. This can be achieved through redundancy, disaster recovery planning, and other measures
that ensure systems and data are available when needed.
Risk, vulnerability, threat, and exploit are terms commonly used in cybersecurity:
Risk: The probability of a threat exploiting a vulnerability and causing damage or harm to an
organization's assets. Risk is typically measured in terms of likelihood and impact.
Vulnerability: A weakness or flaw in a system, application, or process that could be exploited by a
threat to cause harm. Vulnerabilities can result from software bugs, misconfigurations, or weak
security practices.
Threat: Any potential danger that can exploit a vulnerability and cause harm to an organization's
assets. Threats can come from a variety of sources, including hackers, malware, ransomware,
phishing attacks, and more.
Exploit: A software tool, code, or technique used to take advantage of a vulnerability and gain
unauthorized access to a system or network. Exploits can be used to execute malicious code, steal
data, or gain control of a system.
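The "likelihood and impact" measure of risk mentioned above is often sketched as a simple product. Here is a minimal illustration (the 1-5 scales and the example scenarios are assumptions for demonstration, not a standard):

```python
# A minimal qualitative risk score: likelihood x impact, each on a 1-5 scale.
# The scales and scenarios here are hypothetical examples, not a standard.
def risk_score(likelihood: int, impact: int) -> int:
    return likelihood * impact

# An unpatched internet-facing server: very likely to be probed, high impact
print(risk_score(likelihood=4, impact=5))  # 20 -- prioritize this

# A hardened internal test box: unlikely target, low impact
print(risk_score(likelihood=1, impact=2))  # 2 -- low priority
```

Scores like these are commonly used to rank which vulnerabilities to remediate first.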
Cybersecurity attacks are malicious activities carried out by cybercriminals to compromise the confidentiality,
integrity, and availability of computer systems, networks, and sensitive data. The following are some common
types of cybersecurity attacks:
TYPES OF ATTACKS
1. Malware attack - A malware attack is a type of cyberattack in which malicious software, or "malware," is
used to gain unauthorized access to a computer system, steal data, or cause other types of harm. Malware
can take many forms, including viruses, worms, Trojans, ransomware, spyware, and adware.
a. Virus - A virus is a program or code that can replicate itself and spread to other computers or devices.
It can modify or delete files and data, and it may also steal sensitive information.
b. Worm - A worm is a self-replicating program that can spread rapidly over a network or the internet. It
can consume a significant amount of system resources, cause system crashes, and spread other types
of malware.
c. Trojan - A Trojan disguises itself as one thing (such as a legitimate program) but does something else.
It can open a backdoor to allow hackers to access the infected system, steal data, or install other
types of malware.
d. Rootkit - Provides unauthorized access to a computer without being detected. A rootkit is a type of
malware that can hide its presence and activities from the user and security software. It can allow
hackers to gain remote access to the infected system and steal sensitive information.
e. Adware - Displays unwanted advertisements, often bundled with free software.
f. Spyware - Secretly monitors user activity and collects information. Ex. Keylogger, which records
keystrokes to capture passwords and other sensitive data.
g. Backdoor - A hidden entry point that allows unauthorized access to the system.
h. Botnet - A network of bots (infected machines) controlled by a central entity (e.g., a criminal
organization). Created by infecting large numbers of computers to take control of them and launch
malicious acts such as DDoS attacks and Bitcoin mining.
i. Ransomware - Encrypts the victim's files or locks the system, then demands payment (a ransom) to
restore access.
2. Network attack - A network attack is a type of cyberattack where an attacker tries to gain
unauthorized access to a network or networked devices in order to steal or destroy data, disrupt normal
network operations, or plant malware or viruses. Network attacks can be launched using various
techniques, such as Denial of Service (DoS), Distributed Denial of Service (DDoS), and Man-in-the-Middle
(MitM) attacks.
a. Man-in-the-middle - involves intercepting communications between two parties to steal
sensitive information or modify the communication. This can be done by compromising the network
infrastructure or by using social engineering to trick users into installing malware.
IP Spoofing - Using a false source address to disguise the attacker's identity
(mitigate with firewalls that filter traffic)
WiFi Eavesdropping - Intercepting/listening to wireless network traffic. Mitigate by
using WiFi Protected Access (WPA2), strong passwords for network authentication, and a VPN
DNS cache poisoning attack (or DNS spoofing) - Corrupting the DNS resolver's cache
with incorrect information in order to redirect traffic from a legitimate website to a
fraudulent one. The goal of DNS cache poisoning is to redirect users to malicious
websites or servers, intercept their traffic, or steal sensitive information such as login
credentials or financial data. For example, an attacker could redirect users trying to
access a legitimate banking website to a fake website that looks identical to the real
one, and capture their login credentials and other sensitive data.
b. Denial-of-service attack (DoS) - A single device or computer is used to send a high volume of
traffic or requests to a target, with the goal of overwhelming its resources and making it unavailable
to legitimate users. The attacker can achieve this by exploiting vulnerabilities in the target's software
or network infrastructure or by flooding it with traffic from a botnet.
Ping of death - Sending a single ICMP (Internet Control Message Protocol) packet that
is larger than what the system can handle
Ping flood - Sending an overwhelming number of ICMP packets. The goal is to consume
all available resources, preventing the target from serving legitimate traffic
c. Distributed denial-of-service attack (DDoS) - In contrast, a DDoS attack involves multiple
devices or computers, typically spread across different locations and networks, which are coordinated
to launch an attack on the target. The devices are often compromised through malware or other
means, and the attacker controls them remotely to send a high volume of traffic or requests to the
target. This makes DDoS attacks more difficult to defend against, as they can come from multiple
sources and be more challenging to identify and block.
d. Password attack - A type of cyberattack that involves an attacker attempting to guess or crack a
user's password to gain unauthorized access to a system or account.
Brute-force attack - Systematically trying every possible combination of characters
until the password is found
Dictionary attack - Trying passwords from a list of commonly used words and passwords
Phishing email - Tricking the user into revealing the password directly
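The dictionary attack described above can be sketched in a few lines: given a stolen password hash, the attacker hashes each word from a list of common passwords and compares. The password, wordlist, and function names below are made-up examples for illustration only:

```python
import hashlib

def sha256_hex(password):
    # Hash a candidate password the way the (hypothetical) system stored it
    return hashlib.sha256(password.encode()).hexdigest()

# Hypothetical stolen hash of a weak password
stored_hash = sha256_hex("sunshine")

# A tiny wordlist of commonly used passwords
wordlist = ["password", "123456", "qwerty", "sunshine", "letmein"]

def dictionary_attack(target_hash, candidates):
    # Hash each common password and compare against the stolen hash
    for word in candidates:
        if sha256_hex(word) == target_hash:
            return word
    return None

print(dictionary_attack(stored_hash, wordlist))  # sunshine
```

This is why common or dictionary-word passwords fall quickly, while a long random password forces the attacker back to brute force over the full character space.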
3. Social Engineering - A type of attack that relies on exploiting human psychology and emotions, rather
than technical vulnerabilities in computer systems or networks. Attackers use tactics such as trust,
fear, greed, urgency, and curiosity to manipulate people into performing actions or divulging sensitive
information that can be used for nefarious purposes.
a. Phishing - Relies on tricking individuals rather than exploiting vulnerabilities in network
infrastructure. Sent to many people, with no specific target.
b. Spear phishing - Targets a specific individual or group.
c. Pretexting - Tricking individuals into divulging sensitive information by inventing a scenario, e.g.,
pretending to be from a legitimate company and pressuring the victim.
d. Spoofing - Impersonating a trusted source.
e. Baiting - Luring victims with free gifts in exchange for sensitive information.
f. Tailgating - Following a real employee into the office to gain physical access.
g. Whaling - Targeting high-profile individuals (such as executives) for a big ransom or valuable information.
h. Vishing - Voice phishing, carried out over phone calls or VoIP.
i. Shoulder surfing - Watching over someone's shoulder as they enter credentials or view sensitive data.
j. Impersonating - Pretending to be you, e.g., calling your bank and asking for a password reset.
k. Dumpster diving - Searching through discarded documents and media for sensitive information.
l. Evil twin - A rogue wireless access point that looks identical to a legitimate one, but is controlled
by the attacker.
4. Client side attack - Targets the client's computer or device rather than the server side. Exploits
vulnerabilities in the software running on a user's device (compromised websites, untrusted links, etc.)
5. Injection attack - a type of cyberattack that involves injecting malicious code into an application or
system to exploit a vulnerability and gain unauthorized access or control. Injection attacks typically
occur when an application or system does not properly validate user input, allowing an attacker to
insert malicious code or commands.
a. SQL Injection - Inserting malicious SQL code into a web application to bypass authentication or to
access the database
b. Cross-site scripting (XSS) - Injecting malicious scripts into a website, which are executed when the page loads.
c. Command injection - Injecting malicious commands into an application to execute actions (e.g., deleting files)
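The SQL injection risk above comes from building queries by string concatenation. The sketch below, using Python's built-in sqlite3 module with a made-up users table, shows the classic `' OR '1'='1` payload defeating a naive query, and how a parameterized query neutralizes it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: concatenation lets the payload rewrite the WHERE clause
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(len(vulnerable))  # 1 -- the payload matched every row

# Safe: a parameterized query treats the payload as a plain literal value
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(safe))  # 0 -- no user is literally named "' OR '1'='1"
```

The same principle (never splice untrusted input into a query or command string) is the core defense against all three injection variants listed above.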
Prevention
a) Use strong passwords and multi-factor authentication - Prevent unauthorized access.
b) Keep software up to date - Patch vulnerabilities that can be exploited by an attacker.
c) Implement firewalls - Filter incoming traffic.
d) Implement network segmentation - Divide a larger computer network into several smaller sub-
networks that are each isolated from one another (limits the damage of an attack).
e) Access control lists - Control access to network resources and limit traffic.
f) Conduct regular security assessments - Testing can identify vulnerabilities and potential areas of
weakness in a network or system.
g) Use anti-virus software and make sure it is updated.
h) Limit access to sensitive data.
i) Train employees - To recognize untrusted emails, links, untrusted WiFi, and invalid SSL certificates.
j) Back up regularly.
k) Stay informed.
MODULE TWO CRYPTOLOGY
Cryptology is the science of encryption and decryption of information. It includes several techniques
such as symmetric encryption, asymmetric encryption, hashing, and public key infrastructure (PKI).
Symmetric encryption, also known as shared secret encryption, is a technique that uses a single secret
key to encrypt and decrypt data. Both the sender and receiver must have the same key. It is a fast and
efficient way to encrypt data but is vulnerable to key distribution and management issues.
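The defining property of symmetric encryption (one shared key both encrypts and decrypts) can be illustrated with a toy XOR cipher. This is purely a sketch of the shared-key idea; XOR with a repeating key is trivially breakable, and real systems use ciphers such as AES:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; applying the same key
    # a second time undoes the operation, so one key does both jobs.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = b"secret-key"  # both sender and receiver must hold this key
ciphertext = xor_cipher(b"attack at dawn", shared_key)
plaintext = xor_cipher(ciphertext, shared_key)  # same key decrypts
print(plaintext)  # b'attack at dawn'
```

The toy also makes the key-distribution problem visible: the scheme is only as safe as the channel used to share `shared_key`, which is exactly the weakness asymmetric encryption addresses.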
Asymmetric encryption, also known as public-key encryption, uses two different keys, a public key and
a private key, to encrypt and decrypt data. The public key can be shared freely, while the private key
is kept secret. It is a more secure way to encrypt data than symmetric encryption, but it is slower and
more resource-intensive.
Hashing is a one-way encryption technique that converts data into a fixed-length string of characters.
It is commonly used to verify the integrity of data by comparing the hash value of the original data
with the hash value of the received data. Hashing is not designed to be decrypted, making it useful for
protecting passwords and other sensitive data.
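The integrity check described above (compare the hash of the received data with the hash of the original) can be sketched with Python's standard hashlib; the messages are made-up examples:

```python
import hashlib

original = b"wire $100 to account 12345"
digest = hashlib.sha256(original).hexdigest()  # fixed-length fingerprint

received = b"wire $900 to account 99999"  # tampered with in transit
if hashlib.sha256(received).hexdigest() == digest:
    print("integrity verified")
else:
    print("data was modified")  # even a one-byte change alters the hash
```

Because any change to the input produces a completely different digest, comparing hashes detects tampering without ever needing to reverse (decrypt) the hash.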
PKI, or Public Key Infrastructure, is a system that uses asymmetric encryption and digital certificates
to secure communications over a network. PKI involves the use of a trusted third party, known as a
Certificate Authority (CA), to issue and manage digital certificates that are used to authenticate users,
devices, and services. PKI is commonly used for secure email, online transactions, and other secure
communications.
In summary, symmetric encryption is a fast and efficient way to encrypt data, while asymmetric
encryption is more secure but slower and resource-intensive. Hashing is a one-way encryption
technique used to protect data integrity, and PKI is a system used to secure communications over a
network using digital certificates and asymmetric encryption.
The Three A's of Security, also known as AAA, refers to Authentication, Authorization, and Accounting.
These are the three main components of a security system that helps ensure that only authorized
users are granted access to a network or system, and that their activities are monitored and audited.
1. Authentication: Authentication is the process of verifying the identity of a user or device trying to
access a system or network. This can be done using various methods such as passwords, biometrics,
smart cards, or digital certificates. Authentication ensures that only authorized users can access the
system or network.
Here are some common authentication technologies:
Multi-factor authentication (MFA) is a method of authentication that requires two or more factors
(something you know, something you have, something you are) to verify the identity of a user, device,
or entity. Here are some common examples:
RSA OTP (One-Time Password) is a type of MFA that uses a token to generate a one-time password
that is valid for a short period of time. The token can be a hardware device or a software application
that is installed on the user's device. RSA is a vendor that provides such token-based authentication
solutions.
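One-time-password tokens of this kind typically follow the HOTP/TOTP algorithms standardized in RFC 4226 and RFC 6238 (whether RSA's own tokens use exactly this scheme is not claimed here). A minimal standard-library sketch, with a made-up shared secret:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-based one-time password (RFC 4226)
    msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30) -> str:
    # Time-based variant (RFC 6238): counter = intervals since the epoch,
    # so the code automatically changes every `period` seconds.
    return hotp(secret, int(time.time()) // period)

print(totp(b"shared-secret"))  # a 6-digit code, valid for ~30 seconds
```

The server holds the same secret and computes the same code, so a short-lived value can be verified without ever transmitting the secret itself.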
U2F (Universal 2nd Factor) is an open authentication standard that uses a dedicated hardware token,
such as a USB or NFC security key, as a second factor alongside a password. The key performs a
cryptographic challenge-response with the service, so there is no code for the user to type and the
mechanism resists phishing. U2F devices aim to simplify the authentication process while maintaining
a high level of security.
Certificate is a digital document that is used to verify the identity of a user, device, or entity in a
networked environment. It contains information about the entity it represents, such as its name,
public key, and the name of the Certificate Authority (CA) that issued the certificate. The certificate is
digitally signed by the CA, which provides assurance that the information contained in the certificate
is accurate and has not been tampered with.
When a user or device attempts to authenticate with a server, the server may request a certificate
from the user or device to verify their identity. The server can then check the digital signature of the
certificate to ensure that it was issued by a trusted CA and that the entity it represents is authorized
to access the server's resources.
CA stands for Certificate Authority. A CA is a trusted entity that issues digital certificates and verifies
the identities of entities that use those certificates. CAs are commonly used in web browsers to
authenticate websites, where they are commonly referred to as SSL (Secure Sockets Layer) or TLS
(Transport Layer Security) certificates. When a user visits a website secured by SSL/TLS, their browser
checks the certificate to ensure that it was issued by a trusted CA and that the website is genuine and
secure. If the certificate cannot be verified, the browser will display a warning message to the user.
CAs play a critical role in ensuring the security of networked environments by providing a trusted
mechanism for authenticating the identities of users, devices, and entities.
RADIUS, Kerberos, TACACS, and Single sign-on (SSO) are all different authentication protocols used in
networking, and they each have unique features and use cases.
RADIUS:
RADIUS (Remote Authentication Dial-In User Service) is a protocol that provides centralized
authentication, authorization, and accounting (AAA) management for remote access servers such as
VPNs and wireless access points. When a user attempts to connect to a RADIUS client, the client sends
an authentication request to the RADIUS server. The RADIUS server checks the user's credentials
against a user database, typically stored in Active Directory, and sends an authentication response
indicating whether the user is authorized to access the requested resource.
Kerberos:
Kerberos is a network authentication protocol that uses symmetric encryption and a trusted third-
party service called the Key Distribution Center (KDC) to provide secure authentication for client-
server applications. When a user attempts to authenticate to a Kerberos-enabled service, the user's
computer sends an authentication request to the KDC, which issues a ticket-granting ticket (TGT). The
TGT is used to authenticate the user to other Kerberos-enabled services without requiring the user to
enter their credentials again.
TACACS:
TACACS (Terminal Access Controller Access-Control System) is a protocol that provides centralized
authentication, authorization, and accounting (AAA) management for network devices such as
routers, switches, and firewalls. When a user attempts to access a network resource, the network
device sends an authentication request to the TACACS server. The TACACS server authenticates the
user against a user database, typically Active Directory, and sends an authentication response to the
network device indicating whether the user is authorized to access the requested resource.
SSO:
Single sign-on (SSO) is an authentication process that allows users to authenticate once and access
multiple resources without re-entering their credentials. SSO works by using a centralized
authentication system that issues tokens or tickets to users after they authenticate. When the user
attempts to access another resource, the resource sends a request to the authentication system,
which verifies the user's identity and issues a new token or ticket for the resource. This allows the
user to access multiple resources without re-entering their credentials.
Overall, these protocols serve different purposes and are used in different contexts to provide secure
access to network resources. RADIUS is commonly used for remote access authentication, Kerberos is
commonly used for client-server authentication, TACACS is commonly used for network device
authentication and access control, and SSO is used to improve user efficiency and security by reducing
the need for multiple logins.
2. Authorization: Authorization is the process of determining what an authenticated user or device is
allowed to do, such as which resources they can access and what actions they can perform. OAuth and
ACL are examples of authorization mechanisms used to control access to resources.
OAuth is an authorization protocol used to manage access to web applications and APIs by allowing
third-party applications to access resources on behalf of a user without requiring the user to share
their login credentials with the third-party application. It provides a secure and convenient way for
users to grant access to their resources while maintaining control over their personal data.
An Access Control List (ACL) is a list of permissions associated with a resource that determines which
users or groups are granted access to that resource and what actions they can perform on it. It is
commonly used in network security and file systems to manage access to resources.
Both OAuth and ACL provide different ways to manage authorization and access control to resources,
and the choice of which mechanism to use depends on the specific needs of the application or system
being secured.
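At its core, an ACL is just a lookup from resource to permitted users and actions, with everything else denied. A minimal sketch (the resource path, user names, and actions are hypothetical):

```python
# Hypothetical ACL: resource -> {user: set of allowed actions}
acl = {
    "/payroll": {"alice": {"read", "write"}, "bob": {"read"}},
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    # Anything not explicitly granted in the list is denied
    return action in acl.get(resource, {}).get(user, set())

print(is_allowed("bob", "/payroll", "read"))   # True  -- explicitly granted
print(is_allowed("bob", "/payroll", "write"))  # False -- read-only entry
print(is_allowed("eve", "/payroll", "read"))   # False -- not on the list
```

Real file systems and network devices store ACLs in their own formats, but the evaluation logic (look up the entry, deny if absent) is the same.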
3. Accounting: Accounting refers to the tracking and recording of user activities and events within a
system or network. This includes logging user access attempts, resource usage, and system changes.
Accounting provides an audit trail of system activities, which can be used to detect and investigate
security incidents, and to comply with regulatory requirements.
RADIUS and TACACS+ are two protocols commonly used for accounting in network environments.
RADIUS (Remote Authentication Dial-In User Service) is a client-server protocol used to provide
authentication, authorization, and accounting services for remote access and network devices.
RADIUS accounting tracks the usage of network resources and generates reports on user activity, such
as connection time, data usage, and duration of access. RADIUS accounting data can be used to
enforce security policies, monitor network performance, and bill for network usage.
TACACS+ (Terminal Access Controller Access Control System Plus) is another protocol used for
authentication, authorization, and accounting services in network environments. TACACS+ accounting
provides detailed information about network usage, such as the commands executed on network
devices, user activity, and session duration. TACACS+ accounting data can be used for compliance
reporting, auditing, and troubleshooting network issues.
Cisco AAA (Authentication, Authorization, and Accounting) is a framework used by Cisco devices to
provide centralized authentication, authorization, and accounting services for network devices and
users. It allows network administrators to configure and manage network access policies from a
central location and provides visibility into network usage through accounting data. Cisco AAA
supports both RADIUS and TACACS+ protocols for accounting purposes.
In summary, RADIUS, TACACS+, and Cisco AAA are all protocols and frameworks used for providing
accounting services in network environments, and they help network administrators to manage
network resources, enforce security policies, and optimize network performance.
Together, these three components of AAA security provide a comprehensive security framework
that can help prevent unauthorized access, control access to resources, and track user activities
within a system or network.
NETWORK HARDENING :
Network hardening is the process of securing a network by reducing its vulnerability to cyber attacks,
unauthorized access, and other security threats. The goal of network hardening is to make the
network more resilient and less susceptible to compromise, thus enhancing the overall security
posture of an organization.
1. Implicit deny is a security mechanism used to protect network resources by default, by denying
access to them unless explicitly granted. It means that any network traffic or access request that is not
explicitly allowed by the security policy will be denied. Implicit deny is a fundamental principle of
access control and is used in many security devices, such as firewalls and routers. By using implicit
deny, network administrators can control which resources are accessible to users or systems,
reducing the risk of unauthorized access and other security threats. Implicit deny helps to enforce the
principle of least privilege, which means that users or systems are granted only the minimum access
necessary to perform their tasks, reducing the risk of exposure to security vulnerabilities.
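The implicit-deny principle can be sketched as a firewall-style rule evaluation: rules are checked top-down, and traffic that matches no rule is denied by default. The rule set and addresses below are hypothetical examples:

```python
# Hypothetical rule set: (source prefix, port, verdict), evaluated top-down
rules = [
    ("10.0.0.", 22,  "allow"),   # SSH from the internal subnet
    ("10.0.0.", 443, "allow"),   # HTTPS from the internal subnet
]

def evaluate(src_ip: str, port: int) -> str:
    for prefix, rule_port, verdict in rules:
        if src_ip.startswith(prefix) and port == rule_port:
            return verdict
    return "deny"  # implicit deny: no matching rule means no access

print(evaluate("10.0.0.7", 22))     # allow -- explicitly permitted
print(evaluate("203.0.113.9", 22))  # deny  -- never explicitly allowed
```

The security benefit lies in the final `return "deny"`: an administrator who forgets a rule fails closed (some traffic is blocked) rather than failing open (everything is exposed).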
2. Analyzing logs is a critical process in network security. It involves examining network logs to
identify potential security incidents, such as intrusion attempts, malware infections, or other
anomalous behavior. Log analysis systems, normalizing log data, correlation analysis, and flood guard
are some of the methods used to analyze logs.
Normalizing Log Data:
Log data from different sources often arrives in different formats. Normalizing log data means
converting it into a common, consistent format so that entries from different systems contain
comparable information. Normalizing log data helps to streamline the log analysis process and makes
it easier to correlate data from multiple logs.
Correlation Analysis:
Correlation analysis involves combining data from multiple logs to identify potential security threats.
It helps to identify patterns and trends that may not be apparent from a single log. For example,
correlating data from firewall logs and intrusion detection system (IDS) logs can help to identify
potential attacks that may have been missed by either system alone.
Correlation analysis can be used to identify patterns or relationships between security events, such as
network traffic, system log entries, and user activity. By analyzing these events together, security
analysts can gain a better understanding of the overall security posture of the organization and
identify potential threats and vulnerabilities.
For example, if a correlation analysis of network traffic logs reveals a high number of failed login
attempts at a specific time, this may indicate a possible brute-force attack on the network. Similarly, if
an analysis of system log entries shows a correlation between multiple failed attempts to access a
specific file or directory, this may indicate a possible attempt at unauthorized access.
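The failed-login example above amounts to counting events per source and flagging anything over a threshold. A minimal sketch over made-up log entries (timestamps, addresses, and the threshold are all hypothetical):

```python
from collections import Counter

# Hypothetical authentication log entries: (timestamp, source IP, outcome)
log = [
    ("09:00:01", "203.0.113.9", "FAIL"),
    ("09:00:02", "203.0.113.9", "FAIL"),
    ("09:00:03", "203.0.113.9", "FAIL"),
    ("09:00:04", "203.0.113.9", "FAIL"),
    ("09:00:05", "10.0.0.7",    "OK"),
]

THRESHOLD = 3  # failures tolerated before flagging a source

# Count failed attempts per source IP and flag the outliers
failures = Counter(ip for _, ip, outcome in log if outcome == "FAIL")
suspects = [ip for ip, n in failures.items() if n > THRESHOLD]
print(suspects)  # ['203.0.113.9'] -- possible brute-force source
```

Production SIEM tools do the same thing at scale, correlating counts like these across firewall, IDS, and system logs rather than within a single source.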
Flood Guard:
Flood guard is a network security mechanism that helps to protect against distributed denial-of-
service (DDoS) attacks. DDoS attacks are a type of cyber attack where an attacker attempts to
overwhelm a network with traffic, rendering it unavailable to legitimate users.
Flood guard works by monitoring network traffic and identifying traffic patterns that indicate a
potential DDoS attack. When a flood guard mechanism detects an attack, it will limit or block traffic
from the attacker's IP address or the affected network segment. Flood guard can also be configured to
limit traffic from specific protocols or ports.
Flood guard can be implemented on network devices, such as routers and firewalls, or through
specialized DDoS protection services. Flood guard can help to prevent network downtime and protect
against the financial and reputational damage that can result from a successful DDoS attack.
However, flood guard is not a foolproof solution, and it may not be effective against highly
sophisticated DDoS attacks. It is important to have a comprehensive DDoS protection strategy that
includes multiple layers of defense, such as intrusion prevention systems, firewalls, and content
delivery networks, in addition to flood guard.
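One common flood-guard mechanism is a per-source rate limit over a sliding time window. The sketch below (window size, packet limit, and addresses are hypothetical) drops traffic from any source that exceeds the limit:

```python
from collections import defaultdict, deque

WINDOW = 1.0       # seconds
MAX_PACKETS = 100  # packets tolerated per source within the window

recent = defaultdict(deque)  # source IP -> timestamps of its recent packets

def flood_guard(src_ip, now):
    """Return True to forward the packet, False to drop it."""
    q = recent[src_ip]
    while q and now - q[0] > WINDOW:  # forget packets outside the window
        q.popleft()
    if len(q) >= MAX_PACKETS:
        return False                  # rate exceeded: drop or block
    q.append(now)
    return True

# A burst of 150 packets in the same instant: only the first 100 pass
allowed = sum(flood_guard("203.0.113.9", 0.0) for _ in range(150))
print(allowed)  # 100
```

Hardware implementations on routers and firewalls use counters rather than per-packet timestamp queues, but the policy (bound each source's rate, drop the excess) is the same.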
3. Network segmentation is a network security strategy that involves dividing a larger network into
smaller segments or subnetworks, known as zones or security domains. The goal of network
segmentation is to improve the security of the network by limiting the spread of cyber attacks and
minimizing the potential impact of a security breach. Benefits of network segmentation include:
a. Improved security: Network segmentation helps to limit the impact of a cyber attack by
containing the attack within a single segment, reducing the risk of it spreading to other parts of
the network.
b. Better network performance: By dividing a larger network into smaller segments, network traffic
can be managed more efficiently, leading to better network performance.
c. Simplified network management: Network segmentation can make it easier to manage and
monitor network activity by isolating different parts of the network.
d. Compliance with industry regulations: Many industries have regulations that require the use of
network segmentation to protect sensitive data and ensure data privacy.
In summary, network segmentation is an important network security strategy that involves dividing a
larger network into smaller segments to limit the spread of cyber attacks and improve network
performance.
Network hardware hardening is the process of securing the physical devices that make up a network
infrastructure. This can include routers, switches, firewalls, and other network devices. The goal of
network hardware hardening is to prevent unauthorized access, reduce the risk of cyber attacks, and
ensure the availability of network resources.
1. Password management: This involves using strong passwords, changing default passwords, and
implementing a password policy to ensure that passwords are changed regularly.
2. Firmware and software updates: Regularly updating the firmware and software of network devices
helps to ensure that they are protected against known vulnerabilities.
3. Access control: Limiting access to network devices to authorized personnel can help prevent
unauthorized configuration changes or malicious activity.
4. Auditing and monitoring: Regularly auditing and monitoring network devices can help detect
security breaches and identify potential vulnerabilities.
DHCP:
DHCP (Dynamic Host Configuration Protocol) is a network protocol used to automatically assign IP
addresses and other network configuration information to devices on a network. DHCP simplifies
network administration by eliminating the need for manual IP address configuration. DHCP servers
can also assign other network configuration information such as DNS server addresses, default
gateway addresses, and subnet masks.
When a device connects to a network, it sends a broadcast message requesting an IP address. The
DHCP server responds with an available IP address, along with other network configuration
information. The device then configures its network settings automatically based on the information
provided by the DHCP server.
1. Simplified network administration: DHCP eliminates the need for manual IP address configuration,
making network administration more efficient and less error-prone.
2. IP address management: DHCP can help manage IP addresses by ensuring that IP addresses are
assigned only when they are needed, and by preventing IP address conflicts.
3. Centralized network configuration: DHCP servers can be used to centrally manage network
configuration information such as DNS server addresses and default gateway addresses, making it
easier to make changes and updates to network configurations.
4. Scalability: DHCP can be used to automatically assign IP addresses to a large number of devices,
making it scalable for larger networks.
Overall, DHCP is a critical protocol in modern networking that helps simplify network administration
and ensure efficient and effective use of IP addresses.
Dynamic ARP Inspection:
Dynamic ARP Inspection (DAI) is a network security feature that helps prevent ARP (Address
Resolution Protocol) spoofing attacks. ARP spoofing is a type of cyber attack where an attacker sends
fake ARP messages to associate a different MAC address with an IP address, leading to traffic
redirection or interception.
DAI works by inspecting and validating ARP messages and only forwarding legitimate messages. It
uses information from the DHCP Snooping database or a static ARP inspection configuration to
validate the source MAC and IP address of the ARP message. If the message is legitimate, it is
forwarded as usual. If the message is illegitimate, it is either dropped or forwarded to a designated
"black hole" port.
DAI can be configured on a per-VLAN basis and can be used in conjunction with other network
security features such as IP Source Guard and Port Security to provide a comprehensive network
security solution.
1. Enhanced network security: DAI helps prevent ARP spoofing attacks, which can be used to carry out
various types of cyber attacks, including man-in-the-middle attacks and denial-of-service attacks.
2. Improved network performance: DAI can help reduce network congestion by preventing ARP
flooding, which can occur when a device sends a large number of ARP messages.
3. Simplified network management: DAI can be easily configured on a per-VLAN basis, allowing for
flexible network management.
In summary, Dynamic ARP Inspection is a network security feature that helps prevent ARP spoofing
attacks by inspecting and validating ARP messages. It can provide enhanced network security,
improved network performance, and simplified network management.
802.1x with EAP-TLS:
802.1x is a network access control standard that requires devices to authenticate before being granted
access to a network. EAP-TLS (Extensible Authentication Protocol - Transport Layer Security) is an
802.1x authentication method that uses digital certificates for mutual authentication. When a client
device attempts to connect to the network, it is required to present a digital certificate. The
authentication server then verifies the certificate and, if it is valid, grants access to the network.
Strong authentication: EAP-TLS provides strong mutual authentication between the client device and
the authentication server using digital certificates, ensuring that only authorized devices are granted
access to the network.
Enhanced network security: Implementing 802.1x with EAP-TLS provides enhanced network security
by preventing unauthorized access to the network.
Improved network management: 802.1x with EAP-TLS allows network administrators to control which
devices have access to the network and what level of access they are granted, making network
management easier and more efficient.
In summary, implementing 802.1x with EAP-TLS is an excellent way to lock down your network and
provide a high level of network security. It provides strong authentication, enhanced network
security, and improved network management.
In summary, network hardware hardening involves securing physical network devices to prevent
unauthorized access and ensure the availability of network resources. DHCP simplifies network
administration by automatically assigning IP addresses and other network configuration
information to devices. Dynamic ARP Inspection helps to prevent ARP spoofing attacks, while
802.1x provides authentication and authorization for devices connecting to a network.
VPN:
A VPN (Virtual Private Network) creates a secure, encrypted tunnel for traffic travelling over an
untrusted network such as the Internet. When used for remote access, VPNs allow employees to
securely connect to the corporate network from anywhere in the world, using an encrypted tunnel that
protects their traffic from interception or snooping. This is particularly important for businesses that
need to protect sensitive data or intellectual property from unauthorized access.
Similarly, when used to link two or more networks together, VPNs create an
encrypted tunnel between the networks, allowing them to securely
communicate with each other over an untrusted network such as the Internet.
This is commonly used by businesses with multiple offices or remote employees
who need to access resources located in a different physical location.
WIRELESS SECURITY
Best security option for securing a WiFi network
The first security protocol introduced for WiFi networks was Wired Equivalent Privacy (WEP).
WEP Encryption
WEP (Wired Equivalent Privacy) is a security protocol that was designed to provide confidentiality
and integrity for wireless networks. However, it has since been found to have serious
security vulnerabilities and is no longer recommended for use.
Weakness in Encryption: WEP uses a weak encryption algorithm based on the RC4 stream cipher,
which has several known vulnerabilities that allow attackers to easily crack the encryption and
intercept wireless network traffic.
Weakness in Key Management: WEP uses a weak key management scheme that makes it
vulnerable to brute-force attacks. Specifically, WEP uses a 24-bit initialization vector (IV) along
with a user-defined key to encrypt data. The short length of the IV allows attackers to quickly
determine the key by brute force, making it easy to crack WEP encryption.
Lack of Authentication: WEP does not provide any authentication mechanism to verify the
identity of users or devices on the wireless network. This makes it vulnerable to spoofing attacks,
where an attacker can impersonate a legitimate user or device on the network and gain
unauthorized access.
No Integrity Protection: WEP does not provide any integrity protection, which means that an
attacker can modify or inject packets on the wireless network without detection. This makes it
vulnerable to attacks such as packet injection, where an attacker can insert malicious packets
into the network to compromise its security.
WEP used a relatively weak encryption algorithm that was vulnerable to attacks that could allow
attackers to decrypt wireless traffic and eavesdrop on network activity. WEP also relied on static
encryption keys, which made it easier for attackers to crack the key and gain unauthorized access to
the network.
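The core flaw is keystream reuse: WEP builds each packet key by prepending a 24-bit IV to the shared key, so IVs repeat quickly, and two packets encrypted under the same IV use the same RC4 keystream. The consequence can be demonstrated with a short toy sketch (the RC4 implementation is for illustration only; the key, IV, and plaintexts are made up):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR data with the keystream
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

iv = b"\x01\x02\x03"        # 24-bit IV, transmitted in the clear
wep_key = b"secret"          # shared key
p1 = b"attack at dawn!"
p2 = b"defend at dusk!"
c1 = rc4(iv + wep_key, p1)   # WEP simply prepends the IV to the key
c2 = rc4(iv + wep_key, p2)   # reused IV -> identical keystream

# XORing the two ciphertexts cancels the keystream, leaking the XOR of
# the plaintexts to a purely passive eavesdropper:
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(p1, p2))
```

With only 2^24 possible IVs, repeats are guaranteed on a busy network, which is one reason WEP keys can be recovered from captured traffic alone.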
WPA addressed these weaknesses by introducing several improvements in the security mechanism.
First, it introduced a new and much stronger encryption algorithm called TKIP (Temporal Key Integrity
Protocol). TKIP uses a unique encryption key for each data packet, making it much more difficult for
attackers to intercept and decode wireless traffic.
WPA also introduced a more robust authentication mechanism called IEEE 802.1X, which enabled
network administrators to control access to the wireless network by requiring users to authenticate
with a username and password or a digital certificate. This helped to prevent unauthorized access to
the network and protect against attacks such as brute force password cracking.
Another key improvement introduced by WPA was the use of message integrity checking (MIC) to
prevent packet forgery and tampering. MIC ensures that data packets are not modified in transit,
providing an additional layer of security to the wireless network.
Overall, WPA was designed as a short-term replacement for WEP that addressed its known
vulnerabilities and provided a stronger and more secure alternative. Later, an improved version of
WPA called WPA2 was introduced that provided even stronger security and remains the current
standard for wireless network security.
Disabling WPS (Wi-Fi Protected Setup) can also reduce the likelihood of a WPS brute-force attack.
WPS is a feature that allows users to easily set up a secure wireless network by using a PIN or pushing
a button on the router. However, WPS has been found to have significant vulnerabilities, particularly
in the PIN-based method.
In a WPS brute-force attack, an attacker attempts to guess the WPS PIN in order to gain access to the
wireless network. This type of attack can succeed because the WPS PIN is only 8 digits long and,
due to a design flaw, the protocol validates the PIN in two independent halves, so it can be
guessed through trial and error in at most roughly 11,000 attempts.
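The protocol reports success or failure for each half of the PIN separately (4 digits for the first half, 3 digits plus a checksum digit for the second), which collapses the search space far below the 10^8 an 8-digit PIN suggests. The arithmetic can be sketched in Python (the checksum routine follows the published WPS PIN checksum algorithm; the example PIN is arbitrary):

```python
def wps_checksum(first7: int) -> int:
    """Checksum digit appended to the first 7 digits of a WPS PIN."""
    accum, pin = 0, first7
    while pin:
        accum += 3 * (pin % 10)
        pin //= 10
        accum += pin % 10
        pin //= 10
    return (10 - accum % 10) % 10

# Each half is confirmed independently: 10**4 guesses for the first half,
# then 10**3 for the second (its last digit is the checksum, not secret).
attempts = 10**4 + 10**3
print(attempts)                 # 11000 tries worst case, not 10**8
print(wps_checksum(1234567))    # 0 -> 12345670 is a valid PIN
```

At even one guess per second, 11,000 attempts is a matter of hours, which is why lockout periods help but disabling WPS is the safer choice.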
Disabling WPS removes the ability for an attacker to use the PIN-based method to gain access to the
wireless network. Without the WPS PIN, an attacker would need to resort to other methods such as
cracking the Wi-Fi password or exploiting other vulnerabilities in the wireless network.
However, if you need to use WPS, you can use a lockout period to protect your network against brute
force attacks. A lockout period blocks any further connection attempts for a set amount of time if an
attacker tries to connect to your network using WPS and enters the wrong PIN a certain number of
times. This means that if an attacker tries to guess the WPS PIN repeatedly, the router will temporarily
block any further connection attempts for a set amount of time, making it more difficult for the
attacker to gain access to your network.
Overall, while using a lockout period can add some extra protection, disabling WPS altogether is the
safest option to reduce the likelihood of brute force attacks. You should also ensure that your wireless
network is secured with a strong password, using WPA2 or WPA3 encryption, and regularly update
your router's firmware to keep it secure.
However, it's important to note that disabling WPS is not a complete solution for securing a wireless
network. Other security measures such as strong Wi-Fi passwords, WPA2 with AES/CCMP encryption,
and regular firmware updates should also be implemented to secure a wireless network.
WPA2
WPA (Wi-Fi Protected Access) is a security protocol that was designed to replace the older WEP
(Wired Equivalent Privacy) protocol for securing wireless networks. Although WPA improved security
over WEP, it still had some vulnerabilities that were addressed in the later WPA2 protocol.
One of the main weaknesses of WPA was its use of the Temporal Key Integrity Protocol (TKIP) for
encryption. TKIP was an improvement over the encryption used in WEP, but it was still susceptible to
some attacks, such as the chop-chop attack and the fragmentation attack. These attacks allowed
attackers to bypass encryption and intercept traffic on the wireless network.
WPA2 addressed this weakness by using the Advanced Encryption Standard (AES) algorithm for
encryption, which is much stronger and more secure than TKIP. AES encryption is resistant to
brute-force attacks and provides better security for wireless networks.
Another weakness of WPA was the use of the Message Integrity Code (MIC) for integrity checking,
which was vulnerable to attacks such as the Forgery Attack and the Replay Attack. WPA2 addressed
this weakness by introducing a new encryption protocol called Counter Mode with Cipher Block
Chaining Message Authentication Code Protocol (CCMP). CCMP provides improved data integrity and
confidentiality, making it much more difficult for attackers to intercept and tamper with wireless
traffic.
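The design CCMP embodies, encrypting the payload and attaching an integrity tag that is verified before the frame is accepted, can be illustrated with a stdlib-only Python sketch. Note this is not real CCMP (which uses AES in counter mode with a CBC-MAC); the SHA-256-based keystream and HMAC tag here are stand-ins used only to show the structure:

```python
import hashlib
import hmac

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Counter-mode keystream built from SHA-256 (a stand-in for AES-CTR)."""
    out, counter = bytearray(), 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:length])

def seal(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    """Encrypt, then append an integrity tag over the ciphertext."""
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return ct + tag

def open_sealed(key: bytes, nonce: bytes, sealed: bytes) -> bytes:
    """Verify the tag first; a tampered frame is rejected before decryption."""
    ct, tag = sealed[:-32], sealed[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("tampered frame rejected")
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))
```

Flipping even one ciphertext bit invalidates the tag, which is exactly the property WEP lacked and packet-injection attacks exploited.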
WPA2 also introduced the use of a stronger key management mechanism called the 802.11i standard,
which provides improved protection against dictionary attacks and other forms of attack.
Overall, WPA2 provides stronger security than WPA by addressing the vulnerabilities and weaknesses
of the earlier protocol. WPA2 is currently the recommended standard for securing wireless networks
and provides a high level of security when implemented correctly.
Wireless Hardening
The best security protocol to use is WPA2 with AES/CCMP mode.
WPA2 with AES/CCMP mode is considered the best security protocol to use for securing wireless
networks due to several factors.
First, WPA2 provides stronger security than its predecessor, WPA. WPA2 uses the Advanced
Encryption Standard (AES) algorithm for encryption, which is more secure than the Temporal Key
Integrity Protocol (TKIP) used in WPA. AES encryption is resistant to brute-force attacks and provides
better security for wireless networks.
Second, CCMP (Counter Mode with Cipher Block Chaining Message Authentication Code Protocol) is
the recommended encryption protocol for WPA2, which provides improved data integrity and
confidentiality. CCMP is more efficient than the encryption algorithm used by WPA and provides
better protection against attacks such as replay attacks and dictionary attacks.
Third, WPA2 has a more robust key management mechanism than WPA. WPA2 uses the 802.11i
standard for key management, which provides improved protection against dictionary attacks and
other forms of attack.
Finally, WPA2 with AES/CCMP mode is widely recognized as a secure and reliable security protocol for
wireless networks. It has been extensively tested and is recommended by security experts and
organizations such as the Wi-Fi Alliance and the National Institute of Standards and Technology
(NIST).
Overall, WPA2 with AES/CCMP mode provides a high level of security for wireless networks and is
currently the recommended standard for securing wireless networks.
WPA2 Enterprise would offer the highest level of security for a WiFi network. It offers the best
encryption options for protecting data from eavesdropping third parties, and does not suffer from
the manageability or authentication issues that WPA2 Personal has with a shared key mechanism.
WPA2 Enterprise used with TLS certificates for authentication is one of the best solutions available.
Network Monitoring
1. Sniffing the network
"Sniffing the network" or "packet sniffing" refers to the practice of intercepting and
analyzing network traffic for the purpose of gathering information or monitoring activity.
This can be done using specialized software, such as network analyzers or packet sniffers,
which capture and decode packets of data as they pass through a network.
"Promiscuous mode" is a feature of a network interface that allows it to receive and
analyze all traffic arriving on its network segment, regardless of whether the traffic is
addressed to the interface or not. This mode is commonly used by network administrators
and security professionals to diagnose and troubleshoot network issues, monitor network
activity, and identify security threats.
"Monitor mode" is a special mode that is available on wireless network adapters, which
allows them to capture and analyze all wireless traffic that is being transmitted in their
vicinity. This mode is useful for network administrators and security professionals who
need to troubleshoot wireless network issues or identify potential security threats.
Both promiscuous mode and monitor mode can be used for legitimate purposes, such as
network troubleshooting and security analysis, but they can also be used for malicious
purposes, such as eavesdropping on network traffic or stealing sensitive information.
Therefore, it is important to use these tools responsibly and in accordance with ethical and
legal guidelines.
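What a sniffer actually collects is raw frames, which it then decodes layer by layer. A minimal Python sketch of decoding a captured Ethernet header is below; the frame bytes are fabricated for the example, and capturing real frames requires a raw socket with elevated privileges (shown only in a comment):

```python
import struct

def mac_str(raw: bytes) -> str:
    """Render 6 raw bytes as a colon-separated MAC address."""
    return ":".join(f"{b:02x}" for b in raw)

def parse_ethernet(frame: bytes) -> dict:
    """Decode the 14-byte Ethernet header at the front of a captured frame."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return {"dst": mac_str(dst), "src": mac_str(src),
            "type": hex(ethertype), "payload": frame[14:]}

# Capturing real frames needs a raw socket in promiscuous mode (Linux, root):
#   s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))
#   frame, _ = s.recvfrom(65535)
frame = bytes.fromhex("ffffffffffff" "aabbccddeeff" "0800") + b"payload"
print(parse_ethernet(frame)["src"])   # aa:bb:cc:dd:ee:ff
```

Tools like TCPDump and Wireshark do this same decoding for every protocol layer, which is what makes captured traffic readable.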
TCPDump and Wireshark are two popular tools used for capturing and analyzing network
traffic in order to diagnose and troubleshoot issues, monitor network performance, and
identify potential security threats.
TCPDump is a command-line utility that is used to capture and display packets of data as
they are transmitted over a network. It supports multiple packet capture formats and can
be used to capture packets in real-time or read them from a file. TCPDump is commonly
used by network administrators and security professionals to diagnose network issues,
monitor network performance, and identify potential security threats. TCPDump has a
smaller footprint and requires less system resources than Wireshark.
Wireshark, on the other hand, is a more advanced graphical user interface (GUI) tool that
provides a more comprehensive set of features and capabilities for capturing and analyzing
network traffic. It is widely used for troubleshooting network problems and analyzing
network performance. With Wireshark, network administrators can capture packets in real-time
or read them from a file, and then view and analyze the packets in detail. Wireshark
has an advanced filtering system and provides a more detailed analysis of network traffic,
such as displaying the content of each packet, identifying the source and destination of
each packet, and highlighting potential security threats. However, Wireshark requires more
system resources than TCPDump.
Both TCPDump and Wireshark are powerful tools that can be used to troubleshoot network
issues and identify potential security threats. TCPDump is a command-line utility that is
lightweight and requires less system resources than Wireshark, while Wireshark is a more
advanced GUI-based tool that provides a more comprehensive set of features and
capabilities for analyzing network traffic. In practice, network administrators often use a
combination of both tools to gain a more complete picture of their network's performance
and identify potential issues or security threats.
Why packet capture and analysis fits into security at this point.
Like log analysis, traffic analysis is also an important part of network security.
Traffic analysis is done using packet captures and packet analysis. Traffic on a network is
basically a flow of packets.
Being able to capture and inspect those packets is important to understanding what
type of traffic is flowing on the networks we'd like to protect.
3. Intrusion Detection/Prevention Systems
IDS or IPS systems operate by monitoring network traffic and analyzing it.
As an IT support specialist, you may need to support the underlying platform that the
IDS/IPS runs on. You might also need to maintain the system itself, ensuring that rules are
updated and you may even need to respond to alerts. So what exactly do IDS and IPS
systems do?
The difference between an IDS and an IPS system is that IDS is only a detection
system. It won’t take action to block or prevent an attack when one is detected, it will
only log and alert. But an IPS system can adjust firewall rules on the fly to block or
drop the malicious traffic when it’s detected.
A Network Intrusion Detection System (NIDS) is an IDS that monitors network traffic for
signs of intrusion or attack. It typically analyzes network packets to identify known
attack signatures or abnormal traffic patterns that may indicate an attack.
To capture all the packets arriving on the analysis port, the NIDS host would enable
promiscuous mode on the network interface controller (NIC) for wired networks, or
monitor mode for wireless networks.
By enabling promiscuous mode or monitor mode, the NIC will capture all packets that
pass through the network, regardless of whether they are intended for the NIDS host
or not. The NIDS software running on the host will then analyze these packets to
identify potential security threats or suspicious behavior, such as unauthorized access
attempts, malware infections, or network-based attacks.
Enabling promiscuous mode or monitor mode on the analysis port is a critical step in
configuring a NIDS, as it ensures that all network traffic is captured and analyzed,
providing a comprehensive view of network activity and potential threats.
In the context of a Network Intrusion Detection System (NIDS), port mirroring can be
used to capture network traffic for analysis and detection of security threats. This is
achieved by configuring the switch to mirror traffic from one or more network ports to
a dedicated analysis port, where the NIDS software can capture and analyze the
traffic.
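Signature matching at its simplest is a pattern search over captured payloads. The toy Python sketch below illustrates the idea only; the signatures are hypothetical, and real NIDS engines such as Snort or Suricata use far richer rule languages with protocol decoding and stateful matching:

```python
# Hypothetical signature set mapping a byte pattern to an alert description.
SIGNATURES = {
    b"/etc/passwd": "possible path-traversal attempt",
    b"' OR '1'='1": "possible SQL injection",
}

def inspect(payload: bytes) -> list:
    """Return an alert for each known signature found in a captured payload."""
    return [alert for sig, alert in SIGNATURES.items() if sig in payload]

print(inspect(b"GET /../../etc/passwd HTTP/1.1"))  # one alert raised
print(inspect(b"GET /index.html HTTP/1.1"))        # clean traffic, no alerts
```

An IDS would log and alert on each match; an IPS would additionally push a blocking rule to drop the offending traffic.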
A host-based intrusion detection system (HIDS) is an IDS that monitors activity
on a single host or endpoint, such as a server or workstation. It typically analyzes
system logs, file system changes, and other host activity to identify signs of intrusion
or attack.
NIPS stands for Network Intrusion Prevention System, which is an IPS that operates
at the network level, monitoring network traffic and actively preventing attacks in
real-time.
HIPS stands for Host-based Intrusion Prevention System, which is an IPS that
operates on a single host or endpoint, such as a server or workstation. It typically
analyzes system logs, file system changes, and other host activity to identify signs of
intrusion or attack and can block malicious activity in real-time to prevent an attack
from succeeding.
UTM solutions provide an all-in-one approach to network security, combining multiple security
services and tools into a single platform. UTM simplifies the configuration and enforcement of
security controls and policies, saving time and resources. Security event logs and reporting are also
centralized and simplified to provide a holistic view of network security events.
Set of UTM networked appliances or devices - A set of UTM networked appliances or devices is
a combination of multiple hardware devices that are networked together to provide
comprehensive network security. Each device is responsible for a specific security service or tool,
such as firewall, VPN, or intrusion detection and prevention. This option provides greater
flexibility and scalability compared to stand-alone UTM network appliances. Organizations can
add or remove devices as needed to meet their changing security requirements.
UTM server software application(s) - UTM server software applications are software-based
solutions that run on dedicated servers, virtual machines, or cloud-based infrastructure. This
option provides the greatest flexibility and scalability, allowing organizations to deploy UTM
services and tools across multiple locations and platforms. It is also ideal for organizations that
want to customize their UTM solution to meet their specific security requirements.
Single host - UTM solutions can be deployed on individual devices, such as laptops or desktops,
to provide real-time protection against threats such as viruses, malware, and intrusions. This can
include features like endpoint security, host-based firewalls, and anti-malware tools.
Entire network - UTM solutions can provide comprehensive protection across the entire
network, including firewalls, intrusion detection and prevention, content filtering, and more. This
can help protect against a wide range of threats, including malware, phishing attacks, and
unauthorized access attempts.
In addition to these deployment options, some UTM solutions may also offer hybrid deployment
models, where some components of the network are protected by UTM appliances or software, while
others are protected by standalone security solutions.
The specific extent of UTM protection options will depend on the particular UTM solution being used,
the deployment model, and the organization's security needs and requirements.
UTM solutions offer a range of security services and tool options,
which can include:
Firewall: Can be the first line of defense in catching phishing attacks, spam, viruses, malware, and
other potential threats that attempt to access an organization’s network. Firewalls can be hardware
devices or software applications. Firewalls filter and inspect packets of data attempting to enter and
exit a managed network. Rules can be configured to permit or prevent certain types of packets from
entering the network.
Intrusion detection system (IDS): Passively monitors packets of data and network traffic for unusual
patterns that could indicate an attack. IDS devices can monitor entire networks (NIDS) or just a single
host (HIDS). IDS identifies, logs, and alerts IT Support about suspicious traffic. However, IDS does not
prevent an attack from occurring. This system gives IT Support professionals the opportunity to
inspect flagged events to determine how to handle the threat on a case by case basis.
Intrusion prevention system (IPS): Actively monitors packets and network traffic for potential
malicious attacks. IPS systems can be configured to automatically block attacks or to allow manual
interventions. IPS devices can monitor entire networks (NIPS) or just a single host (HIPS).
Anti-virus and anti-malware: UTM solutions can include anti-virus and anti-malware tools to protect
against viruses, worms, Trojans, and other types of malware.
Spam gateway: Filters, identifies, and quarantines spam email. Spam gateways are network servers
that use Domain Name Server (DNS) management tools to protect against spam.
Web and content filters: Block user access to risky and malicious websites. When a user attempts to
access an unauthorized or suspicious website using a browser, the UTM web filter can prevent the
website from loading. The filter can also be customized to block certain types of websites or specific
URLs, like social media or other websites that might be a distraction in the workplace.
Data leak/loss prevention (DLP): Monitors outgoing network traffic for personal, sensitive, and
confidential data. DLP includes a verification system to determine if the external data transfer is
authorized or malicious, and can block unauthorized attempts.
Virtual Private Network (VPN): Encrypts data and creates a private “tunnel” to safely transmit the
data through a public network.
Stream-based inspection, also called flow-based inspection: UTM devices inspect data samples
from packets for malicious content and threats as the packets flow through the device in a stream of
data. This process minimizes the duration of the security inspection, which keeps network data
flowing at a faster rate than proxy-based inspection.
Proxy-based inspection: A UTM network appliance works as a proxy server for the flow of network
traffic. The UTM appliance intercepts packets and uses them to reconstruct files. Then the UTM
device will analyze the file for threats before allowing the file to continue on to its intended
destination. Although this security screening process is more thorough than the stream-based
inspection technique, proxy-based inspections are slower in the transmission of data.
UTM might be a waste of resources for small businesses: Small businesses may not need a
robust security solution like UTM. The time and money needed to purchase, implement, and
manage a complex UTM system may not provide a significant return on security benefits for a
smaller network. Cybercriminals are more likely to attack larger targets.
Key takeaways
Unified Threat Management (UTM) systems offer multiple options in a comprehensive suite of
network security tools. UTM solutions can be implemented as hardware and/or software and can
protect either a single host or an entire network.
UTM security services and tool options include firewalls, IDS, IPS, antivirus and anti-malware software,
spam gateways, web and content filters, data leak/loss prevention, and VPN services.
The benefits of using a UTM solution include having a cost-effective network security system that is
flexible and adaptable with a management console that is integrated and centralized. The risks of
using UTM include creating a single point of failure for a network security system and it might be an
unnecessary use of resources for small businesses.
By the end of this module, we’ll be able to implement the appropriate methods for system hardening
and application hardening. We’ll also be able to determine the policies to use for operating system
security.
An attack vector is a path or method that attackers use to gain unauthorized access to a network or
system. Attack vectors can include techniques such as phishing, malware, and social engineering, as
well as exploiting vulnerabilities in software or hardware.
The attack surface refers to the total number of entry points that attackers can use to gain
unauthorized access to a network or system. This includes all software and hardware components
that are exposed to the network, such as servers, routers, firewalls, and other devices. The larger the
attack surface, the more vulnerable the network or system is to attack. By reducing the attack surface
through measures such as disabling unnecessary components, the network or system becomes less
vulnerable to attack.
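One way to see the attack surface concretely is to check which TCP ports on a host accept connections: every open port is an entry point an attacker can probe. A minimal Python sketch (the host and port list are placeholders; scan only systems you are authorized to test):

```python
import socket

def open_ports(host: str, ports) -> list:
    """Report which TCP ports accept connections -- each is attack surface."""
    found = []
    for port in ports:
        try:
            # A completed TCP handshake means something is listening.
            with socket.create_connection((host, port), timeout=0.2):
                found.append(port)
        except OSError:
            pass  # closed or filtered port
    return found

# Hypothetical audit of a few common service ports on the local machine:
print(open_ports("127.0.0.1", [22, 80, 443]))
```

Reducing the attack surface then means closing or firewalling every port that isn't strictly required.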
Host-based Firewalls
Host-based firewalls are important for creating multiple layers of security. They protect
individual hosts from being compromised when those hosts are used in untrusted, potentially
malicious environments, and they also protect individual hosts from potentially compromised
peers inside a trusted network. A network-based firewall protects the internal network by
filtering traffic in and out of it, while the host-based firewall on each individual host
protects just that one machine.
A host-based firewall plays a big part in reducing what's accessible to an outside attacker.
It provides flexibility by permitting connections to selective services on a given host only
from specific networks or IP ranges. This ability to restrict connections from certain origins
is usually used to implement a highly secure host or network.
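The first-match, default-deny logic behind such rules can be sketched in Python with the stdlib ipaddress module. The networks, ports, and actions below are hypothetical examples, not a real policy:

```python
import ipaddress

# First-match rule list: (permitted source network, destination port, action).
RULES = [
    (ipaddress.ip_network("10.20.0.0/24"), 22, "ACCEPT"),   # admin subnet -> SSH
    (ipaddress.ip_network("0.0.0.0/0"), 443, "ACCEPT"),     # anyone -> HTTPS
]
DEFAULT = "DROP"  # default-deny: anything not explicitly allowed is dropped

def evaluate(src_ip: str, dst_port: int) -> str:
    """Return the action of the first matching rule, or the default."""
    src = ipaddress.ip_address(src_ip)
    for net, port, action in RULES:
        if src in net and dst_port == port:
            return action
    return DEFAULT

print(evaluate("10.20.0.5", 22))    # ACCEPT
print(evaluate("203.0.113.9", 22))  # DROP -- SSH only from the admin subnet
```

Real host firewalls (iptables/nftables, Windows Defender Firewall) evaluate rules in essentially this first-match fashion, which is why rule ordering and a deny-by-default final rule matter.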
From there, access to critical or sensitive systems or infrastructure is permitted. These are called
bastion hosts or networks, and are specifically hardened and minimized to reduce what's permitted
to run on them.
Bastion hosts are usually exposed to the Internet, so you should pay special attention to hardening
and locking them down to reduce the chances of compromise. But they can also be used as a gateway
or access portal into more sensitive services like core authentication servers or domain controllers.
This would let you implement more secure authentication mechanisms and ACLs on the bastion hosts
without making it inconvenient for your entire company.
Monitoring and logging can be prioritized for these hosts more easily.
Typically, these hosts or networks would also have severely limited network connectivity. It's usually
just to the secure zone that they're designed to protect and not much else.
Applications that are allowed to be installed and run on these hosts, would also be restricted to those
that are strictly necessary since these machines have one specific purpose. Part of the host-based
firewall rules will likely also provide ACLs that allow access from the VPN subnet. It's good practice to
keep the network that VPN clients connect into separate using both subnetting and VLANs. This gives
you more flexibility to enforce security on these VPN clients. It also lets you build additional layers of
defenses.
While a VPN host should be protected using other means, it's still a host that's operating in a
potentially malicious environment. This host is then initiating a remote connection into your trusted
internal network.
These hosts represent another potential vector of attack and compromise. Your ability to separately
monitor traffic coming and going from them is super useful.
There's an important thing to consider when it comes to host-based firewalls, especially for
client systems like laptops: if the users of the system have administrative rights, then they have
the ability to change firewall rules and configurations. This is something you should keep in
mind and make sure to monitor with logging. If management tools allow it, you should also
prevent the host-based firewall from being disabled. This can be done on Microsoft Windows
machines administered using Active Directory, for example.
It wouldn't do much good to have all these defenses in place if we have no idea whether they're
working. We need visibility into the security systems in place to see what kind of traffic they're seeing.
Logging refers to the process of capturing and storing events or actions that occur within a system,
application, or network infrastructure. These events can include user actions, system events, errors,
warnings, and other relevant information that can be used for troubleshooting, debugging, and
analysis purposes. Logs are typically stored in a centralized location, such as a log server, for easy
access and management.
Auditing, on the other hand, refers to the process of examining and analyzing the logs to determine if
any unauthorized or inappropriate activities have occurred. Auditing involves reviewing the logs to
identify any anomalies or patterns that may indicate a security breach or other types of malicious
activity. Auditing is an important part of ensuring the security and integrity of systems and networks
and can help detect and prevent security incidents before they cause significant damage.
Both logging and auditing are essential components of an effective security strategy. Logging provides
a detailed record of system and user activity, while auditing enables security teams to analyze this
data to identify potential security risks and take appropriate action to mitigate them. In summary,
logging is the act of recording events, and auditing is the act of analyzing those records to ensure
system security and integrity.
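A simple audit pass might count authentication failures per source IP to flag possible brute-force attempts. The Python sketch below assumes sshd-style log lines; the addresses and threshold are illustrative:

```python
import re
from collections import Counter

def failed_logins(log_lines, threshold=3):
    """Flag source IPs with repeated authentication failures."""
    pattern = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
    counts = Counter(m.group(1)
                     for line in log_lines
                     if (m := pattern.search(line)))
    return {ip: n for ip, n in counts.items() if n >= threshold}

logs = [
    "Failed password for root from 198.51.100.7 port 4321",
    "Failed password for admin from 198.51.100.7 port 4322",
    "Failed password for root from 198.51.100.7 port 4323",
    "Accepted password for alice from 10.0.0.5 port 50000",
]
print(failed_logins(logs))   # {'198.51.100.7': 3}
```

This is the logging/auditing split in miniature: the log lines are the record, and the counting pass is the audit that turns them into an actionable finding.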
Centralized logging is a technique used in computer systems and networks to collect and store log
data from various sources into a single location or repository. The purpose of centralized logging is to
provide a unified view of system and network activity, making it easier to monitor and analyze system
events for security and operational purposes.
In a centralized logging architecture, log data is typically collected from various sources, such as
servers, network devices, and applications, and forwarded to a central logging server or database. The
central logging server aggregates and stores the log data, providing a single point of access for log
analysis and monitoring.
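Conceptually, the central server's job is to merge many time-ordered streams into one unified view. A toy Python sketch of that aggregation step (the sources and messages are made up; real deployments use syslog forwarding, the ELK stack, or similar tooling):

```python
import heapq

# Per-source logs, each already time-ordered: (timestamp, source, message).
web = [(1, "web01", "GET /login"), (4, "web01", "GET /admin 403")]
fw = [(2, "fw01", "DROP 203.0.113.9:22")]
auth = [(3, "auth01", "failed login for root")]

# The central logging server merges every stream into one timeline,
# so events from different devices can be correlated.
central = list(heapq.merge(web, fw, auth))
for ts, source, msg in central:
    print(ts, source, msg)
```

Seeing the firewall drop, the web 403, and the failed login on one timeline is exactly the cross-device correlation that makes centralized logging valuable for incident investigation.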
Centralized logging provides several benefits:
1. Improved security: Centralized logging allows for real-time monitoring of system and network
activity, making it easier to detect and respond to security threats. It also enables security analysts to
identify and investigate security incidents more efficiently.
2. Operational insights: By aggregating log data from various sources, centralized logging can provide
insights into system and network performance, allowing for proactive identification of issues before
they become critical.
3. Compliance: Many compliance regulations require organizations to retain and analyze log data.
Centralized logging can help organizations comply with these regulations by providing a single source
for log data collection, storage, and analysis.
However, centralized logging also has some potential drawbacks, such as increased network traffic
and storage requirements, as well as the need for proper management and monitoring of the central
logging server.
To effectively implement centralized logging, organizations must carefully plan and design their
logging architecture, selecting appropriate log sources, log formats, and storage options. They must
also consider the security implications of centralized logging, such as protecting log data from
unauthorized access or tampering. Overall, centralized logging is a valuable technique for improving
system and network security, as well as operational efficiency and compliance.
Defender for Endpoint: Protects network endpoints including servers, workstations, mobile devices,
and IoT devices. Provides preventative safeguards, breach detections, automated analyses, and threat
response services.
Defender for Office 365: Protects Microsoft 365 (formerly Office 365), including Exchange, Outlook,
files, and attachments. Guards against malicious threats entering from email messages, links (URLs),
and collaboration tools.
Defender for Identity: Protects user identities and credentials. Detects, identifies, and investigates
advanced threats, compromised identities, and malicious actions performed using stolen user
identities or by internal threats.
Azure Active Directory Identity Protection: Protects cloud-based identities in Azure by automating
detection and resolutions for identity risks.
Defender for Cloud Apps: Protects cloud applications by providing deep visibility searches, robust data
controls, and advanced threat protection.
Devices: See alerts, breach activity, and other threats on devices connected to the organization’s
network.
Apps: Observe how cloud apps are being used in your organization.
Alerts: View alerts compiled from across the Microsoft 365 suite.
Advanced hunting: Scan for suspicious files, malware, and risky activities.
Secure score: Get a calculated score for your security configuration and recommendations on how to
improve your score.
Learning hub: Easily access Microsoft 365 security tutorials and other learning materials.
Microsoft 365 Defender aggregates and organizes this monitoring data to provide IT Support
professionals details on where attacks began, which malicious tactics were used, the scope of the
attacks, and other related incident information.
A phishing attempt enters through email: An employee in an organization receives an email from a
business that appears to be legitimate, like a bank. The email might claim that there is a problem with
the employee’s account and that they must click on a given link to resolve the problem. However, the
phishing email actually contains a link to a malicious website that a cybercriminal disguised to look
like a real bank. If the employee clicks on the link to view the website, the site requests that the user
enter their account credentials or other sensitive information. This information is then transmitted to
the cybercriminal.
Microsoft Defender for Office 365 detects the emailed phishing scam by monitoring Exchange and
Outlook. Both the employee and the IT Support team are alerted about this attempted phishing
attack.
Malware enters through social media: An employee clicks on an enticing link posted on their favorite
social media app. The link triggers an automatic download of a malware file to the employee’s laptop.
22
Microsoft Defender for Endpoint monitors the employee’s laptop for suspicious malware signatures.
Upon detecting the malware, Defender for Endpoint alerts the employee and the organization’s IT
Support team about the malware and discloses its endpoint location.
A cybercriminal intercepts an employee’s work login credentials: An employee accesses their work
account using their laptop and an open Wi-Fi access point in a busy coffee shop. A cybercriminal in
the same coffee shop intercepts and collects unprotected information flowing through the open Wi-Fi
access point. The cybercriminal obtains the employee’s user account credentials and uses them to
hijack the employee’s work account. The cybercriminal then begins a malicious attack on the
employer’s network.
Microsoft Defender for Identity can detect the sudden change in activity on the employee’s user
account. Defender for Identity alerts the employee and the IT Support team about the compromised
user identity.
A virus enters a cloud drive through a file upload: An employee unknowingly uploads a file that is
infected with a virus to their work cloud storage drive. When the employee opens the file from the
cloud drive, the virus is activated and begins changing the security settings on the other files in the
employee’s cloud drive.
Microsoft Defender for Cloud Apps detects the unusual pattern of activity and alerts the employee
and IT Support team of the suspicious activity in the cloud account.
4. Anti-malware protection
Anti-malware defenses are a core part of any company’s security model in this day and age.
Today, the internet is full of bots, viruses, worms, and other automated attacks.
While modern operating systems have reduced this threat vector by having basic firewalls enabled by
default, there is still a huge amount of attack traffic on the internet.
Anti-virus software is signature-based. This means it has a database of signatures that identify
known malware, like the unique file hash of a malicious binary or a file associated with an infection.
A signature could also be the network traffic characteristics that malware uses to communicate with
a command and control server.
Antivirus software will monitor and analyze activity on the system, like new files being created or
modified, in order to watch for any behavior that matches a known malware signature.
If it detects activity that matches a signature, then depending on the signature type, it will attempt
to block the malware from harming the system.
But some signatures might only be able to detect the malware after the infection has occurred.
In that case, it may attempt to quarantine the infected files, or it will just log and alert on the
detection event. At a high level, this is how most antivirus products work.
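The signature-matching flow described above can be sketched in a few lines. This is a toy model only: the "database" here holds a single SHA-256 file hash (the well-known hash of an empty file, used as a stand-in for a real malware signature), whereas real products ship millions of signatures, including behavioral and network-based ones.

```python
import hashlib
from pathlib import Path

# Toy "signature database": SHA-256 hashes of known-bad files.
KNOWN_BAD_HASHES = {
    # Placeholder: the SHA-256 of an empty file, standing in for a real signature.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def scan_file(path: Path) -> bool:
    """Return True if the file's hash matches a known-malware signature."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_BAD_HASHES
```

A real engine would act on a match — blocking, quarantining, or alerting — rather than just returning a boolean, and would hook file-creation events instead of scanning on demand.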
Signature-based antivirus software has two main limitations:
1) The first is that it depends on antivirus signatures distributed by the antivirus software vendor.
The effectiveness of antivirus software depends on timely updates of virus definitions and
signatures.
2) The second is that it depends on the antivirus vendor discovering new malware and writing
new signatures for newly discovered threats. Until the vendor is able to write, publish, and
disseminate new signatures, your antivirus software can't protect you from these emerging
threats.
Antivirus software, which is designed to protect systems, actually represents an additional attack
surface that attackers can exploit.
It is true that antivirus software, like any other software installed on a system, can potentially be
exploited by attackers to gain unauthorized access or perform malicious actions on the system. This is
because antivirus software typically operates with high privileges and has access to a wide range of
system resources, making it an attractive target for attackers.
Moreover, antivirus software often relies on complex and sophisticated detection algorithms to
identify and block threats, and these can also be targeted by attackers. For instance, attackers can
attempt to exploit vulnerabilities in the antivirus software itself, trick it into ignoring or whitelisting
malicious files, or bypass its detection mechanisms.
That said, it is important to note that the benefits of using antivirus software typically outweigh the
potential risks. Antivirus software can effectively detect and prevent a wide range of malware and
other threats from infecting and damaging a system, which can help mitigate the risk of data
breaches, theft, and other malicious activities. Additionally, most antivirus vendors regularly release
updates and patches to address any identified vulnerabilities in their software and improve its overall
security.
Overall, while it is true that antivirus software can potentially represent an additional attack surface
for attackers, the benefits of using such software to protect systems usually outweigh the risks. It is
important for users to regularly update their antivirus software and ensure that it is configured to
provide maximum protection while minimizing potential risks.
And remember, our defense in depth concept involves multiple layers of protection. Antivirus
software is just one piece of our anti-malware defenses.
If antivirus can't protect us from the threats we don't know about, how do we protect against the
unknown that's out there? Well, antivirus operates on a blacklist model, checking against a list of
known bad things and blocking what gets matched.
There's a class of anti-malware software that does the opposite. Binary whitelisting software operates
off a whitelist: a list of known good and trusted software, and only things that are on the list are
permitted to run. Everything else is blocked.
I should call out that this typically only applies to executable binaries, not arbitrary files like PDF
documents or text files.
This would naturally defend against any unknown threats, but at the cost of convenience.
Now, imagine if you had to get approval before you could download and install any new software;
that would be really annoying. It's for this reason that binary whitelisting software can trust software
using a couple of different mechanisms.
Binary whitelisting software can use both cryptographic hashes and software signing certificates as
trust mechanisms to whitelist software and prevent unauthorized or malicious code from executing
on a system.
A software signing certificate is a digital certificate that is issued to a software publisher by a trusted
certificate authority (CA). The certificate contains information about the publisher, such as their name
and contact information, as well as a public key that can be used to verify the digital signature of the
code.
When a software publisher signs a binary with their software signing certificate, they are essentially
vouching for the authenticity and integrity of the code. The digital signature can be verified by the
binary whitelisting software, which can then allow the code to run if it is deemed trustworthy.
Using software signing certificates as a trust mechanism can provide an additional layer of security
beyond just verifying the cryptographic hash of a binary. However, it is important to note that
software signing certificates can potentially be compromised or misused if the private key associated
with the certificate falls into the wrong hands. Therefore, it is important for software publishers to
properly secure their private keys and take other measures to prevent unauthorized access to their
signing infrastructure.
5. Disk Encryption
FDE is an important factor in a defense in depth security model. It provides protection from some
physical forms of attack.
Full-disk encryption (FDE)
Full Disk Encryption (FDE) is a security technique that involves encrypting all of the data on a storage
device, including the operating system, applications, and user data. FDE ensures that all the data on
the disk is protected and inaccessible to unauthorized users, even if the device is lost, stolen, or
accessed by an attacker.
With FDE, the encryption process is performed at the disk level, meaning that all the data on the disk
is encrypted, not just individual files or folders. This provides a high level of security and protection
against attacks that attempt to access or steal data from the device.
FDE typically uses symmetric encryption, where the same key is used for both encryption and
decryption. The encryption key is typically derived from a user password or passphrase, meaning that
the key is only accessible to authorized users who know the password.
During the boot process, critical boot files are decrypted to allow the operating system to start. This
requires the use of the encryption key, which is typically entered by the user at boot time. Once the
system has booted, all the data on the disk remains encrypted until it is accessed by an authorized
user or application.
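The key-derivation step described above can be sketched with the standard library's PBKDF2 implementation. This is a simplified illustration: real FDE schemes such as BitLocker or LUKS add key slots, recovery keys, and often TPM integration, and the passphrase and iteration count below are illustrative values only.

```python
import hashlib
import os

# Derive a 256-bit disk-encryption key from a user passphrase.
passphrase = b"correct horse battery staple"  # entered by the user at boot
salt = os.urandom(16)                         # stored in the unencrypted disk header
key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 600_000, dklen=32)

# The same passphrase and salt always reproduce the same key, which is what
# lets the system unlock the disk at every boot.
assert key == hashlib.pbkdf2_hmac("sha256", passphrase, salt, 600_000, dklen=32)
print(len(key))  # → 32 (a 32-byte key, suitable for a cipher such as AES-256)
```

The salt and high iteration count make it expensive for an attacker who steals the disk to brute-force the passphrase offline.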
FDE is commonly used in environments where data security is a high priority, such as in government,
financial, and healthcare organizations. It is also increasingly being used on personal devices, such as
laptops and smartphones, to protect personal data and sensitive information.
Overall, FDE is a powerful security technique that provides strong protection for data stored on a disk,
ensuring that it remains inaccessible to unauthorized users or attackers.
Systems with their entire hard drives encrypted are resilient against data theft. They prevent an
attacker from stealing potentially confidential information from a hard drive that’s been stolen or lost.
Without also knowing the encryption password or having access to the encryption key, the data on the
hard drive is just meaningless gibberish.
This is a very important security mechanism to deploy for mobile devices like laptops, cell phones,
and tablets.
But it’s also recommended for desktops and servers too, since disk encryption provides not only
confidentiality but also integrity.
This means that an attacker with physical access to a system can’t replace system files with malicious
ones or install malware.
Having the disk fully encrypted protects against data theft and unauthorized tampering, even if an
attacker has physical access to the disk.
There are first-party full-disk encryption (FDE) solutions from Microsoft and Apple, called BitLocker and
FileVault 2, respectively.
There are also a bunch of third-party and open-source solutions. On Linux, the dm-crypt package is
very popular.
There are also offerings from PGP, VeraCrypt and a host of others.
4.) A good defense in depth strategy would involve deploying which firewalls?
Defense in depth involves multiple layers of overlapping security. So, deploying both host- and
network-based firewalls is recommended.
B. APPLICATION HARDENING
1. Software Patch Management
As an IT Support Specialist, it’s critical that you make sure that you install software updates and
security patches in a timely way, in order to defend your company’s systems and networks.
Software updates don’t just improve software products by adding new features and improving
performance and stability; they also address security vulnerabilities.
Patching isn’t just necessary for software, but also for operating systems and the firmware that runs
on infrastructure devices.
Every device, from routers and switches to phones and even printers, has code running on it that
might have software bugs that could lead to security vulnerabilities.
Operating system vendors usually push security-related patches pretty quickly when an issue is
discovered. Because of the security implications, they will usually release security fixes out of cycle
from typical OS upgrades to ensure a timely fix.
But for embedded devices like network equipment or printers, this might not be typical.
Critical infrastructure devices should be approached carefully when you apply updates.
There’s always the risk that a software update will introduce a new bug that might affect the
functionality of the device, or that the update process itself will go wrong and cause an outage.
To minimize the risk of introducing new issues through software updates, it is important to thoroughly
test updates in a controlled environment before deploying them in the production environment. This
can involve creating a test environment that closely mirrors the production environment, and testing
the updates in that environment to identify any issues before deploying the updates to the live
systems.
It is also important to have a well-designed and tested rollback plan in case an update causes issues in
the live environment. This can involve having backups of the system and a plan to quickly revert to
the previous version of the software if necessary.
2. Browser Hardening
In this reading, you will learn how to harden browsers for enhanced internet security. The methods
presented include evaluating sources for trustworthiness, SSL certificates, password managers, and
browser security best practices. Techniques for browser hardening are important components in
enterprise-level IT security policies. These techniques can also be used to improve internet security
for organizations of any size and for individual users.
To guard against threats like this, there are checks you can perform to evaluate websites:
Use antivirus and anti-malware software and browser extensions. Run antivirus and anti-
malware scans regularly and scan downloaded files. Ensure antivirus and anti-malware browser
extensions are enabled when surfing the web.
Check for SSL certificates. See the “Secure connections and sites” section below.
Ensure the URL displayed in the address bar shows the correct domain name. For example,
Google websites use the Google.com domain name.
Search for negative reviews of the website from trusted sources. Be wary of websites that have
few to no reviews. They may not have been active long enough to build a bad reputation.
Cybercriminals will create new websites when they get too many negative reviews on their older
sites.
Don’t automatically trust website links provided by people or organizations you trust. They
may not be aware that they are passing along links to malicious websites and files.
Use hashing algorithms for downloaded files. Compare the developer-provided hash value of
the original file to the hash value of the downloaded copy to ensure the two values match.
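The hash comparison in the last item above can be sketched as a small helper. The expected digest would come from the developer's download page; `hmac.compare_digest` is used for the comparison, which avoids leaking information through timing differences.

```python
import hashlib
import hmac
from pathlib import Path

def verify_download(path: Path, expected_sha256: str) -> bool:
    """Compare a downloaded file's SHA-256 digest against the value the
    developer published. Returns True only if the two digests match."""
    actual = hashlib.sha256(path.read_bytes()).hexdigest()
    return hmac.compare_digest(actual, expected_sha256.lower())
```

If the function returns False, the file was corrupted in transit or tampered with, and it should not be opened or executed.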
Secure connections and sites
Secure Socket Layer (SSL) certificates are issued by trusted certificate authorities (CA), such as
DigiCert. An SSL certificate indicates that any data submitted through a website will be encrypted. A
website with a valid SSL certificate has been inspected and verified by the CA. You can find SSL
certificates by performing the following steps:
1. Check the URL in the address bar. The URL should begin with the https:// protocol. If you see
http:// without the “s”, then the website is not secure.
2. Click on the closed padlock icon in the address bar to the left of the URL. An open lock indicates
that the website is not secure.
3. A pop-up menu should open. Websites with SSL certificates will have a menu option labeled
“Connection is secure.” Click on this menu item.
4. A new pop-up menu will appear with a link to check the certificate information. The layout and
wording of this pop-up will vary depending on which browser you are using. When you review
the certificate, look for the following items:
a) The domain it was issued to - This name should match the website domain name.
b) The expiration date - The certificate should not have passed its expiration date.
Note that cybercriminals can obtain SSL certificates too. So, this is not a guarantee that the site is
safe. CAs also vary in how thorough they are in their inspections.
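The manual checks above can also be automated. The sketch below inspects a certificate dictionary in the shape returned by Python's `ssl.SSLSocket.getpeercert()`; the sample values are fabricated for illustration, and the name check is deliberately simplified (it does not handle wildcard certificates).

```python
import ssl
import time

def check_certificate(cert: dict, hostname: str) -> list[str]:
    """Return a list of problems found with a parsed certificate dict
    (same shape as ssl.SSLSocket.getpeercert() returns)."""
    problems = []
    # 1. Expiry: notAfter is a string like "Jun  1 12:00:00 2030 GMT".
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    if expires < time.time():
        problems.append("certificate has expired")
    # 2. Domain: the subject CN or a subjectAltName entry should match.
    names = {value for field in cert.get("subject", ())
             for key, value in field if key == "commonName"}
    names.update(v for k, v in cert.get("subjectAltName", ()) if k == "DNS")
    if hostname not in names:
        problems.append(f"certificate not issued to {hostname}")
    return problems

# A hand-written certificate dict standing in for a real getpeercert() result:
sample = {
    "subject": ((("commonName", "example.com"),),),
    "subjectAltName": (("DNS", "example.com"), ("DNS", "www.example.com")),
    "notAfter": "Jun  1 12:00:00 2030 GMT",
}
print(check_certificate(sample, "example.com"))  # → []
print(check_certificate(sample, "evil.test"))    # → ['certificate not issued to evil.test']
```

In practice, browsers and TLS libraries perform these checks (plus chain and revocation validation) automatically; the sketch just makes the expiry and domain checks from the list above explicit.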
Password managers
Password managers are software programs that encrypt and retain passwords in secure cloud storage
or locally on users’ personal computing devices. There are a wide variety of activities users perform
online that require unique and complex passwords, such as banking, managing health records, filing
taxes, and more. It can be difficult for users to keep track of so many different logins and passwords.
Fortunately, password managers can help.
However, password managers also have some drawbacks. They:
Can expose all of the user’s account credentials if a cybercriminal obtains the master password
to the password manager;
Can be very difficult for a user to regain access to if the master password is lost or forgotten;
Require the user to learn a new method for logging in to their various accounts in order to
retrieve passwords from the password manager software; and
Often require a fee or subscription for password management services.
A few of the top brands for password manager applications include Bitwarden, LastPass, and
1Password. Please see the Resource section at the end of this reading for more information.
Browser settings
Browser settings can be configured for additional safety measures. Some additional options for
hardening browsers include:
Clear browsing data and cache: Clear your web browser's cache, cookies, and history.
Key takeaways
You learned about multiple steps you can take to harden a browser and protect your online security:
3. Application Policy
Application software can represent a pretty large attack surface, so it’s important to have
application policies in place.
Browser extensions or add-ons.
Extensions that require full access to websites visited can be risky, since the extension developer
has the power to modify pages visited.
2. A core authentication server is exposed to the internet and is connected to sensitive services. What
are some measures you can take to secure the server and prevent it from getting compromised by a
hacker? Select all that apply.
Access Control Lists (ACLs)
Designate as a bastion host
Secure firewall
Secure Firewall (A secure firewall configuration should restrict connections between untrusted
networks and systems)
Bastion Hosts. (Bastion hosts are specially hardened and minimized in terms of what is
permitted to run on them. Typically, bastion hosts are expected to be exposed to the internet,
so special attention is paid to hardening and locking them down to minimize the chances of
compromise.)
Access Control Lists (ACLs). Secure configurations, such as ACLs, could be implemented on
specific bastion hosts to secure sensitive services without degrading the convenience of the
entire organization.
3. When looking at aggregated logs, you are seeing a large percentage of Windows hosts connecting
to an Internet Protocol (IP) address outside the network in a foreign country. Why might this be worth
investigating more closely?
It can indicate a malware infection
It can indicate a malware infection. When looking at aggregated logs, you should pay attention
to patterns and correlations between traffic. For example, if you are seeing a large percentage
of hosts all connecting to a specific address outside your network, that might be worth
investigating more closely, as it could indicate a malware infection.
4. Which of these plays an important role in keeping attack traffic off your systems and helps to
protect users? Select all that apply.
Antivirus software
Antimalware measures
5. What does full-disk encryption protect against? Select all that apply.
Data theft
Data tampering
5. A hacker exploited a bug in the software and triggered unintended behavior which led to the
system being compromised by running vulnerable software. Which of these helps to fix these types of
vulnerabilities?
Software patch management
Software Patch Management. Vulnerabilities can be fixed through software patches and
updates which correct the bugs that attackers exploit.
6. Besides software, what other things will also need patches? Select all that apply.
Infrastructure firmware
Operating systems
7. What is the best way to avoid personal, one-off software installation requests?
A clear application whitelist policy
A clear application whitelist policy can be an effective way to avoid personal, one-off software
installation requests. An application whitelist policy is a list of approved software that employees
are allowed to install on their computers. By implementing a whitelist policy, you can prevent
employees from installing unapproved software on their machines.
A clear application whitelist policy should include a list of approved software, as well as guidelines
for requesting new software to be added to the whitelist. This can help ensure that employees have
access to the software they need to do their jobs, while also maintaining security and compliance
standards.
One of the benefits of an application whitelist policy is that it can be automated using software
deployment tools. This can help streamline the process of installing approved software on
employee machines, while also ensuring that unapproved software is not installed.
However, it is important to note that an application whitelist policy should be regularly reviewed
and updated to ensure that it remains relevant and effective. New software may be released that
employees need to use, and existing software may become outdated or pose security risks. Regular
reviews can help ensure that the whitelist policy continues to meet the needs of the organization.
Securely storing a recovery or backup encryption key is referred to as key escrow. Key escrow is
the process of keeping a copy of an encryption key in a secure location, separate from the system or
device that uses the key, to ensure that the key can be recovered if it is lost or becomes
inaccessible.
The purpose of key escrow is to provide a way to recover encrypted data if the original encryption
key is lost or damaged. It is particularly important for organizations that need to maintain the
confidentiality and integrity of sensitive information, such as financial institutions or government
agencies.
Key escrow can be implemented in different ways, depending on the requirements of the
organization. One common approach is to use a third-party service provider to securely store the
encryption key. The provider would store the key in a secure data center with strong physical and
logical security controls, and only release the key to authorized individuals or systems with proper
authentication and authorization.
Another approach is to use a trusted employee or group of employees within the organization to
store and manage the encryption keys. This approach requires a high degree of trust and
accountability, as the individuals responsible for the keys must ensure that they are stored securely
and not misused.
Overall, key escrow is an important security measure that helps organizations protect sensitive data
and ensure that it can be recovered in the event of a disaster or system failure.
The Payment Card Industry Data Security Standard (PCI DSS) has six primary objectives, each
with a set of requirements to help organizations protect cardholder data:
1. Build and Maintain a Secure Network and Systems: This objective focuses on ensuring that an
organization's network and systems are secure and protected from unauthorized access. This
includes requirements such as installing and maintaining firewalls and anti-virus software,
encrypting data in transit, and restricting access to cardholder data.
2. Protect Cardholder Data: This objective focuses on protecting cardholder data wherever it is
stored, processed, or transmitted. This includes requirements such as encrypting cardholder
data, masking cardholder data when displayed, and limiting access to cardholder data to
authorized personnel.
3. Maintain a Vulnerability Management Program: This objective focuses on protecting systems
against malware and keeping software secure. This includes requirements such as using and
regularly updating anti-virus software and developing and maintaining secure systems and
applications.
4. Implement Strong Access Control Measures: This objective focuses on ensuring that access to
cardholder data is limited to authorized personnel only. This includes requirements such as
assigning unique user IDs to each person with access, implementing two-factor authentication,
and regularly reviewing access rights and permissions.
5. Regularly Monitor and Test Networks: This objective focuses on regularly monitoring an
organization's systems and networks to detect and respond to security incidents. This includes
requirements such as regularly monitoring access logs, conducting penetration testing, and
implementing intrusion detection and prevention systems.
6. Maintain an Information Security Policy: This objective focuses on maintaining and enforcing a
comprehensive information security policy that addresses all aspects of the organization's
security program. This includes requirements such as implementing and maintaining a security
awareness program, conducting regular security training for employees, and regularly reviewing
and updating the security policy as needed.
Measuring and assessing risk
Security is all about determining risks or exposure, understanding the likelihood of attacks, and
designing defenses around these risks to minimize the impact of an attack.
Security risk assessment starts with threat modeling. First, we identify threats to our systems,
then we assign them priorities that correspond to severity and probability. We do this by
brainstorming from the perspective of an outside attacker, putting ourselves in a hacker's shoes.
It helps to start by figuring out what high-value targets an attacker may want to go after. From
there, you can start to look at possible attack vectors that could be used to gain access to high-
value assets.
High-value data usually includes information like usernames and passwords; really, any kind of
user data is considered high-value.
Another part of risk measurement is understanding what vulnerabilities are on your systems and
network. One way to find these out is to perform regular vulnerability scanning.
Identifying vulnerabilities: Vulnerability scanners scan the target system or network and
compare the results to a database of known vulnerabilities. The scanners identify vulnerabilities
such as unpatched software, misconfigured systems, and weak passwords.
Reporting vulnerabilities: Vulnerability scanners generate reports that detail the identified
vulnerabilities and their severity. The reports often include recommendations for remediation or
mitigation of the vulnerabilities.
Overall, vulnerability scanners are an important tool for organizations to use as part of their
security program. They help identify potential vulnerabilities and prioritize them for remediation
or mitigation, which can help organizations prevent security breaches and protect sensitive data.
Discovery: The first step is to identify the target system or network to be scanned. The scanner
will use various techniques such as IP scanning, DNS lookup, and port scanning to identify the
target system.
Enumeration: Once the target system or network is identified, the scanner will begin
enumerating the system or network to gather information such as open ports, running services,
and installed applications.
Vulnerability Detection: After enumerating the system or network, the scanner will compare the
gathered information to a database of known vulnerabilities to identify potential security
weaknesses. The scanner will use various techniques to test the system for vulnerabilities such
as sending specific packets, probing for specific configurations, and brute-force attacks.
Vulnerability Assessment: Once the scanner has identified potential vulnerabilities, it will assess
the severity and impact of each vulnerability. This includes assigning a risk score or severity level
based on the potential impact on the system or network.
Reporting: Finally, the scanner will generate a report detailing the vulnerabilities identified,
including their severity, impact, and recommended actions for remediation or mitigation.
Overall, vulnerability scanners help organizations identify potential security weaknesses before
they can be exploited by attackers. They provide a comprehensive way to assess the security
posture of systems and networks and prioritize remediation efforts to mitigate the most critical
vulnerabilities.
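The discovery and enumeration steps above can be reduced to their simplest form: a TCP connect scan that tries to open a connection to each port and records the ones that accept. This is only a sketch of the first phase of a scan; real scanners such as Nmap or OpenVAS add service fingerprinting, vulnerability databases, and severity scoring. Only scan systems you are authorized to test.

```python
import socket

def scan_ports(host: str, ports, timeout: float = 0.5):
    """TCP connect scan: return the subset of `ports` on `host` that
    accepted a connection within `timeout` seconds."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the handshake succeeded
                open_ports.append(port)
    return open_ports
```

For example, `scan_ports("127.0.0.1", range(1, 1025))` would list listening services on the local machine, which a scanner would then enumerate further and compare against its vulnerability database.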
But vulnerability scanning isn’t the only way to put your defenses to the test.
Conducting regular penetration tests is also really encouraged to test your defenses even more.
These tests will also ensure detection and alerting systems are working properly.
Penetration Testing - The practice of attempting to break into a system or network to verify the
defenses in place. This way, you can test your systems to make sure they protect you like they’re
supposed to. The resulting penetration test reports will also show you where weak points or blind
spots exist. These tests help improve defenses and guide future security projects.
Privacy Policy
Privacy is not only a defense against external threats; it also protects data against misuse by
employees.
Both privacy and data access policies are important for guiding and informing people on how to
maintain security while handling sensitive data.
Auditing data access logs is super important. It helps us ensure that sensitive data is accessed
only by people who are authorized to access it, and that they use it for the right reasons.
It’s a good practice to apply the principle of least privilege here, by not allowing access to this
type of data by default. If someone needs access, they should first make an access request.
Any access that doesn’t have a corresponding request should be flagged as a high-priority
potential breach that needs to be investigated as soon as possible.
Data handling policies should cover the details of how different data is classified.
Once different data classes are defined, you should create guidelines around how to handle
these different types of data. If something is considered sensitive or confidential, you’d probably
have stipulations that this data shouldn’t be stored in media that’s easily lost or stolen, like USB
sticks or portable hard drives. If you really have no choice, store it in encrypted media.
1. What are some examples of security goals that you may have for an organization? Check all
that apply.
To prevent unauthorized access to customer credentials
To protect customer data from unauthorized access
2. Which of these would you consider high-value targets for a potential attacker? Check all that
apply.
Authentication databases
Customer credit card information
4. What are some restrictions that should apply to sensitive and confidential data? Check all that
apply.
It can be stored on encrypted media only.
Sensitive data should be treated with care so that an unauthorized third-party doesn't gain
access. Ensuring this data is encrypted is an effective way to safeguard against unauthorized
access.
Misuse or abuse of sensitive data
Data Destruction
Data destruction is removing or destroying data stored on electronic devices so that an
operating system or application cannot read it. Data destruction is required when a company no
longer needs a device, when there are unused or multiple copies of data, or when you are
required to destroy specific data.
There are three categories of data destruction methods: recycling, physical destruction, and
third-party destruction. This reading will introduce the data destruction methods and how to
decide which method to use.
Recycling
Recycling includes methods that allow for device reuse after data destruction. This option is
recommended if you hope to reuse devices internally, sell surplus equipment, or your devices
are on loan and are due to be returned. Standard recycling methods include the following:
Erasing/wiping: cleans all data off a device’s hard drive by overwriting it. Erasing or wiping data
can be done manually or with data-destruction software. This method is only practical when you
have a few devices that need data destroyed, as it takes a long time. Note that it may take
multiple passes to wipe highly sensitive data completely.
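The overwrite idea behind erasing/wiping can be illustrated with a small sketch. Real wiping tools operate on whole drives below the filesystem; this example only overwrites a single file, and the pass count is an illustrative assumption.

```python
# Hypothetical sketch of wiping by overwriting: replace a file's
# contents with zeros over multiple passes, then delete the file.
import os

def wipe_file(path, passes=3):
    """Overwrite a file's contents with zeros, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)
            f.flush()
            os.fsync(f.fileno())  # push the overwrite out to disk
    os.remove(path)
```

Note this doesn't account for filesystem journaling or wear-leveling on SSDs, which is one reason dedicated data-destruction software exists.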
Low-level formatting: erases all data written on the hard drive by replacing it with zeros. Low-
level reformatting can be done using a tool such as HDDGURU on a PC or the Disk Utility function
on a Mac.
Standard formatting: erases the path to the data and not the data itself. Both PCs and Macs have
internal tools that can perform a standard format, Disk Management on a PC or Disk Utility on a
Mac. Note that standard formatting does not remove the data from the device, enabling data
rediscovery using software.
Physical destruction
Physical destruction includes any method that physically destroys a device to make it difficult to
retrieve data from it. You should only use physical destruction if you do not need to reuse the
device. However, only completely destroying the device ensures the destruction of all data with
physical methods. Physical destruction methods include the following:
Drilling holes directly into the device destroys the data in the sections where the holes are.
However, individuals can recover data from the areas that are still intact.
Shredding includes the physical shredding of hard drives, memory cards, CDs, DVDs, and other
electronic storage devices. Shredding reduces the potential for recovery. Shredding requires
special equipment or outsourcing to another facility.
Degaussing uses a high-powered magnet to destroy the data on the device. This method
effectively destroys large data storage devices and renders the hard drive unusable. As electronic
technology changes, this method may become obsolete.
Incinerating destroys data by burning the device. Most companies do not have an incinerator on-
site. Devices need to be transported to a facility for incineration. Due to this, devices can be lost
or stolen in transit.
Outsourcing
Outsourcing means using a third-party specializing in data destruction to complete the physical
or recycling process. This option appeals to companies that do not have the staff or knowledge
to complete the destruction themselves. Once a vendor has completed the task, they issue a
certificate of destruction/recycling.
Key Takeaways
Data destruction makes data unreadable to an operating system or application. You should
destroy data on devices a company no longer uses, unused or duplicated copies of data, and
data you are required to destroy. Data destruction methods include:
Outsourcing: using an external company specializing in data destruction to handle the process
USERS
User habits
You can build the world’s best security systems, but they won’t protect you if users practice
unsafe security habits.
You should never upload confidential information onto a third-party service that hasn’t been
evaluated by your company.
Password policy: require passwords at least 20 characters long, change them every 3 months,
and don’t reuse passwords.
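The length and reuse rules above can be sketched as a simple check. The 20-character minimum mirrors the note's policy; the function name and interface are assumptions, and a real system would compare password hashes rather than plaintext history.

```python
# Hypothetical sketch of the password policy described above:
# minimum length plus a no-reuse check against previous passwords.
def password_allowed(new_password, previous_passwords, min_length=20):
    """Check a candidate password against a simple policy."""
    if len(new_password) < min_length:
        return False  # too short for the policy
    if new_password in previous_passwords:
        return False  # reuse is not allowed
    return True

print(password_allowed("correct-horse-battery-staple", []))  # True
print(password_allowed("short", []))                         # False
```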
A much greater risk in the workplace that users should be educated about is credential theft from
phishing emails.
Having two-factor authentication helps protect against it.
If someone entered their password into a phishing website, or even suspects they did, it’s
important to change their password as soon as possible.
You can also use tools like Password Alert, a Chrome extension from Google that can detect
when you enter your password into a site that’s not a Google page.
If a user writes their password on a Post-it note, sticks it to their laptop, and then leaves the
laptop unattended at a cafe, anyone who picks it up can use that password.
Third Party Security
Sometimes you need to rely on third-party solutions or service providers because you might not
be able to do everything in-house.
If they have subpar (weak) security, you are undermining your security defenses by potentially
opening a new avenue of attack.
It’s important to hire trustworthy and reputable vendors whenever you can.
This involves conducting a vendor risk review or security assessment.
In typical vendor security assessments, you ask vendors to complete a questionnaire that covers
different aspects of their security policies, procedures and defenses.
The questionnaire is designed to determine whether or not they’ve implemented good security
designs in their organization.
For software services or hardware vendors, you might also ask to test the software or hardware.
That way you can evaluate it for potential security vulnerabilities or concerns before deciding to
contract their services.
It’s important to understand how well protected your business partners are before deciding to
work with them. If they have poor security practices, your organization’s security could be at
risk. A compromise of their infrastructure could lead to a breach of your systems.
Additional monitoring would also be recommended for this third-party device, since it represents
a new potential attack surface in your network. If the vendor lets you, evaluate the hardware in
a lab environment first. There you can run in-depth vulnerability assessments and penetration
testing of the hardware, and make sure there aren’t any obvious vulnerabilities in the product.
Report your findings to the vendor and ask that they address any issues you discover.
Security Training
It’s impossible to have good security practices at your company if employees and users haven’t
received good training and resources. Good training also builds a healthy company culture and
overall attitude towards security.
A working environment that encourages people to speak up when they feel something isn’t right
is critical. It encourages them to do the right thing.
Helping others keep security in mind will help decrease the security burdens you’ll have as an IT
support specialist.
It will also make the overall security of the organization better.
Always lock your screen when stepping away from your machine, because anyone with access to
the machine can impersonate you and get access to any resources you’re logged into.
INCIDENT HANDLING
Incident reporting and analysis
We try our best to protect our systems and networks, but it’s pretty likely that some sort of
incident will happen.
Regardless of the nature of the incident, proper incident handling is important for understanding
what exactly happened, how it happened, and how to prevent it from happening again.
The very first step of handling an incident is to detect it in the first place.
The next step is to analyze it and determine the effects and scope of damage.
Was it a data leak? Or information disclosure?
If so, what information got out? How bad is it? Were systems compromised? What
systems, and what level of access did they manage to get?
Is it a malware infection? What systems were infected?
This is why having good monitoring in place is so important along with understanding your
baseline. Once you figure out what normal traffic looks like on your network and what services
you expect to see, outliers will be easier to detect.
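The baseline idea can be sketched numerically: once you know what normal traffic looks like, anything far outside that range stands out. The thresholds, host names, and traffic figures here are illustrative assumptions, not a real monitoring setup.

```python
# Hypothetical sketch of baseline-based outlier detection: flag hosts
# whose current traffic is far above the baseline mean.
from statistics import mean, stdev

def find_outliers(baseline, current, num_stdevs=3):
    """Flag hosts whose traffic exceeds mean + N std devs of baseline."""
    threshold = mean(baseline) + num_stdevs * stdev(baseline)
    return [host for host, volume in current.items() if volume > threshold]

baseline_mb = [100, 110, 95, 105, 102, 98]         # normal daily traffic (MB)
today = {"web-1": 104, "db-1": 99, "dev-3": 900}   # dev-3 looks abnormal
print(find_outliers(baseline_mb, today))  # ['dev-3']
```

A tighter baseline (lower variance) makes smaller deviations detectable, which is why understanding normal traffic matters so much.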
This is important because every false lead that the incident response team has to investigate
means time and resources wasted. This has the potential to allow real intrusions to go
undetected and uninvestigated longer.
Once the scope of the incident is determined, the next step is containment.
You need to contain the breach to prevent further damage from system compromises and
malware infections.
If an account was compromised, change the password immediately. If the owner is
unable to change the password right away, then lock the account.
If it’s a malware infection, can our antimalware software quarantine or remove the
infection? If not, then the infected machine needs to be removed from the network as
soon as possible to prevent lateral movement around the network. To do this, you can
adjust network-based firewall rules to effectively quarantine the machine.
You could also move the machine to a separate VLAN used for security quarantining
purposes. This would be a VLAN w/ strict restrictions and filtering applied to prevent
further infection of other systems and networks.
It’s important during this phase that efforts are made to avoid the destruction of any
logs or forensic evidence.
Attackers will usually try to cover their tracks by modifying logs and deleting
files, especially when they suspect they’ve been caught.
They’ll take measures to make sure they keep their access to compromised
systems.
This could involve installing a backdoor or some kind of remote access malware.
Another thing to watch out for is the creation of a new user account that the attacker
can use to authenticate w/ in the future. W/ effective logging configurations and
systems in place, this type of access should be detected during an incident investigation.
Another part of incident analysis is determining severity, impact and recovery ability of
the incident.
Severity includes factors like what and how many systems were compromised, and
how the breach affects business functions.
An incident that’s compromised a bunch of machines in the network would be
more severe than one where a single web server was hacked.
So the impact of an incident is also an important issue to consider.
If the org only had one web server and it was compromised, it might be
considered a much higher severity breach.
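As a rough illustration, the two factors above (how many systems were hit, and how critical they are to business functions) can be combined into a score. The scale and weighting here are invented assumptions for demonstration, not an established severity formula.

```python
# Hypothetical severity sketch: more compromised systems and more
# business-critical systems both raise the score.
def severity_score(num_systems_compromised, criticality):
    """criticality: 1 (low impact) to 5 (the org's only server for a function)."""
    return num_systems_compromised * criticality

# Many ordinary workstations vs. the org's single web server:
print(severity_score(num_systems_compromised=20, criticality=1))  # 20
print(severity_score(num_systems_compromised=1, criticality=5))   # 5
```

The point is that neither count nor criticality alone determines severity; a single compromised machine can outweigh many if the business depends on it.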
Data exfiltration - The unauthorized transfer of data from a computer
It’s also a very important concern when a security incident happens; hackers may try
to steal data for a number of reasons, or steal account information to gain access later.
Or attacker may just want to cause damage and destruction which might involve
corrupting data.
Recoverability - How Complicated and time-consuming the recovery effort will be
An incident that can be recovered w/ a simple restoration from backup by following
documented procedures would be considered easily recovered from.
But an incident where an attacker deleted large amounts of customer information and
wreaked havoc across lots of critical infrastructure systems would be way more
difficult to recover from.
It might not be possible to recover from it at all. In some cases, depending on backup
systems and configurations, some data may be lost forever and can’t be restored.
Backups won’t contain any changes or new data that were made after the last backup
run.
Incident response
When you’ve had a data breach, you may need forensic analysis to analyze the attack. This
analysis usually involves extensive evidence gathering. This reading covers some considerations
for protecting the integrity of your forensic evidence and avoiding complications or issues
related to how you handle evidence.
Regulated data
It’s important to consider the type of data involved in an incident. Many types of data are
subject to government regulations that require you to take extra care when handling it. Here are
some examples you’re likely to encounter as an IT support specialist.
2. Credit Card or Payment Card Industry (PCI) Information: This is information related to credit,
debit, or other payment cards. PCI data is governed by the Payment Card Industry Data Security
Standard (PCI DSS), a global information security standard designed to prevent fraud through
increased control of credit card data.
4. Federal Information Security Management Act (FISMA) compliance: FISMA requires federal
agencies and those providing services on their behalf to develop, document, and implement
specific IT security programs and to store data on U.S. soil. For example, organizations like NASA,
the National Institutes of Health, the Department of Veteran Affairs—and any contractors
processing or storing data for them—need to comply with FISMA.
Restrict users from editing, saving, sharing, printing, or taking screenshots of content or
products
Set expiration dates on media to prevent access beyond that date or limit the number of
times users can access the media
Limit access to specific devices, Internet Protocol (IP) addresses, or locations, such as
limiting content to people in a specific country
Organizations can use these DRM capabilities to protect sensitive data. DRM enables
organizations to track who has viewed files, control access, and manage how people use the
files. It also prevents files from being altered, duplicated, saved, or printed. DRM can help
organizations comply with data protection regulations.
End User Licensing Agreement (EULA)
End User Licensing Agreements (EULAs) are similar to DRM in specifying certain rights and
restrictions that apply to the software. You often encounter EULA statements when installing a
software package, accessing a website, sharing a file, or downloading content. A EULA is usually
considered a legally binding agreement between the owner of a product (e.g., a software
publisher) and the product's end-user. The EULA specifies the rights and restrictions that apply
to the software, and it’s usually presented to users during installation or setup of the software.
You can’t complete an installation (or access, share, or download data) until you agree to the
terms written in the EULA statement.
Unlike DRM restrictions, EULAs are only valid if you agree to them (i.e., you check a box or click
the ‘I Agree’ button). DRM restrictions don’t require your agreement, nor do they rely on you to
keep that agreement. DRM is built into the product it protects, making it easier for content
creators to ensure users do not violate restrictions.
Chain of custody
“Chain of custody” refers to a process that tracks evidence movement through its collection,
safeguarding, and analysis lifecycle. Maintaining the chain of custody makes it difficult for
someone to argue that the evidence was tampered with or mishandled. Your chain of custody
documentation should answer the following questions. Documentation for these questions must
be maintained and filed in a secure location for current and future reference.
Who collected the evidence? Evidence can include the afflicted or used devices,
media, and associated peripherals.
How was the evidence stored and protected in storage? The procedures involved in
storing and protecting evidence are called evidence-custodian procedures.
Who took the evidence out of storage and why? Ongoing documentation of the names
of individuals who check out evidence and why must be kept.
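The chain-of-custody questions above map naturally onto a structured record. This is a minimal sketch; the field names and values are assumptions, and real evidence-custodian procedures involve far more detail (timestamps, signatures, seals).

```python
# Hypothetical sketch of a chain-of-custody record capturing who
# collected evidence, how it was stored, and every checkout with a reason.
from dataclasses import dataclass, field

@dataclass
class CustodyRecord:
    evidence_id: str
    collected_by: str
    storage_procedure: str
    checkouts: list = field(default_factory=list)

    def check_out(self, who, why):
        """Document every time the evidence leaves storage, and why."""
        self.checkouts.append({"who": who, "why": why})

record = CustodyRecord(
    evidence_id="laptop-0042",
    collected_by="j.doe",
    storage_procedure="sealed bag, locked evidence room",
)
record.check_out("a.smith", "disk imaging for forensic analysis")
print(record.checkouts)
```

Keeping an entry like this for every piece of evidence makes it much harder to argue that the evidence was tampered with or mishandled.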
When a data breach occurs, forensic analysis usually involves taking an image of the disk.
This makes a virtual copy of the hard drive. The copy lets an investigator analyze the disk’s
contents without modifying or altering the original files. An alteration compromises the
integrity of the evidence. This kind of compromised integrity is what you want to avoid
when performing forensic investigations.
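One common way to demonstrate that a disk image hasn't been altered is to hash both the original and the copy and compare the digests. This is a general sketch of that idea using SHA-256; the file paths are illustrative.

```python
# Hypothetical sketch of verifying a disk image's integrity by
# comparing cryptographic hashes of the original and the copy.
import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    """Stream a file through SHA-256 so large images fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def image_is_intact(original_path, image_path):
    """The copy is trustworthy only if both digests match exactly."""
    return sha256_of(original_path) == sha256_of(image_path)
```

Recording the original's hash at collection time also supports the chain of custody: anyone can later re-hash the image and confirm nothing changed.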
Incident response and recovery
Once the threat has been detected and contained, it has to be removed or remediated.
When it comes to Malware Infection, this means removing the malware from affected
systems.
But in some cases, this may not be possible, so the affected systems have to be
restored to a known good configuration. This can be done by rebuilding the machine
or restoring from backup.
Take care when removing malware from systems, because some malware is designed
to be very persistent, which means it’s resistant to being removed.
But before we can start the recovery, we have to contain the incident. This might
involve shutting down affected systems to prevent further damage or spread of an
infection.
Affected systems may just have network access removed to cut off any
communication with the compromised system.
The motivating factor here would be to prevent the spread of any infection or to
remove remote access to the system.
Forensic analysis may need to be done to analyze the attack. This is true when it comes to a
malware infection.
In the case of forensic analysis, affected machines might be investigated very closely
to determine exactly what the attacker did.
This is usually done by taking an image of the disk, essentially making a virtual copy of
the hard drive.
This lets the investigator analyze the contents of the disk w/out the risk of modifying
or altering the original files, which would compromise the integrity of any forensic
evidence.
Usually evidence gathering is also part of the incident response process. This provides
evidence to law enforcement if the organization wants to pursue legal action against
the attackers.
Forensic evidence is super useful for providing details of the attack to the security
community. It allows other security teams to be aware of new threats and lets them
better defend themselves.
It’s also very important that you get members from your legal team involved in any
incident handling plans. Because an incident can have legal implications for the
company, a lawyer should be available to consult and advise on the legal aspects of
the investigation.
We’ll need to use information from the analysis to prevent any further intrusions or
infections.
First, we determine the entry point to figure out how the attacker got in or what
vulnerability the malware exploited.
If you remove a malware infection w/out also addressing the underlying vulnerability,
systems could become re-infected right after you clean them up. Postmortems can be
a great way to document an incident.
Logs have to be audited to determine exactly what the attacker did while they had
access to the system. They’ll also tell you what data the attacker accessed.
System must be scrutinized to ensure no back doors have been installed or malware
planted on the system.
And vulnerabilities should be closed to prevent any future attacks.
When all traces of the attack have been discovered and removed, and the known
vulnerabilities have been closed, you can move on to the last step.
Systems need to be thoroughly tested to make sure proper functionality has been restored.
It is possible the attacker will attack the same target again, or use a similar attack
methodology on other targets in your network.
Update firewall rules and ACLs if an exposure was discovered in the course of the
investigation.
Create new definitions and rules for intrusion detection systems that can watch for
the signs of the same attack again.
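Watching for signs of the same attack can be sketched as simple signature matching over log lines. The signature strings here are invented indicators for illustration (the IP is from the documentation range); real IDS rules are far richer than substring matches.

```python
# Hypothetical sketch of signature-style detection: scan log lines
# for indicators observed during the original incident.
SIGNATURES = [
    "backdoor.sh",                  # filename seen in the earlier attack
    "curl http://198.51.100.7",     # attacker's download command (example IP)
]

def match_signatures(log_lines, signatures=SIGNATURES):
    """Return log lines containing any known attack indicator."""
    return [line for line in log_lines
            if any(sig in line for sig in signatures)]

logs = [
    "Accepted publickey for alice from 10.0.0.5",
    "cron: exec /tmp/backdoor.sh",
]
print(match_signatures(logs))  # flags the backdoor line only
```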
Stay vigilant and prepared to protect your system from attacks.
Mobile security and privacy