
Cybersecurity_Deep_Dive


● What is Network Security?

○ Network security is the strategic combination of hardware and software
designed to protect sensitive data in a computer network. Network
access controls, intrusion detection, and many other types of network
security functions work together to secure the environment against
unauthorized access, data breaches, malware delivery, and other
cyberattacks.
● How does it work?
○ Network-based security has evolved as more network traffic traverses
the internet rather than staying within a local network infrastructure.
Today’s security stack sits in a security gateway, which monitors traffic moving to
and from the internet. It includes an array of firewalls, intrusion
prevention systems (IPS), sandboxes, URL filters, DNS filters, antivirus
technology, data loss prevention (DLP) systems, and more that work
together to keep external attacks from reaching data and intellectual
property inside a network.
● What type of threats does network security prevent?
○ The variety of network security tools on the market speaks to the
breadth of the threat landscape. There are countless solutions
designed to stop malware (e.g., spyware, ransomware, trojans),
phishing, and other such threats.
○ The key thing to note about legacy network security solutions ties back
to the “castle and moat” approach—they’re largely built to protect
networks against malicious activities from outside, with far less ability
to protect from inside.
● What is OT security?
○ OT security is the measures and controls in place to protect OT
systems—which use purpose-built software to automate industrial
processes—against cybersecurity threats. As the convergence of
information technology and OT drives greater automation and
efficiency in industrial systems, OT security has become a requirement
of critical infrastructure management.
● Difference between IT and OT security:
○ While IT systems are designed for various uses for people, devices,
and workloads, OT systems are instead purpose-built to automate
specific industrial applications, presenting some key differences in how
they are secured.
○ One challenge lies in the technology life cycle. That of an OT system
can span decades, whereas the life cycles of IT systems, such as
laptops and servers, are often between four and six years. In practical
terms, this means OT security measures often need to account for
infrastructure that’s out of date and may not even be possible to patch.
● Why is OT security important?
○ Years ago, OT assets weren’t connected to the internet, so they
weren’t exposed to web-borne threats like malware, ransomware
attacks, and hackers. Then, as digital transformation initiatives and IT-
OT convergence expanded, many organizations added point solutions
to their infrastructure to address specific issues, such as patching. This
approach led to complex networks in which systems didn’t share
information, and therefore couldn’t provide full visibility to those
managing them.
● What is the Purdue Model for ICS Security?
○ The Purdue model is a structural model for industrial control system
(ICS) security that concerns segmentation of physical processes,
sensors, supervisory controls, operations, and logistics. Long regarded
as a key framework for ICS network segmentation to protect
operational technology (OT) from malware and other attacks, the
model persists alongside the rise of edge computing and direct-to-
cloud connectivity.
● Purpose of this model
○ The Purdue model, part of the Purdue Enterprise Reference
Architecture (PERA), was designed as a reference model for data
flows in computer-integrated manufacturing (CIM), where a plant’s
processes are completely automated. It came to define the standard
for building an ICS network architecture in a way that supports OT
security, separating the layers of the network to maintain a hierarchical
flow of data between them.
○ The model shows how the typical elements of an ICS architecture
interconnect, dividing them into six zones that contain information
technology (IT) and OT systems. Implemented correctly, it helps
establish an “air gap” between ICS/OT and IT systems, isolating them
so an organization can enforce effective access controls without
hindering business.
● Is this model still relevant?
○ When the Purdue model was introduced in 1992 by Theodore J.
Williams and the Purdue University Consortium, few other models had
yet outlined a clear information hierarchy for CIM, which began to take
hold in the industry in the mid-to-late 1980s.
○ Today, with the industrial internet of things (IIoT) blurring the line
between IT and OT, experts often wonder whether the Purdue model
still applies to modern ICS networks. Its segmentation framework is
often set aside, after all, as data from Level 0 is sent directly to the
cloud. However, many suggest it’s not yet time to discard the model.
● OT systems occupy the lower levels of the model while IT systems occupy the
upper levels, with a “demilitarized zone” of convergence between them.
● Let’s take a look at each of the zones in the Purdue reference model, top
to bottom.
● Level 0: Physical Process Zone
○ This zone contains sensors, actuators, and other machinery directly
responsible for assembly, lubrication, and other physical processes.
Many modern sensors communicate directly with monitoring software
in the cloud via cellular networks.
● Level 1: Intelligent Devices Zone
○ This zone contains instruments that send commands to the devices at
Level 0:
• Programmable logic controllers (PLCs) monitor automated
or human input in industrial processes and make output
adjustments accordingly.
• Remote terminal units (RTUs) connect hardware in Level 0 to
systems in Level 2.
● Level 2: Control Systems Zone
○ This zone contains systems that supervise, monitor, and control
physical processes:
• Supervisory control and data acquisition (SCADA) software
oversees and controls physical processes, locally or remotely,
and aggregates data to send to historians.
• Distributed control systems (DCS) perform SCADA functions
but are usually deployed locally.
• Human-machine interfaces (HMIs) connect to DCS and PLCs
to allow for basic controls and monitoring.
● Level 3: Manufacturing Operations Systems Zone
○ This zone contains customized OT devices that manage production
workflows on the shop floor:
• Manufacturing operations management (MOM) systems
manage production operations.
• Manufacturing execution systems (MES) collect real-time
data to help optimize production.
• Data historians store process data and (in modern solutions)
perform contextual analysis.
• As with Levels 4 and 5, disruptions here can lead to economic
damage, failure of critical infrastructure, risk to people and plant
safety, or lost revenue.
● Level 3.5: Demilitarized Zone (DMZ)
○ This zone includes security systems such as firewalls and proxies,
used in an effort to prevent lateral threat movement between IT and
OT. The rise of automation has increased the need for bidirectional
data flows between OT and IT systems, so this IT-OT convergence
layer can give organizations a competitive edge—but it can also
increase their cyber risk if they adopt a flat network approach.
● Level 4/5: Enterprise Zone
○ These zones house the typical IT network, where the primary business
functions occur, including the orchestration of manufacturing
operations. Enterprise resource planning (ERP) systems here drive
plant production schedules, material use, shipping, and inventory
levels.
○ Disruptions here can lead to prolonged downtime, with the potential for
economic damage, failure of critical infrastructure, or revenue loss.
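The hierarchical flow the zones above describe can be sketched in code. This is an illustrative model only, not a vendor implementation: it encodes the Purdue levels and a simple adjacency rule that permits traffic only between neighboring zones, so IT-to-OT traffic is forced through the Level 3.5 DMZ rather than reaching a controller directly. Zone names follow the text; the adjacency rule is a simplification of real segmentation policy.

```python
# Illustrative sketch: Purdue zones and a toy segmentation rule that
# only permits flows between adjacent levels in the hierarchy.
PURDUE_ZONES = {
    0: "Physical Process (sensors, actuators)",
    1: "Intelligent Devices (PLCs, RTUs)",
    2: "Control Systems (SCADA, DCS, HMIs)",
    3: "Manufacturing Operations (MOM, MES, historians)",
    3.5: "DMZ (firewalls, proxies)",
    4: "Enterprise (ERP, business IT)",
    5: "Enterprise (corporate network)",
}

def flow_allowed(src_level: float, dst_level: float) -> bool:
    """Permit only hierarchical flows between adjacent zones."""
    if src_level not in PURDUE_ZONES or dst_level not in PURDUE_ZONES:
        return False
    hierarchy = [0, 1, 2, 3, 3.5, 4, 5]
    gap = abs(hierarchy.index(src_level) - hierarchy.index(dst_level))
    return gap <= 1

# A PLC (Level 1) may talk to SCADA (Level 2)...
assert flow_allowed(1, 2)
# ...but an ERP system (Level 4) must not reach a PLC (Level 1) directly.
assert not flow_allowed(4, 1)
```

Under this rule, enterprise systems can only reach OT data through the DMZ, which is the "air gap" effect the model is meant to approximate.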
● What is a next-gen firewall?
○ A next-generation firewall (NGFW) is the convergence of traditional
firewall technology with other network device filtering functions, such
as inline application control, an integrated intrusion prevention system
(IPS), threat prevention capabilities, and antivirus protection, to
improve enterprise network security.
● Next-Generation Firewall vs. Traditional Firewall
○ Traditional firewalls only operate on Layers 3 and 4 of the Open
Systems Interconnection (OSI) model to inform their actions, managing
network traffic between hosts and end systems to ensure complete
data transfers. They allow or block traffic based on port and protocol,
leverage stateful inspection, and make decisions based on defined
security policies.
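A minimal sketch of that Layer 3/4 decision process may help, assuming an illustrative first-match-wins rule table (the ports and rules here are hypothetical, not a real product's defaults). Note the limitation the next paragraph picks up: the decision sees only port and protocol, never the application inside the packets.

```python
# Minimal sketch of a traditional (Layer 3/4) firewall decision:
# rules match only on protocol and port, with no visibility into
# the application carried inside the packets.
from dataclasses import dataclass

@dataclass
class Rule:
    action: str          # "allow" or "block"
    protocol: str        # "tcp" or "udp"
    dst_port: int

RULES = [
    Rule("allow", "tcp", 443),   # HTTPS
    Rule("allow", "tcp", 80),    # HTTP
    Rule("block", "udp", 53),    # block DNS to untrusted resolvers
]

def decide(protocol: str, dst_port: int) -> str:
    """First matching rule wins; default deny."""
    for rule in RULES:
        if rule.protocol == protocol and rule.dst_port == dst_port:
            return rule.action
    return "block"

# Anything tunneled over TCP/443 is allowed, whether it's legitimate
# HTTPS or malware command-and-control traffic.
assert decide("tcp", 443) == "allow"
assert decide("tcp", 3389) == "block"
```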
○ As advanced threats such as ransomware emerged, these stateful
firewalls were routinely bypassed, and demand grew for an enhanced,
more intelligent security solution.
○ Enter the NGFW, introduced by Gartner more than a decade ago as a
“deep-packet inspection firewall that moves beyond port/protocol
inspection and blocking to add application-layer inspection, intrusion
prevention, and bringing intelligence from outside the firewall.” It touted
all the features one would expect from a traditional firewall, but with
more granular capabilities that allow for even tighter policies for
identity, user, location, and application.
● NGFW Features
○ Next-generation firewalls are still in use today, and they offer a host of
benefits that place them above their predecessors for on-premises
network and application security.
• Application control: NGFWs actively monitor which
applications (and users) are bringing traffic to the network.
They have an innate ability to analyze network traffic to detect
application traffic, regardless of port or protocol, increasing
overall visibility.
• IPS: At its core, an IPS is designed to continuously monitor a
network, look for malicious events, then take careful action to
prevent them. The IPS can send an alarm to an administrator,
drop the packets, block the traffic, or reset the connection
altogether.
• Threat intelligence: This can be described as the data or
information collected by a variety of nodes across a network or
IT ecosystem that helps teams understand the threats that are
targeting—or have already targeted—an organization. This is
an essential cybersecurity resource.
• Antivirus: As the name suggests, antivirus software detects
viruses, responds to them, and updates detection functionality
to oppose the ever-changing threat landscape.
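The application-control feature above can be sketched as follows. This is a toy illustration of the idea, not how any real NGFW engine works: the application is identified from payload signatures rather than the port, so policy follows the application even when it hides on an unexpected port. The signatures and the blocked-app policy are invented for the example.

```python
# Hedged sketch of NGFW application control: identify the application
# from payload signatures regardless of the destination port.
APP_SIGNATURES = {
    b"SSH-2.0": "ssh",
    b"GET ": "http",
    b"\x16\x03": "tls",   # TLS record header
}

def identify_app(payload: bytes) -> str:
    for prefix, app in APP_SIGNATURES.items():
        if payload.startswith(prefix):
            return app
    return "unknown"

BLOCKED_APPS = {"ssh"}  # illustrative policy: no SSH, on any port

def ngfw_decide(dst_port: int, payload: bytes) -> str:
    # Application identity, not the port number, drives the decision.
    app = identify_app(payload)
    return "block" if app in BLOCKED_APPS else "allow"

# SSH tunneled over port 443 is still caught by application control,
# whereas a port-based firewall would have allowed it.
assert ngfw_decide(443, b"SSH-2.0-OpenSSH_9.0") == "block"
assert ngfw_decide(443, b"\x16\x03\x01") == "allow"
```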
• Let’s understand why locations are important. It comes down to an old,
even ancient, mindset of anchoring our controls within the four walls
of our castle.

• In the castle-and-moat model, we put everything in the middle and
surrounded it with a protective line of water so nobody could get into
the castle. That allowed us to centralize our controls.

• Inside that castle, we held our core assets: our money, our gold, our
wheat, our food stores, our people, the things that were important to
the longevity of that castle, that kingdom, and ultimately your
business.

• Additional defenses were built: towers, turrets, and firing positions
for bows and arrows, whatever it took to protect our ecosystem.
• Now think about this in terms of your business. You've done the same
thing: you've built a centralized location for your key assets, put
controls in place, and created environments to protect your people,
your workloads, and your livelihood, including your financial
information.

• If you wanted to extend that and enable a third party or a new
employee to connect, you needed to lower a drawbridge and allow someone
to come in. You had to provide controls to ensure they could securely
access the resources they required without putting your castle and
business at risk.

• When a third party wanted to communicate or interact with your castle,
you had to put controls in place within its walls. As your company
expanded, you had to interconnect your branch offices, your other
castles, with a transport environment.

• For the Romans, that transport environment was a road: they built
roads between their empires and castles to protect communication and
trade. In much the same way, you extend to your branches with a secure
network to enable connectivity between your ecosystems.
• The more you expand, the bigger the challenges to protecting those services.
• Shadow IT emerges, with people using personal devices (BYOD).
• When you have this expanse of various network and security controls, a weak
link will be exploited.
• We have a diverse way of working, a diverse set of services, and a diverse set
of ecosystems we need to work in. It's no longer centralized.

That's why our enterprises need to start looking into new ways of working so
they're not creating multiple weak links or replicating the historical mistakes of
legacy infrastructure. We must move away from our historical mindset of a trusted
ecosystem and castle where we are protected.
● What is Security Service Edge?
○ Security service edge (SSE), as defined by Gartner, is a convergence
of network security services delivered from a purpose-built cloud
platform. SSE can be considered a subset of the secure access
service edge (SASE) framework with its architecture squarely focused
on security services. SSE consists of three core services: a secure
web gateway (SWG), a cloud access security broker (CASB), and a
zero trust network access (ZTNA) framework.
● In the SASE framework, network and security services should be consumed
through a unified, cloud-delivered approach. The networking and security
aspects of SASE solutions focus on improving the user-to-cloud-app
experience while reducing costs and complexity.
● You can look at a SASE platform in two slices. The SSE slice focuses on
unifying all security services, including SWG, CASB, and ZTNA. The other,
the WAN edge slice, focuses on doing so for networking services, including
software-defined wide area networking (SD-WAN), WAN optimization, quality
of service (QoS), and other means of improving routing to cloud apps.
● Delivered from a unified cloud-centric platform, SSE enables organizations to
break free from the challenges of traditional network security. SSE provides
four primary advantages:
● 1. Better Risk Reduction
○ SSE enables cybersecurity to be delivered without being tied to a
network. Security is delivered from a cloud platform that can follow the
user-to-app connection regardless of location. Delivering all security
services in a unified way reduces risk because it eliminates the gaps
often seen between point products.
○ SSE also improves visibility across users—wherever they are—and
data, regardless of the channels accessed. Additionally, SSE
automatically enforces security updates across the cloud without the
typical lag time of manual IT administration.
● 2. Zero Trust Access
○ SSE platforms (along with SASE) should enable least-privileged
access from users to cloud or private apps with a strong zero trust
policy based on four factors: user, device, application, and content. No
user should be inherently trusted, and access should be granted based
on identity and policy.
○ Securely connecting users and apps using business policies over the
internet ensures a more secure remote experience because users are
never placed on the network. Meanwhile, threats cannot move
laterally, and applications remain protected behind the SSE platform.
Apps are not exposed to the internet and thus can't be discovered,
which reduces the attack surface, increasing your security and further
minimizing business risk.
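The four-factor policy described above can be sketched as a single decision function. All names, labels, and thresholds here are hypothetical, invented for illustration; real SSE platforms evaluate far richer signals. The point of the sketch is the ordering: identity first, then content risk, then device posture relative to the application being requested, with nothing trusted by default.

```python
# Illustrative zero trust access decision over the four factors the
# text lists: user, device, application, and content.
def zero_trust_decision(user_verified: bool,
                        device_posture: str,   # "managed" or "unmanaged"
                        app: str,
                        content_risk: int) -> str:
    """Return 'allow', 'isolate', or 'deny'; nothing is trusted by default."""
    SENSITIVE_APPS = {"finance", "hr"}  # hypothetical app names
    if not user_verified:
        return "deny"        # if identity fails, nothing else matters
    if content_risk > 7:
        return "deny"        # high-risk content is dropped outright
    if app in SENSITIVE_APPS and device_posture != "managed":
        return "deny"        # sensitive apps require a managed device
    if device_posture != "managed":
        return "isolate"     # unmanaged devices get isolated access
    return "allow"

assert zero_trust_decision(True, "managed", "crm", content_risk=2) == "allow"
assert zero_trust_decision(True, "unmanaged", "crm", content_risk=2) == "isolate"
assert zero_trust_decision(False, "managed", "crm", content_risk=0) == "deny"
```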
● 3. User Experience
○ By Gartner's definition, SSE must be fully distributed across a global
footprint of data centers. The best SSE architectures are purpose-built
for inspection in every data center, as opposed to vendors hosting their
SSE platforms in IaaS infrastructures.
○ Distributed architecture improves performance and reduces latency
because content inspection—including TLS/SSL decryption and
inspection—occurs where the end user connects to the SSE cloud.
Combined with peering across the SSE platform, this gives your
mobile users the best experience. They no longer need to use slow
VPNs, and access to apps in public and private clouds is fast
and seamless.
● 4. Consolidation Advantages
○ With all key security services unified, you'll see lower costs and less
complexity. SSE can deliver many key security services—SWG,
CASB, ZTNA, cloud firewall (FWaaS), cloud sandbox, cloud data loss
prevention (DLP), cloud security posture management (CSPM), and
cloud browser isolation (CBI)—all in one platform. Plus, if you don't
need everything right away, you can easily add any of these services
as your organization grows.
○ With all protection unified under one policy, all channels your users
and data traverse get the same consistent protection.
● 1. Secure Access to Cloud Services and Web Usage
○ Enforcing policy control over user access to the internet, web, and
cloud applications (historically performed by a SWG) is one of the
primary use cases for the security service edge. SSE policy control
helps mitigate risk as end users access content on- and off-network.
Enforcing corporate internet and access control policies for compliance
is also a key driver for this use case across IaaS, PaaS, and SaaS.
○ Another key capability is cloud security posture management (CSPM),
which protects your organization from risky misconfigurations that can
lead to breaches.
● 2. Detect and Mitigate Threats
○ Detecting threats and preventing successful attacks across the
internet, web, and cloud services are key drivers for adopting SSE and,
to a lesser extent, SASE. With end users accessing content across any
connection or device, organizations need a strong defense-in-depth
approach to malware, phishing, and other threats.
○ Your SSE platform must have advanced threat prevention capabilities,
including cloud firewall (FWaaS), cloud sandbox, malware detection,
and cloud browser isolation. CASBs enable inspection of data within
SaaS apps and can identify and quarantine existing malware before it
inflicts damage. Adaptive access control, whereby an end user's
device posture is determined and access is adjusted accordingly, is
also a key component.
● 3. Connect and Secure Remote Workers
○ The modern remote workforce needs remote access to cloud services
and private applications without the inherent risks of VPN. Enabling
access to applications, data, and content without enabling access to
the network is a critical piece of zero trust access because it eliminates
the security ramifications of placing the user on a flat network.
○ Providing secure access to private and cloud apps without needing to
open firewall ACLs or expose apps to the internet is key here. SSE
platforms should enable native inside-out app connectivity, keeping
apps "dark" to the internet. A ZTNA approach should also offer
scalability across a global network of access points, giving all your
users the fastest experience regardless of connectivity demands.
● 4. Identify and Protect Sensitive Data
○ SSE enables you to find and control sensitive data no matter where it
resides. By unifying key data protection technologies, an SSE platform
provides better visibility and greater simplicity across all data channels.
Cloud DLP enables sensitive data (e.g., personally identifiable
information [PII]) to be easily found, classified, and secured to support
Payment Card Industry (PCI) standards and other compliance policies.
SSE also simplifies data protection, as you can create DLP policies
just once and apply them across inline traffic and data at rest in cloud
apps via CASBs.
○ The most effective SSE platforms also deliver high-
performance TLS/SSL inspection to address encrypted traffic (that is,
most data in transit). Also key for this use case is shadow IT discovery,
which allows organizations to block risky or unsanctioned applications
across all endpoints.
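The "create a DLP policy once, apply it everywhere" idea can be sketched with a toy classifier. Real DLP engines use much richer detection (exact data matching, fingerprinting, OCR); the regex patterns and labels below are illustrative only.

```python
# Toy sketch of inline DLP classification: regex detectors for a couple
# of common PII/PCI patterns, feeding one block/allow policy.
import re

DETECTORS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),  # 16-digit PAN
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text: str) -> set:
    """Return the set of sensitive-data labels found in the text."""
    return {label for label, rx in DETECTORS.items() if rx.search(text)}

def dlp_action(text: str) -> str:
    # One policy, applied the same way to inline traffic and data at rest.
    return "block" if classify(text) else "allow"

assert "us_ssn" in classify("SSN: 123-45-6789")
assert dlp_action("quarterly report, nothing sensitive") == "allow"
```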
• This Training will cover the three main components of a successful zero-trust
architecture.
• (Animation) Verify Identity and Context
• (Animation) Control Risk
• (Animation) And Enforce Policy

● So let's get into the details of connectivity and why this is important.
• (Animation) First, we have initiators. These are the requesters of
the connection, the entities that need access. That's the crux of the
beginning.
• (Animation) Then we have the destinations, the applications we're trying to
connect to. This is the service that needs to be consumed.

● Historically, these initiators and destinations had to share the same network in
order to get access. If you wanted to provide a service to initiators in some
sort of ecosystem, there needed to be a way to get access to that. That
network had to be shared.

● And that's the crux of what we're talking about here: that no longer
has to happen. In a zero trust world, those initiators and destinations
can be anywhere, and there's no reliance on the network as a control
point anymore. A true zero trust architecture instead provides a
secure, functional path by verifying the initiator, controlling the
risk, and enforcing policy on the access as required. This connectivity
needs to work anywhere, regardless of location.
● Zero trust security is a big buzzword these days. While many organizations
have shifted their priorities to adopt zero trust, zero trust network access
(ZTNA) is the strategy behind achieving an effective zero trust model.
● The path to zero trust as an ideology is vague, so ZTNA provides a clear,
defined framework for organizations to follow. It's also a component of
the secure access service edge (SASE) security model, which, in addition to
ZTNA, comprises next-gen firewall (NGFW), SD-WAN, and other services in a
cloud native platform.
● While the need to secure a remote workforce has become critical, network-
centric solutions such as virtual private networks (VPNs) and firewalls create
an attack surface that can be exploited. ZTNA takes a fundamentally different
approach to providing secure remote access to internal applications based on
four core principles:
1. ZTNA completely isolates the act of providing application access from
network access. This isolation reduces risks to the network, such as
infection by compromised devices, and only grants access to specific
applications for authorized users who have been authenticated.
2. ZTNA makes outbound-only connections, ensuring both network and
application infrastructure are made invisible to unauthorized users. IPs
are never exposed to the internet, creating a “darknet” that makes the
network impossible to find.
3. ZTNA’s native app segmentation ensures that once users are
authorized, application access is granted on a one-to-one basis.
Authorized users have access only to specific applications rather than
full access to the network. Segmentation prevents overly permissive
access as well as the risk of lateral movement of malware and other
threats.
4. ZTNA takes a user-to-application approach rather than a traditional
network security approach. The network becomes deemphasized, and
the internet becomes the new corporate network, leveraging end-to-
end encrypted TLS micro-tunnels instead of MPLS.
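The four principles above can be sketched together as a brokered, outbound-only connection model. Everything here is hypothetical (the broker class, connector names, entitlement table); the sketch only illustrates the shape of the idea: the app connector dials out to a broker, the app exposes no inbound listener, and authenticated users are stitched to a single application rather than placed on the network.

```python
# Conceptual sketch of ZTNA's outbound-only, one-to-one brokering.
class Broker:
    def __init__(self):
        self.connectors = {}   # app name -> outbound-registered connector
        self.entitlements = {"alice": {"payroll"}}  # user -> allowed apps

    def register_connector(self, app: str, connector: str) -> None:
        # The connector dials OUT to the broker; the app exposes no
        # inbound listener, so it stays "dark" to the internet.
        self.connectors[app] = connector

    def request_access(self, user: str, app: str) -> str:
        if app not in self.entitlements.get(user, set()):
            return "denied"        # no entitlement, no visibility
        if app not in self.connectors:
            return "unreachable"
        # Access is granted per application, never to the network.
        return f"tunnel:{user}->{self.connectors[app]}"

broker = Broker()
broker.register_connector("payroll", "connector-eu-1")
assert broker.request_access("alice", "payroll") == "tunnel:alice->connector-eu-1"
assert broker.request_access("bob", "payroll") == "denied"
```

Because access is brokered app-by-app, an authorized user of one application learns nothing about any other application or the network behind it, which is what prevents lateral movement.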
