DevSecOps Guides
AppSec
Application security (AppSec) threats refer to the security risks and vulnerabilities that can be
present in the software applications used by organizations. These threats can arise from various
sources, such as software bugs, coding errors, design flaws, and inadequate security controls.
AppSec threats can lead to data breaches, information theft, financial losses, reputational
damage, and legal liabilities for organizations.
To address AppSec threats, various standards and frameworks have been developed. Here are
some of the most important ones:
1 OWASP Top Ten: The Open Web Application Security Project (OWASP) Top Ten is a list of the
most critical security risks to web applications. It is widely used by organizations as a guideline
for identifying and addressing AppSec threats.
2 PCI DSS: The Payment Card Industry Data Security Standard (PCI DSS) is a set of security
standards designed to protect credit card data. It requires merchants and service providers to
implement various security controls to prevent unauthorized access to cardholder data.
3 ISO 27001: The International Organization for Standardization (ISO) 27001 is a standard for
information security management systems. It provides a framework for implementing controls
and processes to protect sensitive information, including software applications.
4 NIST Cybersecurity Framework: The National Institute of Standards and Technology (NIST)
Cybersecurity Framework is a set of guidelines for managing and reducing cybersecurity risks.
It provides a framework for organizations to identify, protect, detect, respond to, and recover
from security incidents.
5 BSIMM: The Building Security In Maturity Model (BSIMM) is a software security framework that
provides a measurement of an organization’s software security program maturity. It identifies
best practices and benchmarks for implementing a successful software security program.
6 CSA: The Cloud Security Alliance (CSA) provides guidance for secure cloud computing. Its
Cloud Controls Matrix provides a framework for organizations to assess the security of cloud
service providers.
7 CWE/SANS Top 25: A list of the top 25 most dangerous software errors, as identified by the
Common Weakness Enumeration (CWE) and the SANS Institute.
Cheatsheet with rules/policies for preventing OWASP Top 10 vulnerabilities

| Type | Vulnerability | Rule/Policy |
|---|---|---|
| A1: Injection | SQL Injection | Use prepared statements and parameterized queries. Sanitize input and validate parameters. |
| A1: Injection | NoSQL Injection | Use parameterized queries with built-in protections. Sanitize input and validate parameters. |
| A1: Injection | LDAP Injection | Use parameterized queries and escape special characters. |
| A1: Injection | Command Injection | Use safe APIs or libraries that do not allow arbitrary command execution. Sanitize input and validate parameters. |
| A2: Broken Authentication and Session Management | Weak Passwords | Enforce strong password policies, including complexity requirements and regular password changes. Use multi-factor authentication. |
| A2: Broken Authentication and Session Management | Session Fixation | Regenerate session ID upon login and logout. Use secure cookies with HttpOnly and Secure flags. |
| A3: Cross-Site Scripting (XSS) | Reflected XSS | Sanitize all user input, especially from untrusted sources such as URLs, forms, and cookies. Use output encoding to prevent XSS attacks. |
| A3: Cross-Site Scripting (XSS) | Stored XSS | Filter user-generated content to prevent malicious scripts from being stored. Use output encoding to prevent XSS attacks. |
| A4: Broken Access Control | Insecure Direct Object Reference (IDOR) | Implement proper access controls and authorization checks to prevent direct object reference attacks. |
| A5: Security Misconfiguration | Improper Error Handling | Do not reveal sensitive information in error messages or logs. Use custom error pages. |
| A6: Insecure Cryptographic Storage | Weak Cryptography | Use strong, up-to-date encryption algorithms and keys. Implement proper key management and storage practices. |
| A7: Insufficient Transport Layer Protection | Unencrypted Communications | Use HTTPS with secure protocols and strong encryption. Disable insecure protocols such as SSLv2 and SSLv3. |
| A8: Insecure Deserialization | Insecure Deserialization | Validate and verify the integrity of serialized data. |
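As a minimal, runnable illustration of the first rule above (prepared/parameterized statements), the following Python sketch uses an in-memory SQLite table; the table and data are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '...')")

username = "' OR '1'='1"  # a classic injection payload

# Unsafe: concatenating user input into the SQL string returns every row.
unsafe = "SELECT * FROM users WHERE username = '" + username + "'"
print(conn.execute(unsafe).fetchall())  # [('alice', '...')] -- injected!

# Safe: a parameterized query treats the payload as a literal value.
safe = "SELECT * FROM users WHERE username = ?"
print(conn.execute(safe, (username,)).fetchall())  # []
```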
DREAD:
• Damage potential: How much damage could be caused if the vulnerability is exploited?
• Reproducibility: How easy is it to reproduce the vulnerability?
• Exploitability: How easy is it to actually exploit the vulnerability?
• Affected users: How many users or systems are affected by the vulnerability?
• Discoverability: How easy is it for an attacker to discover the vulnerability?
By evaluating each of these factors, organizations can assign a score to a particular vulnerability
and use that score to determine which vulnerabilities pose the greatest risk and should be
addressed first.
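As a sketch of how such scoring might be automated, the helper below rates each factor on a 1-10 scale and averages them; the scale and the averaging rule are illustrative conventions, not part of the DREAD definition:

```python
# Hypothetical DREAD scoring helper: each factor is rated 1-10 and the
# overall risk is the average of the five ratings.
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    factors = [damage, reproducibility, exploitability,
               affected_users, discoverability]
    assert all(1 <= f <= 10 for f in factors), "ratings must be 1-10"
    return sum(factors) / len(factors)

# Example: an easily reproduced, easily discovered vulnerability
# with moderate damage potential.
score = dread_score(damage=6, reproducibility=9, exploitability=7,
                    affected_users=5, discoverability=9)
print(f"DREAD score: {score:.1f}")  # higher scores are remediated first
```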
SDL (Security Development Lifecycle)
Training:
• Core security training
Requirements:
• Establish security requirements
• Create quality gates/bug bars
• Perform security and privacy risk assessments
Design:
• Establish design requirements
• Perform attack surface analysis reduction
• Use threat modeling
Implementation:
• Use approved tools
• Deprecate unsafe functions
• Perform static analysis
Verification:
• Perform dynamic analysis
• Perform fuzz testing
• Conduct attack surface review
Release:
• Create an incident response plan
• Conduct final security review
• Certify, release, and archive
Response:
• Execute incident response plan
OWASP SAMM
OWASP SAMM categorizes security practices into four key business functions:
Governance:
• Strategy and metrics
• Policy and compliance
• Education and guidance
Construction:
• Threat assessment
• Security requirements
• Secure architecture
Verification:
• Design review
• Implementation review
• Security testing
Operations:
• Issue management
• Environment Hardening
• Operational enablement
Driver
DevSecOps is a methodology that seeks to integrate security into the software development
lifecycle, rather than treating it as a separate process that is bolted on at the end. The goal is to
build secure, reliable software that meets the needs of the business, while also protecting
sensitive data and critical infrastructure. There are several drivers and challenges associated with
implementing DevSecOps, which are outlined below.
Drivers:
1 Security concerns: With the increasing frequency and severity of cyberattacks, security has
become a top priority for organizations. DevSecOps provides a way to build security into the
software development process, rather than relying on ad hoc security measures.
2 Compliance requirements: Many organizations are subject to regulatory requirements such as
PCI-DSS, HIPAA, and GDPR. DevSecOps can help ensure compliance with these regulations
by integrating security into the development process and providing visibility into the security
posture of the application.
3 Agility and speed: DevSecOps can help organizations develop and deploy software more
quickly and with greater agility. By integrating security into the development process,
organizations can reduce the time and cost of remediation and avoid delays caused by
security issues.
4 Collaboration: DevSecOps encourages collaboration between developers, security teams, and
operations teams. By working together, these teams can build more secure and reliable
software.
Challenges:
1 Cultural barriers: DevSecOps requires a cultural shift in the organization, with developers,
security teams, and operations teams working together in a collaborative manner. This can be
challenging, particularly in organizations with a siloed culture.
2 Lack of skills: DevSecOps requires a range of skills, including development, security, and
operations. Finding individuals with these skills can be difficult, particularly in a competitive job
market.
3 Tooling and automation: DevSecOps relies heavily on tooling and automation to integrate
security into the development process. Implementing and maintaining these tools can be
challenging, particularly for smaller organizations with limited resources.
4 Complexity: DevSecOps can be complex, particularly for organizations with large, complex
applications. It can be difficult to integrate security into the development process without
causing delays or creating additional complexity.
Application Security Verification Standard (ASVS):
Authentication, Session Management, Access Control, Malicious Input Handling, Output
Encoding/Escaping, Cryptography, Error Handling and Logging, Data Protection, Communication
Security, HTTP Security Configuration, Security Configuration, Malicious Code, Internal Security,
Business Logic, Files and Resources, Mobile, Web Services
Design review
• Security compliance checklist
• Security requirement checklist (OWASP ASVS)
• Top 10 security design issues
• Security issues in the previous release
• Customer or marketing feedback on security issues
Implementation review
• Secure coding
• Selection of reliable and secure third-party components
• Secure configuration
Third-party components
• A third-party software evaluation checklist
• Recommended third-party software and usage by projects
• CVE status of third-party components
Code Review
• Static Application Security Testing (SAST): FindSecBugs, Fortify, Coverity, Klocwork
• Dynamic Application Security Testing (DAST): OWASP ZAP, Burp Suite
• Interactive Application Security Testing (IAST): Checkmarx, Veracode
• Runtime Application Self-Protection (RASP): OpenRASP
• OWASP Code Review Project: https://www.owasp.org/index.php/Category:OWASP_Code_Review_Project
• SEI CERT Coding Standards: https://wiki.sei.cmu.edu/confluence/display/seccode/SEI+CERT+Coding+Standards
• Software Assurance Marketplace (SWAMP): https://www.mir-swamp.org/
Environment Hardening
• Secure configuration baseline
• Constant monitoring mechanism
Constant monitoring mechanism
1 Common vulnerabilities and exposures (CVEs): OpenVAS, Nmap
2 Integrity monitoring: OSSEC
3 Secure configuration compliance: OpenSCAP
4 Sensitive information exposure: no specific open source tool in this area; however, we may define specific regular expression patterns (see the sketch below)
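A minimal sketch of that regular-expression approach is shown below; the patterns are illustrative examples, not a complete secret-detection ruleset:

```python
import re

# Illustrative patterns for common kinds of sensitive information;
# a real deployment would use a curated, tested ruleset.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return a list of (pattern_name, match) pairs found in the text."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

print(scan_text("contact admin@example.com from 10.0.0.5"))
```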
Methodology
DevSecOps methodology is an approach to software development that integrates security
practices into the software development process from the beginning. The goal of DevSecOps is to
make security an integral part of the software development process, rather than an afterthought.
Some common methodologies used in DevSecOps include:
1 Agile: Agile methodology focuses on iterative development and continuous delivery, with an
emphasis on collaboration and communication between developers and other stakeholders. In
DevSecOps, Agile is often used to facilitate a continuous feedback loop between developers
and security teams, allowing security issues to be identified and addressed early in the
development process.
2 Waterfall: Waterfall methodology is a traditional software development approach that involves
a linear progression of steps, with each step building on the previous one. In DevSecOps,
Waterfall can be used to ensure that security requirements are defined and addressed early in
the development process, before moving on to later stages of development.
3 DevOps: DevOps methodology focuses on collaboration and automation between developers
and IT operations teams. In DevSecOps, DevOps can be used to automate security testing and
other security-related tasks, allowing security issues to be identified and addressed more
quickly and efficiently.
4 Shift-Left: Shift-Left methodology involves moving security testing and other security-related
tasks earlier in the development process, to catch and address security issues earlier. In
DevSecOps, Shift-Left can be used to ensure that security is integrated into the development
process from the very beginning.
5 Threat Modeling: Threat modeling is a methodology that involves identifying and analyzing
potential threats to a software application, and then designing security controls to mitigate
those threats. In DevSecOps, threat modeling can be used to identify and address potential
security issues early in the development process, before they become more difficult and
expensive to address.
These are just a few examples of the methodologies that can be used in DevSecOps. The key is to
integrate security practices into the development process from the beginning, and to use a
continuous feedback loop to identify and address security issues as early as possible.
DoD
DoD Methodology in DevSecOps refers to the specific methodology and framework that the US
Department of Defense (DoD) follows to implement DevSecOps practices in its software
development lifecycle. The DoD has created its own set of guidelines and best practices for
DevSecOps that align with its specific security requirements and regulations.
The DoD Methodology for DevSecOps is based on the following principles:
1 Continuous Integration/Continuous Delivery (CI/CD) pipeline: The CI/CD pipeline is an
automated process for building, testing, and deploying software changes. The DoD
Methodology emphasizes the importance of automating the pipeline to speed up the delivery
process and ensure that all changes are tested thoroughly before they are deployed.
2 Security testing: The DoD Methodology requires that security testing is integrated throughout
the entire software development lifecycle. This includes static code analysis, dynamic
application security testing (DAST), and penetration testing.
3 Infrastructure as Code (IaC): The DoD Methodology promotes the use of IaC to automate the
deployment and management of infrastructure. This approach ensures that infrastructure is
consistent and repeatable, which helps to reduce the risk of misconfigurations and security
vulnerabilities.
4 Risk management: The DoD Methodology requires that risk management is an integral part of
the DevSecOps process. This involves identifying potential risks and vulnerabilities, prioritizing
them based on their severity, and taking appropriate measures to mitigate them.
5 Collaboration: The DoD Methodology emphasizes the importance of collaboration between
development, security, and operations teams. This includes regular communication, joint
planning, and cross-functional training to ensure that all team members have a common
understanding of the DevSecOps process.
Overall, the DoD Methodology for DevSecOps is designed to help the Department of Defense
build secure, reliable, and resilient software systems that meet its unique security requirements
and regulations.
Microsoft
Microsoft has its own approach to DevSecOps, which is known as the Microsoft Secure
Development Lifecycle (SDL). The SDL is a comprehensive methodology that integrates security
practices and tools throughout the entire software development process, from planning and
design to testing and release.
The key principles of the Microsoft SDL are:
1 Security by design: Security is considered from the beginning of the development process,
and is integrated into the design of the application.
2 Continuous improvement: The SDL is an iterative process, with continuous improvement of
security practices and tools based on feedback and lessons learned.
3 Risk management: Risks are identified and evaluated at each stage of the development
process, and appropriate measures are taken to mitigate them.
4 Collaboration: Security is a shared responsibility, and collaboration between development,
operations, and security teams is essential.
5 Automation: Automated tools and processes are used to ensure consistency and efficiency in
security practices.
The Microsoft SDL includes specific practices and tools for each stage of the development
process, such as threat modeling, code analysis, security testing, and incident response.
Microsoft also provides guidance and training for developers, operations teams, and security
professionals on how to implement the SDL in their organizations.
Security guidelines and processes
1- Security training: security awareness, security certification program, case study knowledge base, top common issues, penetration learning environment. References: OWASP Top 10, CWE Top 25, OWASP VWAD.
2- Security maturity assessment: Microsoft SDL and OWASP SAMM self-assessment for maturity level. References: Microsoft SDL, OWASP SAMM.
3- Secure design: threat modeling templates (risks/mitigation knowledge base), security requirements for release gate, security design case studies, privacy protection. References: OWASP ASVS, NIST, privacy risk assessment.
4- Secure coding: coding guidelines (C++, Java, Python, PHP, Shell, Mobile), secure coding scanning tools, common secure coding case studies. References: CWE, CERT secure coding, OWASP.
5- Security testing: secure compiling options such as Stack Canary, NX, Fortify Source, PIE, and RELRO; security testing plans; security testing cases; known CVE testing; known secure coding issues; API-level security testing tools; automation testing tools; fuzz testing; mobile testing; exploitation and penetration; security compliance. References: Kali Linux tools, CIS.
6- Secure deployment: configuration checklist, hardening guide, communication ports/protocols, code signing. References: CIS Benchmarks, CVE.
7- Incident and vulnerability handling: root cause analysis templates, incident handling process and organization. References: NIST SP 800-61.
8- Security training (ongoing): security awareness by email, case study newsletter, toolkit usage hands-on training, security certificates and exams. References: NIST SP 800-50, NIST SP 800-16, SAFECode security engineering training.
Stage 1 – basic security control
• Leverage third-party cloud service provider security mechanisms (for example, AWS provides
IAM, KMS, security groups, WAF, Inspector, CloudWatch, and Config)
• Secure configuration relies on external tools such as AWS Config and Inspector
• Service or operation monitoring may apply to AWS Config, Inspector, CloudWatch, WAF, and
AWS shield
Stage 2 – building a security testing team
Vulnerability assessment: Nmap, OpenVAS
Static security analysis: FindBugs for Java, Brakeman for Ruby on Rails, Infer for Java, C++,
Objective-C, and C
Web security: OWASP Dependency-Check, OWASP ZAP, Arachni scanner, Burp Suite, SQLMap, w3af
Communication: Nmap, Ncat, Wireshark, SSLScan, sslyze
Infrastructure security: OpenSCAP, InSpec
VM toolset: PentestBox for Windows, Kali Linux, Mobile Security Testing Framework
Security monitoring: ELK, MISP (open source threat intelligence platform), OSSEC (open source
HIDS), Facebook osquery (performant endpoint visibility), AlienVault OSSIM (open source SIEM)
Stage 3 – SDL activities
• Security shifts to the left and involves every stakeholder
• Architect and design review is required to do threat modeling
• Developers get secure design and secure coding training
• Operations and development teams work in a closed-loop collaboration
• Adoption of industry best practices such as OWASP SAMM and Microsoft SDL for security
maturity assessment
Stage 4 – self-build security services
Take Salesforce as an example—the Salesforce Developer Center portal provides security training
modules, coding, implementation guidelines, tools such as assessment tools, code scanning,
testing or CAPTCHA modules, and also a developer forum. Whether you are building an
application on top of salesforce or not, the Salesforce Developer Center is still a good reference
not only for security knowledge but also for some open source tools you may consider applying.
Stage 5 – big data security analysis and automation
Key characteristics at this stage are:
• Fully or mostly automated security testing through the whole development cycle
• Applying big data analysis and machine learning to identify abnormal behavior or unknown
threats
• Proactive security action is taken automatically for security events, for example, the
deployment of WAF rules or the deployment of a virtual patch
Typical open source technical components in big data analysis frameworks include the following:
• Flume, Logstash, and Rsyslog for log collection
• Kafka, Storm, or Spark for log analysis
• Redis, MySQL, HBase, and HDFS for data storage
• Kibana, ElasticSearch, and Graylog for data indexing, searching, and presentation
The key stages in big data security analysis are explained in the table:
Data collection:
Collects logs from various kinds of sources and systems such as firewalls, web services, Linux,
networking gateways, endpoints, and so on.
Data normalization:
Sanitizes or transforms data formats into JSON, especially for critical information such as IP,
hostname, email, port, and MAC.
Data enrich/label:
In terms of IP address data, it will further be associated with GeoIP and WhoIS information.
Furthermore, it may also be labeled if it’s a known black IP address.
Correlation:
The correlation analyzes the relationship between some key characteristics such as IP, hostname,
DNS domain, file hash, email address, and threat knowledge bases.
Storage:
There are different kinds of data that will be stored: the raw data from the source, the data with
enriched information, the results of correlation, GeoIP mapping, and the threat knowledge base.
Alerts:
Triggers alerts if threats are identified or when specified alerting rules match.
Presentation/query:
Security dashboards for monitoring and queries, via Elasticsearch, a RESTful API, or a third-party SIEM.
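A minimal sketch of the normalization and enrich/label steps described above; the log format, field names, and blacklist are illustrative assumptions:

```python
import json
import re

# Illustrative: parse a raw SSH auth-log line into normalized JSON fields.
LOG_PATTERN = re.compile(
    r"Failed password for (?P<user>\S+) from (?P<ip>\S+) port (?P<port>\d+)"
)

KNOWN_BAD_IPS = {"203.0.113.66"}  # hypothetical threat-intel blacklist

def normalize(raw_line):
    match = LOG_PATTERN.search(raw_line)
    if not match:
        return None
    event = match.groupdict()
    # Enrich/label: flag the event if the source IP is on a blacklist.
    event["known_bad_ip"] = event["ip"] in KNOWN_BAD_IPS
    return json.dumps(event)

raw = "sshd[1042]: Failed password for root from 203.0.113.66 port 52344"
print(normalize(raw))
# {"user": "root", "ip": "203.0.113.66", "port": "52344", "known_bad_ip": true}
```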
Role of a security team in an organization
1- Security office under a CTO
• No dedicated Chief Security Officer (CSO)
• The security team may not be big—for example, under 10 members
• The security engineering team serves all projects based on their needs
• The key responsibility of the security engineering team is to provide security guidelines,
policies, checklists, templates, or training for all project teams
• It’s possible the security engineering team members may be allocated to a different project to
be subject matter experts based on the project’s needs
• Security engineering provides the guidelines, toolkits, and training, but it’s the project team
that takes on the main responsibility for daily security activity execution
2-Dedicated security team
• Security management: The team defines the security guidelines, process, policies,
templates, checklist, and requirements. The role of the security management team is the same
as the one previously discussed in the Security office under a CTO section.
• Security testing: The team performs in-house security testing before application release.
• Security engineering: The team provides a common security framework, architecture, SDK,
and API for a development team to use.
• Security monitoring: This is the security operation team, who monitor the security status for
all online services.
• Security services: This is the team that develops security services such as WAF and intrusion
defense services.
3- Security technical committee (taskforce)
The secure design taskforce will have a weekly meeting with all security representatives—from all
project teams— and security experts from the security team to discuss the following topics (not
an exhaustive list):
• Common secure design issues and mitigation (initiated by security team)
• Secure design patterns for a project to follow (initiated by security team)
• Secure design framework suggestions for projects (initiated by security team)
• Specific secure design issues raised by one project, looking for advice from other projects
(initiated by project team)
• Secure design review assessment for one project (initiated by project team)
Threats
TABLE OF CONTENTS
1 Threat Modeling
a Implementation
b Threat Matrix
c Tools
2 Threats
a Weak or stolen credentials
b Insecure authentication protocols
c Insufficient access controls
d Improper privilege escalation
e Data leakage or unauthorized access
f Insecure data storage
g Inadequate network segmentation
h Man-in-the-Middle attacks
i Resource exhaustion
j Distributed DoS (DDoS) attacks
k Misconfigured security settings
l Insecure default configurations
m Delayed patching of software
n Lack of vulnerability scanning
o Malicious or negligent insiders
p Unauthorized data access or theft
q Unauthorized physical access
r Theft or destruction of hardware
s Vulnerabilities in third-party components
t Lack of oversight on third-party activities
3 Threat detection
4 Indicators of compromises
a External source client IP
b Client fingerprint (OS, browser, user agent, devices, and so on)
c Web site reputation
d Random Domain Name by Domain Generation Algorithms (DGAs)
e Suspicious file downloads
f DNS query
Threat Modeling
Threat modeling is a process that helps identify and prioritize potential security threats to a
system or application. The goal of threat modeling is to identify security risks early in the
development process and proactively mitigate them, rather than waiting for vulnerabilities to be
discovered after deployment.
One popular method for conducting threat modeling is called STRIDE, which stands for Spoofing,
Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege.
These are the six types of security threats that can affect a system, and by considering each of
them in turn, a threat model can help identify potential vulnerabilities and attacks.
The STRIDE methodology is often used in combination with a diagram designer tool, such as
Microsoft’s Threat Modeling Tool or the open-source OWASP Threat Dragon. These tools allow
you to create a visual representation of the system or application you are analyzing, and to map
out potential threats and attack vectors.
The following table explains the six types of security threats in the STRIDE methodology:

| STRIDE Threat | Description |
|---|---|
| Spoofing | Impersonating a user, device, or system in order to gain unauthorized access or perform malicious actions. Examples include phishing attacks or using a fake SSL certificate to intercept data. |
| Tampering | Modifying data or code in transit or at rest, in order to introduce errors, gain unauthorized access, or perform other malicious actions. Examples include modifying the source code of an application or altering data in a database. |
| Repudiation | Denying or disavowing actions or events, in order to evade accountability or responsibility. Examples include denying that an action was taken, or that data was accessed. |
| Information Disclosure | Revealing confidential or sensitive information to unauthorized parties, whether intentionally or accidentally. Examples include disclosing passwords or user data, or exposing private keys. |
| Denial of Service | Disrupting or degrading the availability or functionality of a system or application, through network attacks, resource exhaustion, or other means. Examples include Distributed Denial of Service (DDoS) attacks or flooding a server with requests. |
| Elevation of Privilege | Gaining additional access or privileges beyond those that were initially granted, in order to perform unauthorized actions or escalate an attack. Examples include exploiting a software vulnerability to gain administrative access or using a social engineering technique to obtain sensitive information. |
Implementation
Step 1: Define the Scope
Identify the application or system within the DevSecOps pipeline that you want to perform threat
modeling for. For example, let’s consider a microservices-based application deployed using
containerization and managed by Kubernetes.
Step 2: Gather Information
Gather information about the application’s architecture, design, and deployment. This includes
understanding the components, their interactions, data flows, and external dependencies.
Step 3: Identify Threats and Assets
Identify the critical assets and sensitive data involved in the application. Consider both internal
and external threats that could compromise the security of these assets. For example:
• Unauthorized access to customer data stored in a database
• Injection attacks on APIs or containers
• Misconfiguration of Kubernetes resources leading to unauthorized access or privilege escalation
Step 4: Assess Vulnerabilities and Risks
Evaluate the architecture and design to identify potential vulnerabilities and risks associated with
the identified threats. Consider the security implications at each stage of the DevSecOps pipeline,
including development, testing, deployment, and operations. For example:
• Insecure container images containing known vulnerabilities
• Lack of proper access controls on Kubernetes resources
• Weak or outdated authentication mechanisms
Step 5: Prioritize and Mitigate Risks
Prioritize the risks based on their potential impact and likelihood of occurrence. Develop
mitigation strategies and recommendations to address each identified risk. Consider integrating
security controls and best practices into the DevSecOps pipeline. For example:
• Implementing automated vulnerability scanning and patch management for container images
• Applying secure configuration practices for Kubernetes resources
• Enforcing strong authentication and access controls at all stages of the pipeline
Step 6: Continuously Monitor and Improve
Incorporate threat modeling as an iterative process within the DevSecOps lifecycle. Regularly
review and update the threat model as the application evolves or new risks emerge. Continuously
monitor the system for potential threats and vulnerabilities.
Real-case Example:
In a DevSecOps context, consider a scenario where a development team is building a cloud-native
application using microservices architecture and deploying it on a container platform. The threat
modeling process could involve identifying risks such as:
• Insecure container images with vulnerabilities
• Weak authentication and authorization mechanisms
• Inadequate logging and monitoring for containerized applications
• Misconfiguration of cloud resources and access controls
• Insecure communication between microservices
• Injection attacks on API endpoints
Based on the identified risks, mitigation strategies could include:
• Implementing automated vulnerability scanning and image hardening for containers
• Applying strong authentication and authorization mechanisms, such as OAuth or JWT tokens
• Incorporating centralized logging and monitoring solutions for containerized applications
• Establishing proper cloud resource management and access control policies
• Encrypting communication channels between microservices
• Implementing input validation and security controls to prevent injection attacks
Threat Matrix
This matrix provides a starting point for identifying potential threats and corresponding
mitigations based on different categories.
| Threat Category | Threat Description | Potential Mitigation |
|---|---|---|
| Authentication | Weak or stolen credentials | Implement strong password policies, multi-factor authentication, and password hashing algorithms. |
| Authentication | Insecure authentication protocols | Use secure authentication protocols (e.g., TLS) and avoid transmitting credentials in plaintext. |
| Authorization | Insufficient access controls | Implement RBAC (Role-Based Access Control) and apply the principle of least privilege. |
| Authorization | Improper privilege escalation | Limit privilege escalation capabilities and regularly review user permissions. |
| Data Protection | Data leakage or unauthorized access | Encrypt sensitive data at rest and in transit, and implement proper access controls. |
| Data Protection | Insecure data storage | Follow secure coding practices for data storage, including encryption and secure key management. |
| Network Security | Inadequate network segmentation | Implement proper network segmentation using firewalls or network policies. |
| Network Security | Man-in-the-Middle attacks | Use encryption and certificate-based authentication for secure communication. |
| Denial-of-Service (DoS) | Resource exhaustion | Implement rate limiting, request validation, and monitoring for abnormal behavior. |
| Denial-of-Service (DoS) | Distributed DoS (DDoS) attacks | Employ DDoS mitigation techniques, such as traffic filtering and load balancing. |
| System Configuration | Misconfigured security settings | Apply secure configuration guidelines for all system components. |
| System Configuration | Insecure default configurations | Change default settings and remove or disable unnecessary services. |
| Vulnerability Management | Delayed patching of software | Establish a vulnerability management program with regular patching and updates. |
| Vulnerability Management | Lack of vulnerability scanning | Conduct regular vulnerability scans and prioritize remediation. |
| Insider Threats | Malicious or negligent insiders | Implement proper access controls, monitoring, and employee training programs. |
| Insider Threats | Unauthorized data access or theft | Monitor and log user activities and implement data loss prevention mechanisms. |
| Physical Security | Unauthorized physical access | Secure physical access to data centers, server rooms, and hardware components. |
| Physical Security | Theft or destruction of hardware | Implement physical security controls, such as locks, surveillance systems, and backups. |
| Third-Party Dependencies | Vulnerabilities in third-party components | Perform due diligence on third-party components, apply patches, and monitor security advisories. |
| Third-Party Dependencies | Lack of oversight on third-party activities | Establish strong vendor management practices, including audits and security assessments. |
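As a concrete instance of one mitigation above (password hashing algorithms in the Authentication row), here is a sketch using Python's standard-library PBKDF2; the iteration count is an illustrative choice:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a random salt using PBKDF2-HMAC-SHA256."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess", salt, digest))                         # False
```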
Tools
| Tool | Description |
|---|---|
| Microsoft Threat Modeling Tool | A free tool from Microsoft that helps in creating threat models for software systems. It provides a structured approach to identify, analyze, and mitigate potential threats. |
| OWASP Threat Dragon | An open-source threat modeling tool that enables the creation of threat models using the STRIDE methodology. It provides an intuitive interface and supports collaboration among team members. |
| PyTM | An open-source threat modeling tool specifically designed for web applications. It allows the modeling of various aspects of an application's architecture and helps in identifying potential threats. |
| ThreatModeler | A commercial tool that offers a comprehensive platform for threat modeling. It provides a visual modeling interface, automated threat analysis, and integration with other security tools and frameworks. |
| IriusRisk | A commercial tool that combines threat modeling with risk management. It supports multiple threat modeling methodologies, provides risk assessment capabilities, and offers integration with other tools and platforms. |
| TMT (Threat Modeling Tool) | An open-source command-line tool developed by OWASP for threat modeling. It supports the STRIDE methodology and allows for the automation of threat modeling processes. |
| Secure Code Warrior | While not a traditional threat modeling tool, it offers interactive training modules and challenges that can help developers understand and identify potential threats during the development process. |
Threats
Weak or stolen credentials
This code creates a threat model using PyTM and represents the “Weak or Stolen Credentials”
threat scenario. It includes actors such as “Attacker” and “Insider,” a server representing the
application server, and a datastore representing the user’s data.
The threat model defines the “Weak or Stolen Credentials” threat and includes attack paths such
as “Password Guessing/Brute Force Attack,” “Credential Theft,” and “Insider Threat.” It also
defines the impact of these threats, such as unauthorized access to user data and data breaches.
The code generates a threat model diagram in PNG format, named
“weak_or_stolen_credentials_threat_model.png.”
```python
from pytm import TM, Server, Datastore, Actor

# Create a new threat model
tm = TM("Weak or Stolen Credentials")

# Create actors
attacker = Actor("Attacker")
insider = Actor("Insider")

# Create the application server and the datastore holding the user's data
server = Server("Application Server")
datastore = Datastore("User Data")

# Define the threat, its attack paths (password guessing/brute force,
# credential theft, insider threat), and its impact (unauthorized access
# to user data, data breaches)
tm.add_threat()

# Generate the threat model diagram
tm.generate_diagram("weak_or_stolen_credentials_threat_model.png")
```

The same pattern applies to the other identity, access control, and data protection threat scenarios; only the actors, supporting elements, and the generated diagram change:

| Threat scenario | Actors | Generated diagram |
|---|---|---|
| Insecure authentication protocols | Attacker, User | insecure_authentication_protocols_threat_model.png |
| Insufficient access controls | Attacker, User | insufficient_access_controls_threat_model.png |
| Improper privilege escalation | Attacker, User (plus a server) | improper_privilege_escalation_threat_model.png |
| Data leakage or unauthorized access | Attacker, User (plus a datastore) | data_leakage_unauthorized_access_threat_model.png |
| Insecure data storage | Attacker, User (plus a datastore) | insecure_data_storage_threat_model.png |
| Inadequate network segmentation | Attacker (plus boundaries and dataflows) | inadequate_network_segmentation_threat_model.png |
Man-in-the-Middle attacks
This code creates a threat model using PyTM and represents the “Man-in-the-Middle (MitM)
Attacks” threat scenario. It includes actors such as “Attacker,” “Client,” and “Server,” and defines
boundaries for the client and server components.
The threat model defines the “Man-in-the-Middle Attacks” threat and includes a dataflow
representing the flow of sensitive data between the client and server.
The code generates a threat model diagram in PNG format, named
“man_in_the_middle_attacks_threat_model.png.”
```python
from pytm import TM, Actor, Dataflow, Boundary

# Create a new threat model
tm = TM("Man-in-the-Middle Attacks")

# Create actors
attacker = Actor("Attacker")
client = Actor("Client")
server = Actor("Server")

# Create boundaries for the client and server components
client_boundary = Boundary("Client")
server_boundary = Boundary("Server")

# Define the dataflow carrying sensitive data between client and server
sensitive_data = Dataflow(client, server, "Sensitive Data")

tm.add_threat()
tm.generate_diagram("man_in_the_middle_attacks_threat_model.png")
```
Resource exhaustion
This code creates a threat model using PyTM and represents the “Resource Exhaustion” threat
scenario. It includes actors such as “Attacker” and “Service” and defines a dataflow between
them.
The threat model defines the “Resource Exhaustion” threat and includes an attack path
representing the attacker’s ability to consume excessive resources, leading to service availability
impact.
The code generates a threat model diagram in PNG format, named
“resource_exhaustion_threat_model.png.”
```python
from pytm import TM, Actor, Dataflow

# Create a new threat model
tm = TM("Resource Exhaustion")

# Create actors
attacker = Actor("Attacker")
service = Actor("Service")

# Define the dataflow the attacker uses to flood the service
requests = Dataflow(attacker, service, "Requests")

tm.add_threat()
tm.threat.name("Resource Exhaustion")
tm.generate_diagram("resource_exhaustion_threat_model.png")
```
The remaining threat scenarios from the threat matrix follow the same PyTM pattern, changing only the actors, any boundary or datastore elements, and the generated diagram:

| Threat scenario | Actors | Generated diagram |
|---|---|---|
| Distributed DoS (DDoS) attacks | Attacker, Target | ddos_attacks_threat_model.png |
| Misconfigured security settings | Administrator, Attacker | misconfigured_security_settings_threat_model.png |
| Insecure default configurations | Administrator, Attacker | insecure_default_configurations_threat_model.png |
| Delayed patching of software | Administrator, Attacker | delayed_patching_threat_model.png |
| Lack of vulnerability scanning | Administrator, Attacker | lack_of_vulnerability_scanning_threat_model.png |
| Malicious or negligent insiders | Insider, Attacker | malicious_or_negligent_insiders_threat_model.png |
| Unauthorized data access or theft | Attacker, User (plus a boundary and datastore) | unauthorized_data_access_theft_threat_model.png |
| Unauthorized physical access | Attacker, User (plus a boundary and datastore) | unauthorized_physical_access_threat_model.png |
| Theft or destruction of hardware | Attacker, User (datastore: Hardware) | theft_destruction_hardware_threat_model.png |
| Vulnerabilities in third-party components | Attacker, User | third_party_component_vulnerabilities_threat_model.png |
| Lack of oversight on third-party activities | Attacker, User, Third-Party | lack_of_oversight_third_party_activities_threat_model.png |
Threat detection
To visualize the network threat status, there are two recommended open source tools: Malcom
and Maltrail (Malicious Traffic detection system). Malcom can present a host communication
relationship diagram. It helps us to understand whether there are any internal hosts connected to
an external suspicious C&C server or known bad sites:
https://github.com/tomchop/malcom#what-is-malcom
Indicators of compromises
An analysis of hosts for suspicious behaviors also poses a significant challenge due to the
availability of logs. For example, dynamic runtime information may not be logged in files and the
original process used to drop a suspicious file may not be recorded. Therefore, it is always
recommended to install a host IDS/IPS such as OSSEC (Open Source HIDS SEcurity) or host
antivirus software as the first line of defense against malware. Once the host IDS/IPS or antivirus
software is in place, threat intelligence and big data analysis are supplementary, helping us to
understand the overall host’s security posture and any known Indicators of Compromises (IoCs) in
existing host environments.
Based on the level of severity, the following are key behaviors that may indicate a compromised
host:
External source client IP
Source IP address analysis can help to identify the following:
• A known bad IP or TOR exit node
• Abnormal geolocation changes
• Concurrent connections from different geolocations
The MaxMind GeoIP2 database can be used to translate the IP address to a geolocation:
https://dev.maxmind.com/geoip/geoip2/geolite2/#Downloads
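A minimal lookup sketch with the `geoip2` Python package, assuming the GeoLite2-City database file has been downloaded locally:

```python
import geoip2.database

# Assumes ./GeoLite2-City.mmdb was downloaded from MaxMind.
with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
    response = reader.city("203.0.113.66")  # hypothetical client IP
    print(response.country.iso_code, response.city.name)
    # Comparing countries across a user's recent logins can surface
    # abnormal geolocation changes or concurrent distant connections.
```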
Client fingerprint (OS, browser, user agent, devices, and so on)
The client fingerprint can be used to identify whether there are any unusual client or non-browser
connections. The open source ClientJS is a pure JavaScript library that can be used to collect
client fingerprint information. JA3, provided by Salesforce, uses SSL/TLS connection profiling to
identify malicious clients.
• ClientJS: https://clientjs.org/
• JA3: https://github.com/salesforce/ja3
Web site reputation
When there is an outbound connection to an external website, we may check the threat reputation
of the target website. This can be done by means of web application firewall or web gateway
security solutions, or a reputation service such as VirusTotal: https://www.virustotal.com/
Random Domain Name by Domain Generation Algorithms (DGAs)
The domain name of the C&C server can be generated by DGAs. The key characteristics of the
DGA domain are high entropy, a high consonant count, and a long domain name. Based on these
indicators, we may analyze whether the domain name was generated by a DGA and could be a
potential C&C server (see the entropy sketch below). DGA Detector: https://github.com/exp0se/dga_detector/
In addition, in order to reduce false positives, we may also use Alexa's top one million sites as a
website whitelist. Refer to https://s3.amazonaws.com/alexa-static/top-1m.csv.zip.
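A minimal sketch of the entropy indicator; the 3.5-bit threshold is an illustrative assumption, and real detectors such as dga_detector combine several signals:

```python
import math
from collections import Counter

def shannon_entropy(domain: str) -> float:
    """Shannon entropy in bits per character of a domain label."""
    counts = Counter(domain)
    total = len(domain)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# DGA-generated names tend to have noticeably higher entropy.
print(shannon_entropy("google"))           # low, ~1.9
print(shannon_entropy("xj4k2pqz9vbn1mw"))  # high, ~3.9
if shannon_entropy("xj4k2pqz9vbn1mw") > 3.5:  # illustrative threshold
    print("possible DGA domain")
```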
Suspicious file downloads
Cuckoo sandbox suspicious file analysis: https://cuckoosandbox.org/
DNS query
In the case of DNS query analysis, the following are the key indicators of compromise:
• DNS queries to unauthorized DNS servers
• Unmatched DNS replies, which can be an indicator of DNS spoofing
• Clients connecting to multiple DNS servers
• A long DNS query, such as one in excess of 150 characters, which is an indicator of DNS tunneling
• A domain name with high entropy, which is an indicator of DNS tunneling or a C&C server
Code / SAST
SAST
SAST, or Static Application Security Testing, is a technique used in application security to analyze
the source code of an application for security vulnerabilities. SAST tools work by scanning the
source code of an application without actually executing the code, searching for common coding
errors, security flaws, and potential vulnerabilities.
SAST is a type of white-box testing, meaning that it relies on the tester having access to the
source code of the application being tested. This allows SAST tools to perform a thorough
analysis of the codebase, identifying potential vulnerabilities that may not be apparent through
other testing techniques.
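As a minimal illustration of what a SAST tool can flag purely from the source text, consider this hypothetical Python snippet; both findings correspond to common weakness classes (insecure deserialization and weak hashing):

```python
import hashlib
import pickle

def load_session(raw: bytes):
    # Flagged by SAST: deserializing untrusted data with pickle
    # can lead to arbitrary code execution.
    return pickle.loads(raw)

def token_digest(token: str) -> str:
    # Flagged by SAST: MD5 is a weak hash for security purposes.
    return hashlib.md5(token.encode()).hexdigest()
```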
| SAST Tool | Description | Languages Supported |
|---|---|---|
| Checkmarx | A SAST tool that analyzes source code for security vulnerabilities, providing real-time feedback to developers on potential issues. | Java, .NET, PHP, Python, Ruby, Swift, C/C++, Objective-C, Scala, Kotlin, JavaScript |
| SonarQube | A tool that provides continuous code inspection, identifying and reporting potential security vulnerabilities. | Over 25 programming languages |
Semgrep
Semgrep is designed to be fast and easy to use, and it supports multiple programming languages,
including Python, Java, JavaScript, Go, and more. It uses a simple pattern matching language to
identify patterns of code that are known to be vulnerable, and it can be configured to scan
specific parts of a codebase, such as a single file or a directory.
Semgrep can be used as part of the software development process to identify vulnerabilities early
on, before they can be exploited by attackers. It can be integrated into a CI/CD pipeline to
automatically scan code changes as they are made, and it can be used to enforce security
policies and coding standards across an organization.
Let's create a sample rule. Here are the steps:
1 Install and set up Semgrep: To use Semgrep, you need to install it on your system. You can
download Semgrep from the official website, or install it using a package manager like pip.
Once installed, you need to set up a project and configure the scan settings.
2 Create a new Semgrep rule: To create a new Semgrep rule, you need to write a YAML file that
defines the rule. The YAML file should contain the following information:
• The rule ID: This is a unique identifier for the rule.
• The rule name: This is a descriptive name for the rule.
• The rule description: This describes what the rule does and why it is important.
• The rule pattern: This is the pattern that Semgrep will use to search for the vulnerability.
• The rule severity: This is the severity level of the vulnerability (in Semgrep: ERROR, WARNING, or INFO).
• The rule language: This is the programming language that the rule applies to (e.g. Python,
Java, JavaScript).
• The rule tags: These are optional tags that can be used to categorize the rule.
Here is an example rule that checks for SQL injection vulnerabilities in Python code:
```yaml
rules:
  - id: sql-injection-py
    severity: ERROR        # Semgrep severities are ERROR/WARNING/INFO
    languages: [python]
    metadata:
      tags:
        - security
        - sql-injection
    patterns:
      - pattern: |
          db.execute("SELECT * FROM users WHERE username = '" + $USERNAME + "' AND password = '" + $PASSWORD + "'")
    message: |
      User input is concatenated directly into a SQL query. Use a
      parameterized query instead.
```
3 Run Semgrep with the new rule: Once you have created the new rule, you can run Semgrep to
scan your code. To run Semgrep, you need to specify the path to the code you want to scan
and the path to the YAML file that contains the rule. Here is an example command:
semgrep --config path/to/rule.yaml path/to/code/
4 Review the scan results: After the scan is complete, Semgrep will display the results in the
terminal. The results will include information about the vulnerabilities that were found,
including the severity level, the location in the code where the vulnerability was found, and the
code that triggered the rule.
Here is how to use Semgrep in a CI/CD pipeline on GitHub:
1 Set up Semgrep in your project: To use Semgrep in your CI/CD pipeline, you need to install it
and set it up in your project. You can do this by adding a semgrep.yml file to your project’s root
directory. The semgrep.yml file should contain the rules that you want to apply to your
codebase.
Here is an example semgrep.yml file that checks for SQL injection vulnerabilities in Python code:
```yaml
rules:
  - id: sql-injection-py
    severity: ERROR
    languages: [python]
    pattern: db.execute("SELECT * FROM users WHERE username = " + $USERNAME + " AND password = " + $PASSWORD)
    message: Potential SQL injection; use a parameterized query.
```
2 Create a GitHub workflow: Once you have set up Semgrep in your project, you need to create a
GitHub workflow that runs Semgrep as part of your CI/CD pipeline. To create a workflow, you
need to create a .github/workflows directory in your project and add a YAML file that defines
the workflow.
Here is an example semgrep.yml workflow that runs Semgrep on every push to the master branch:
```yaml
name: Semgrep
on:
  push:
    branches:
      - master
jobs:
  semgrep:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: returntocorp/semgrep-action@v1
        with:
          args: -c semgrep.yml
```
3 Push changes to GitHub: Once you have created the workflow, you need to push the changes
to your GitHub repository. This will trigger the workflow to run Semgrep on your codebase.
4 Review the results: After the workflow has completed, you can review the results in the GitHub
Actions tab. The results will include information about the vulnerabilities that were found,
including the severity level, the location in the code where the vulnerability was found, and the
code that triggered the rule.
CodeQL
CodeQL is based on a database of semantic code representations that allows it to perform
complex analysis on code that other static analysis tools may miss. It supports a wide range of
programming languages, including C, C++, C#, Java, JavaScript, Python, and more. CodeQL can
be used to analyze both open source and proprietary code, and it can be used by both developers
and security researchers.
To use CodeQL, developers write queries in a dedicated query language called QL. QL is a
declarative language that allows developers to express complex analyses in a concise and
understandable way. Queries can be written to check for a wide range of issues, such as buffer
overflows, SQL injection vulnerabilities, race conditions, and more.
CodeQL can be integrated into a variety of development tools, such as IDEs, code review tools,
and CI/CD pipelines. This allows developers to run CodeQL automatically as part of their
development process and catch issues early in the development cycle.
Here is an example of how to create a CodeQL rule and run it:
1 Identify the issue: Let’s say we want to create a CodeQL rule to detect SQL injection
vulnerabilities in a Java web application.
2 Write the query: To write the query, we can use the CodeQL libraries for Java and the CodeQL
built-in functions for detecting SQL injection vulnerabilities. Here is an example query:
```ql
import java

from MethodAccess call
where
  call.getMethod().hasName("executeQuery") and
  not call.getArgument(0) instanceof CompileTimeConstantExpr
select call, "Potential SQL injection: non-constant query string passed to executeQuery."
```
This simplified query looks for calls to the executeQuery method whose string argument is not a
compile-time constant and may therefore be tainted with user input, which could lead to a SQL
injection vulnerability. If a potential vulnerability is detected, the query returns the call and a
message indicating the potential vulnerability. (A production-grade query would use CodeQL's
taint-tracking libraries to confirm the data flow from a user-controlled source.)
3 Test the query: To test the query, we can run it against a small sample of our codebase using
the CodeQL CLI tool. Here is an example command:
$ codeql query run --database=MyAppDB --format=csv --output=results.csv path/to/query.ql
This command runs the query against a CodeQL database named MyAppDB and outputs the
results to a CSV file named results.csv.
4 Integrate the query: To integrate the query into our development process, we can add it to our
CodeQL database and run it automatically as part of our CI/CD pipeline. This can be done
using the CodeQL CLI tool and the CodeQL GitHub Action.
Here is an example command to add the query to our CodeQL database:
$ codeql database analyze MyAppDB --queries=path/to/query.ql
And here is an example GitHub Action workflow to run the query automatically on every push to
the master branch:
```yaml
name: CodeQL
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Standard CodeQL action steps; the language list is illustrative.
      - uses: github/codeql-action/init@v2
        with:
          languages: java
      - uses: github/codeql-action/autobuild@v2
      - uses: github/codeql-action/analyze@v2
```
Code / SCA
SCA
SCA stands for Software Composition Analysis. It is a type of application security testing that
focuses on identifying and managing third-party components and dependencies used within an
application. SCA tools scan an application’s codebase and build artifacts to identify any third-
party libraries or components, and then assess those components for known security
vulnerabilities or other issues.
The SCA process typically involves the following steps:
1 Discovery: The SCA tool scans the application’s codebase and build artifacts to identify any
third-party libraries or components used within the application.
2 Inventory: The SCA tool creates an inventory of all the third-party components and libraries
used within the application, including their versions, license types, and any known security
vulnerabilities or issues.
3 Assessment: The SCA tool assesses each component in the inventory for known security
vulnerabilities or other issues, using sources such as the National Vulnerability Database
(NVD) and Common Vulnerabilities and Exposures (CVE) databases.
4 Remediation: Based on the results of the assessment, the SCA tool may provide
recommendations for remediation, such as upgrading to a newer version of a component, or
switching to an alternative component that is more secure.
By performing SCA, organizations can gain visibility into the third-party components and libraries
used within their applications, and can proactively manage any security vulnerabilities or issues
associated with those components. This can help to improve the overall security and resilience of
the application.
SCA tools work by scanning your codebase and identifying the open source components that are
used in your application. They then compare this list against known vulnerabilities in their
database and alert you if any vulnerabilities are found. This helps you to manage your open source
components and ensure that you are not using any vulnerable components in your application.
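The core of that lookup can be sketched in a few lines against the public OSV vulnerability database (https://osv.dev); the package name and version below are illustrative:

```python
import json
import urllib.request

def query_osv(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Query the OSV API for known vulnerabilities in a package version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

# Illustrative check of a single pinned dependency.
for vuln in query_osv("jinja2", "2.11.2"):
    print(vuln["id"], vuln.get("summary", ""))
```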
| SCA Tool | Description | Languages Supported |
|---|---|---|
| Sonatype Nexus Lifecycle | A software supply chain automation and management tool | Java, .NET, Ruby, JavaScript, Python, Go, PHP, Swift |
| Black Duck | An open source security and license compliance management tool | Over 20 languages including Java, .NET, Python, Ruby, JavaScript, PHP |
| WhiteSource | A cloud-based open source security and license compliance management tool | Over 30 languages including Java, .NET, Python, Ruby, JavaScript, PHP |
| Snyk | A developer-first security and dependency management tool | Over 40 languages including Java, .NET, Python, Ruby, JavaScript, PHP, Go |
| FOSSA | A software development tool that automates open source license compliance and vulnerability management | Over 30 languages including Java, .NET, Python, Ruby, JavaScript, PHP |
Here is an example of integrating an SCA tool into a GitHub Actions pipeline:

```yaml
on:
  push:
    branches: [ master ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: |
          npm install
          npm test
      - uses: snyk/actions@v1
        with:
          file: package.json
          args: --severity-threshold=high
      - run: deploy.sh
```
In this example, the SCA tool is integrated into the pipeline using the Snyk GitHub Action. The tool
is configured to scan the package.json file and report any vulnerabilities with a severity threshold
of “high”. If any vulnerabilities are identified, the pipeline will fail and the developer will be notified
to take action.
Secure Pipeline
A secure pipeline is a set of processes and tools used to build, test, and deploy software in a way
that prioritizes security at every stage of the development lifecycle. The goal of a secure pipeline
is to ensure that applications are thoroughly tested for security vulnerabilities and compliance
with security standards before they are released into production.
A secure pipeline typically involves the following stages:
1 Source Code Management: Developers use source code management tools, such as Git or
SVN, to manage the code for the application.
2 Build: The application code is built into executable code using a build tool, such as Maven or
Gradle.
3 Static Analysis: A static analysis tool, such as a SAST tool, is used to scan the code for
security vulnerabilities.
4 Unit Testing: Developers write unit tests to ensure that the application functions as expected
and to catch any bugs or errors.
5 Dynamic Analysis: A dynamic analysis tool, such as a DAST tool, is used to test the application
in a running environment and identify any security vulnerabilities.
6 Artifact Repository: The application and all its dependencies are stored in an artifact
repository, such as JFrog or Nexus.
7 Staging Environment: The application is deployed to a staging environment for further testing
and validation.
8 Compliance Check: A compliance tool is used to check that the application meets any
regulatory or compliance requirements.
9 Approval: The application is reviewed and approved for deployment to production.
10 Deployment: The application is deployed to production using a deployment tool, such as
Ansible or Kubernetes.
By implementing a secure pipeline, organizations can ensure that their applications are thoroughly
tested for security vulnerabilities and compliance with security standards, reducing the risk of
security breaches and ensuring that applications are more resilient to attacks.
Step 1: Set up version control
• Use a version control system (VCS) such as Git to manage your application code.
• Store your code in a private repository and limit access to authorized users.
• Use strong authentication and authorization controls to secure access to your repository.
Step 2: Implement continuous integration
• Use a continuous integration (CI) tool such as Jenkins or Travis CI to automate your build
process.
• Ensure that your CI tool is running in a secure environment.
• Use containerization to isolate your build environment and prevent dependencies from
conflicting with each other.
Step 3: Perform automated security testing
• Use SAST, DAST, and SCA tools to perform automated security testing on your application
code.
• Integrate these tools into your CI pipeline so that security testing is performed automatically
with each build.
• Configure the tools to report any security issues and fail the build if critical vulnerabilities are
found.
Step 4: Implement continuous deployment
• Use a continuous deployment (CD) tool such as Kubernetes or AWS CodeDeploy to automate
your deployment process.
• Implement a release process that includes thorough testing and review to ensure that only
secure and stable code is deployed.
Step 5: Monitor and respond to security threats
• Implement security monitoring tools to detect and respond to security threats in real-time.
• Use tools such as intrusion detection systems (IDS) and security information and event
management (SIEM) systems to monitor your infrastructure and applications.
• Implement a security incident response plan to quickly respond to any security incidents that
are detected.
example of a secure CI/CD pipeline
# Define the pipeline stages: build, test, security-test, deploy
# (GitHub Actions has no "stages" keyword, so the ordering is enforced with needs:)
name: secure-pipeline
on:
  push:
    branches: [ master ]
jobs:
  build:
    # Build the Docker image and tag it with the commit SHA
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build image
        run: |
          docker build -t myapp:${{ github.sha }} .
  test:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Placeholder test commands for a Node.js application
      - name: Run tests
        run: |
          npm install
          npm test
  security-test:
    # Perform automated security testing using SAST, DAST, and SCA tools
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: SAST scan
        uses: shiftleftio/action-sast@v3.3.1
        with:
          scan-targets: .
          shiftleft-org-id: ${{ secrets.SHIFTLEFT_ORG_ID }}
          shiftleft-api-key: ${{ secrets.SHIFTLEFT_API_KEY }}
      - name: Container image scan
        uses: aquasecurity/trivy-action@v0.5.0
        with:
          image-ref: myapp:${{ github.sha }}
      - name: SCA scan
        uses: snyk/actions@v1
        with:
          file: package.json
          args: --severity-threshold=high
  deploy:
    needs: security-test
    runs-on: ubuntu-latest
    steps:
      - name: Deploy over SSH
        uses: appleboy/ssh-action@master
        with:
          host: production-server.example.com
          username: ${{ secrets.SSH_USERNAME }}  # secret names are placeholders
          password: ${{ secrets.SSH_PASSWORD }}
          script: |
            ./deploy.sh
In this example, the build job builds the Docker image and tags it with the commit SHA, the test job runs the application's tests, the security-test job performs automated security testing using SAST, DAST, and SCA tools, and the deploy job deploys the application to the production environment. Each job is defined with a runs-on parameter that specifies the operating system image the job should run on. The steps for each job are defined with name and run parameters that specify the name of the step and the command to run, while the uses parameter pulls in an external action, pinned to a version, branch, or tag. Secrets are stored in the GitHub repository's secrets store and accessed using the ${{ secrets.NAME }} syntax.
Artifacts
Artifacts are typically created during the build and deployment process, and are stored in a
repository or other storage location so that they can be easily retrieved and deployed as needed.
There are a number of methods that can be used to save artifacts in a DevSecOps environment,
including:
1 Build Artifacts: Build artifacts are created during the build process and include compiled code,
libraries, and other files that are needed to deploy and run the application. These artifacts can
be saved in a repository or other storage location for later use.
2 Container Images: Container images are a type of artifact that contain everything needed to
run the application, including the code, runtime, and dependencies. These images can be
saved in a container registry or other storage location and can be easily deployed to any
environment that supports containers.
3 Infrastructure as Code (IaC) Artifacts: IaC artifacts are created as part of the configuration
management process and include scripts, templates, and other files that are used to define
and manage the infrastructure of the application. These artifacts can be stored in a repository
or other storage location and can be used to deploy the infrastructure to any environment.
4 Test Artifacts: Test artifacts include test scripts, test results, and other files that are created as
part of the testing process. These artifacts can be stored in a repository or other storage
location for later reference and analysis.
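As a simple illustration of saving a build artifact, the sketch below uploads a file to an artifact repository over authenticated HTTP; the repository URL, path, and credentials are placeholders (a Nexus raw repository, for example, accepts plain HTTP PUT uploads).

# Sketch: upload a build artifact to an artifact repository over HTTP.
# Repository URL, artifact path, and credentials are placeholders.
import os
import requests

def upload_artifact(file_path, repo_url, user, password):
    artifact_name = os.path.basename(file_path)
    with open(file_path, "rb") as artifact:
        resp = requests.put(f"{repo_url}/{artifact_name}", data=artifact,
                            auth=(user, password))
    resp.raise_for_status()
    print(f"Uploaded {artifact_name} to {repo_url}")

if __name__ == "__main__":
    upload_artifact("target/myapp-1.0.jar",
                    "https://repo.example.com/repository/raw-artifacts",
                    os.environ.get("REPO_USER", ""),
                    os.environ.get("REPO_PASS", ""))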
Checklist for developing an artifact in DevSecOps
1- Create a secure development environment:
• Set up a development environment that is separate from production.
• Use version control to track changes to the source code.
• Use secrets management tools to store sensitive information like API keys and passwords.
2- Implement security testing into the development process:
• Use static analysis security testing (SAST) tools to analyze the source code for vulnerabilities.
• Use dynamic application security testing (DAST) tools to test the application in a real-world
environment.
• Use interactive application security testing (IAST) tools to detect vulnerabilities in real-time
during testing.
3- Automate the build process:
• Use build automation tools like Maven or Gradle to compile the source code and build the artifact.
• Include security testing tools in the build process.
4- Automate deployment:
• Use configuration management tools like Ansible or Chef to automate deployment of the
artifact.
• Use infrastructure-as-code tools like Terraform or CloudFormation to automate the creation
and management of infrastructure.
5- Implement continuous integration/continuous delivery (CI/CD) practices:
• Use a CI/CD pipeline to automate the entire development process.
• Use tools like Jenkins or CircleCI to manage the pipeline and run tests automatically.
Configuration Management
Configuration management is the process of managing and maintaining the configuration of an
application or system in a consistent and reliable manner. In a DevSecOps environment,
configuration management is an important component of ensuring that applications are secure
and reliable. Here are some common tools and practices used in configuration management in
DevSecOps:
1 Infrastructure as Code (IaC): IaC is a practice that involves writing code to define and manage
the infrastructure and configuration of an application or system. This approach provides a
more automated and repeatable way of managing configurations, and helps to ensure that the
infrastructure is consistent across different environments.
2 Configuration Management Tools: There are a number of configuration management tools that
can be used to manage configurations in a DevSecOps environment. Some popular examples
include Ansible, Chef, Puppet, and SaltStack.
3 Version Control: Version control systems like Git can be used to manage changes to
configurations over time, making it easier to track changes and roll back to previous
configurations if necessary.
4 Continuous Integration and Deployment (CI/CD): CI/CD pipelines can be used to automate the
deployment and configuration of applications in a DevSecOps environment. This can help to
ensure that configurations are consistent and up-to-date across different environments.
5 Security Configuration Management: Security configuration management involves ensuring
that the configurations of applications and systems are secure and meet industry standards
and best practices. This can include configuring firewalls, encryption, access controls, and
other security measures.
To achieve this, you can use a configuration management tool like Ansible or Puppet to manage
the configuration of the system. Here’s a high-level overview of how this might work:
1 Define the configuration: You define the configuration of the system in a configuration file or
script. This includes things like the software packages to be installed, the network settings, the
user accounts, and any other system settings.
2 Version control: You use version control tools like Git to track changes to the configuration file,
and to maintain a history of changes.
3 Continuous integration and deployment: You use a CI/CD pipeline to build and test the
application, and to deploy the containers to the different environments. The configuration
management tool is integrated into the pipeline, so that any changes to the configuration are
automatically applied to the containers as they are deployed.
4 Automation: The configuration management tool automates the process of configuring the
system, so that the same configuration is applied consistently across all environments. This
reduces the risk of configuration errors and makes it easier to maintain the system.
5 Monitoring and reporting: The configuration management tool provides monitoring and
reporting capabilities, so that you can track the status of the system and identify any issues or
errors.
Ansible
ANSIBLE PLAYBOOKS
Playbooks are the heart of Ansible, and define the configuration steps for your infrastructure.
# playbook.yml
- hosts: web_servers
  tasks:
    - name: Install the latest Apache package
      apt:
        name: apache2
        state: latest
    - name: Ensure Apache is running
      service:
        name: apache2
        state: started
ANSIBLE VARIABLES
# playbook.yml
- hosts: web_servers
  vars:
    http_port: 80   # referenced inside the template as {{ http_port }}
  tasks:
    - name: Install the latest Apache package
      apt:
        name: apache2
        state: latest
    - name: Render the Apache configuration from a template
      template:
        src: apache.conf.j2
        dest: /etc/apache2/apache.conf
Ansible Vault
Vault allows you to encrypt sensitive data, like passwords and API keys.
$ ansible-vault create secrets.yml
# secrets.yml
api_key: ABCDEFGHIJKLMNOPQRSTUVWXYZ
DAST
DAST stands for Dynamic Application Security Testing. It is a type of application security testing
that involves testing an application in a running state to identify security vulnerabilities that may
be present.
DAST tools work by interacting with an application in much the same way as a user would, by
sending HTTP requests to the application and analyzing the responses that are received. This
allows DAST tools to identify vulnerabilities that may be present in the application’s logic,
configuration, or architecture.
Here are some key features of DAST:
1- Realistic testing: DAST provides a more realistic testing environment than SAST because it
tests the application in a running state, simulating how an attacker would interact with it.
2- Automation: DAST tools can be automated to provide continuous testing, allowing for faster
feedback on vulnerabilities.
3- Scalability: DAST tools can be scaled to test large and complex applications, making them
suitable for enterprise-level testing.
4- Coverage: DAST tools can provide coverage for a wide range of security vulnerabilities,
including those that may be difficult to detect through other forms of testing.
5- Ease of use: DAST tools are typically easy to use and require minimal setup, making them
accessible to developers and security teams.
DAST Tool | Description
OWASP ZAP | an open-source web application security scanner
Burp Suite | a web application security testing toolkit
Assuming we have a web application that we want to test for security vulnerabilities using DAST,
we can use OWASP ZAP, an open-source web application security scanner, in our pipeline.
1- First, we need to install OWASP ZAP and configure it with our web application. This can be
done by running the following commands in the pipeline:
- name: Install OWASP ZAP
  run: |
    wget https://github.com/zaproxy/zaproxy/releases/download/v2.10.0/ZAP_2.10.0_Core.zip
    unzip ZAP_2.10.0_Core.zip -d zap
2- Next, we need to run the security scan using OWASP ZAP. This can be done by running the
following command in the pipeline:
- name: Run OWASP ZAP scan
  run: |
    # the target URL is a placeholder for the deployed application under test
    zap/zap-cli.py -p 8080 spider http://localhost:8080
    zap/zap-cli.py -p 8080 active-scan http://localhost:8080
This will start the OWASP ZAP spider to crawl the web application and then run an active scan to
identify security vulnerabilities.
3- Finally, we need to generate a report of the security scan results. This can be done by running
the following command in the pipeline:
- name: Generate OWASP ZAP report
run: |
zap/zap-cli.py -p 8080 report -o zap-report.html -f html
This will generate an HTML report of the security scan results that can be reviewed and acted
upon.
IAST
IAST stands for Interactive Application Security Testing. It is a type of application security testing
that combines the benefits of SAST (Static Application Security Testing) and DAST (Dynamic
Application Security Testing) tools.
IAST tools are designed to be integrated into the application being tested, and work by
instrumenting the application’s code to provide real-time feedback on any security vulnerabilities
that are identified during runtime. This allows IAST tools to detect vulnerabilities that may not be
visible through other forms of testing, such as those that are introduced by the application’s
configuration or environment.
Here are some key features of IAST:
1 Real-time feedback: IAST tools provide real-time feedback on security vulnerabilities as they
are identified during runtime, allowing developers to fix them as they are found.
2 Accuracy: IAST tools have a high degree of accuracy because they are able to detect
vulnerabilities in the context of the application’s runtime environment.
3 Low false positive rate: IAST tools have a low false positive rate because they are able to
distinguish between actual vulnerabilities and benign code.
4 Integration: IAST tools can be integrated into the development process, allowing developers to
incorporate security testing into their workflows.
5 Automation: IAST tools can be automated, allowing for continuous testing and faster feedback
on vulnerabilities.
6 Coverage: IAST tools can provide coverage for a wide range of security vulnerabilities,
including those that may be difficult to detect through other forms of testing.
IAST Tool | Description
Contrast Security | an IAST tool that automatically identifies and tracks vulnerabilities in real-time during the software development process. It can be integrated into a CI/CD pipeline to provide continuous monitoring and protection.
Hdiv Security | an IAST solution that detects and prevents attacks by monitoring the runtime behavior of applications. It provides detailed insights into vulnerabilities and generates reports for developers and security teams.
RIPS Technologies | a security testing tool that combines IAST with SAST (Static Application Security Testing) to provide comprehensive security analysis of web applications. It supports multiple programming languages and frameworks.
Acunetix | a web application security tool that offers IAST capabilities for detecting vulnerabilities in real-time. It provides detailed reports and integrates with CI/CD pipelines to automate the security testing process.
AppSecEngineer | an open-source IAST tool for detecting and preventing security vulnerabilities in web applications. It integrates with popular web frameworks such as Spring, Django, and Ruby on Rails, and provides detailed reports of vulnerabilities and attack attempts.
Example of adding an IAST stage to a GitLab CI pipeline:

stages:
  - build
  - test
  - iast
  - deploy

build:
  stage: build
  script:
    - mvn package   # placeholder build command for a Java application

test:
  stage: test
  script:
    - mvn test

iast:
  stage: iast
  image: contrastsecurity/contrast-agent
  script:
    # agent and application jar paths are placeholders
    - java -javaagent:/opt/contrast/contrast.jar -jar target/app.jar
  allow_failure: true

deploy:
  stage: deploy
  script:
    - mvn deploy
  only:
    - master
In this pipeline, the IAST stage is added after the test stage. The script in the IAST stage starts the Contrast Security agent by passing the -javaagent option to the java command, and then starts the application with java -jar. The agent will monitor the running application for security vulnerabilities.
Smoke Test
Smoke tests are typically conducted on a small subset of the application’s functionality, and are
designed to be quick and easy to execute. They may include basic checks such as verifying that
the application can be launched, that key features are functional, and that data is being processed
correctly. If the smoke test passes, the application can be considered ready for further testing.
Example commands for performing smoke tests in DevSecOps:
HTTP requests:
• Use tools like cURL or HTTPie to make HTTP requests to the application’s endpoints and verify
that they return the expected responses.
• For example, you might run a command like curl http://localhost:8080/api/health to check the application's health endpoint.
Scripted tests:
• Use testing frameworks like Selenium or Puppeteer to automate browser-based tests and
verify that the application’s UI is working correctly.
• For example, you might create a script using Puppeteer that logs in to the application and
verifies that the user profile page is displayed correctly.
Unit tests:
• Use unit testing frameworks like JUnit or NUnit to test individual functions and methods in the
application.
• For example, you might run a command like mvn test to run all of the unit tests in a Java
application.
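Putting these together, a smoke test can be a short script that probes a few critical endpoints and fails fast; the sketch below assumes a hypothetical /api/health endpoint and login page on a locally deployed build.

# Minimal smoke-test sketch: probe critical endpoints and exit non-zero on
# the first failure. Base URL and paths are placeholders.
import sys
import requests

BASE_URL = "http://localhost:8080"
CHECKS = ["/api/health", "/login"]

def smoke_test():
    for path in CHECKS:
        try:
            resp = requests.get(BASE_URL + path, timeout=5)
        except requests.RequestException as exc:
            print(f"FAIL {path}: {exc}")
            return 1
        if resp.status_code != 200:
            print(f"FAIL {path}: HTTP {resp.status_code}")
            return 1
        print(f"OK   {path}")
    return 0

if __name__ == "__main__":
    sys.exit(smoke_test())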
Cloud Scanning
Cloud scanning in production DevSecOps refers to the process of continuously scanning the
production environment of an application deployed on cloud infrastructure for potential security
vulnerabilities and threats. This is done to ensure that the application remains secure and
compliant with security policies and standards even after it has been deployed to the cloud.
Cloud scanning tools can perform a variety of security scans on the production environment,
including vulnerability scanning, penetration testing, and compliance auditing. These tools can
help to identify security issues in real-time and provide alerts and notifications to the security
team.
Some of the benefits of cloud scanning in production DevSecOps include:
1 Real-time security monitoring: Cloud scanning enables security teams to monitor the
production environment in real-time, providing early detection and response to potential
security threats.
2 Automated security checks: Cloud scanning tools can be integrated into the DevOps pipeline
to perform automated security checks on the production environment, enabling teams to
catch security issues early in the development cycle.
3 Improved compliance: Cloud scanning tools can help to ensure that the application remains
compliant with industry standards and regulations by continuously monitoring the production
environment for compliance violations.
4 Reduced risk: Cloud scanning can help to reduce the risk of security breaches and other
security incidents by detecting and addressing potential vulnerabilities in the production
environment.
AWS Inspector
A tool that analyzes the behavior and configuration of AWS resources for potential security issues.
aws inspector start-assessment-run --assessment-template-arn arn:aws:inspector:us-west-2:12345678
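The same assessment can be automated from Python with boto3, for example in a scheduled job; the template ARN below is a placeholder, and in practice you would wait for the run to finish before listing findings.

# Sketch: start an Amazon Inspector (Classic) assessment run and list the
# resulting findings with boto3. The template ARN is a placeholder.
import boto3

inspector = boto3.client("inspector", region_name="us-west-2")

run = inspector.start_assessment_run(
    assessmentTemplateArn="arn:aws:inspector:us-west-2:123456789012:target/0-example/template/0-example",
    assessmentRunName="nightly-scan",
)

# A real job would poll until the run completes before listing findings
findings = inspector.list_findings(assessmentRunArns=[run["assessmentRunArn"]])
for finding_arn in findings["findingArns"]:
    print(finding_arn)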
CloudPassage Halo
A tool that provides visibility, security, and compliance across your entire cloud infrastructure.
curl -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" -X POST https://api.cl
Infrastructure Scanning
Infrastructure scanning in production DevSecOps refers to the process of continuously scanning
the underlying infrastructure of an application deployed on cloud infrastructure for potential
security vulnerabilities and threats. This is done to ensure that the infrastructure remains secure
and compliant with security policies and standards even after it has been deployed to the cloud.
Nessus
A tool that scans your network for vulnerabilities and provides detailed reports.
nessuscli scan new --policy "Basic Network Scan" --target "192.168.1.1"
OpenVAS
An open-source vulnerability scanner that provides detailed reports and supports a wide range of
platforms.
omp -u admin -w password -G "Full and fast" -T 192.168.1.1
Qualys
A cloud-based security and compliance tool that provides continuous monitoring and detailed
reporting.
curl -H "X-Requested-With: Curl" -u "username:password" "https://qualysapi.qualys.com/api/2.0/fo/
Security Onion
A Linux distro for intrusion detection, network security monitoring, and log management.
sudo so-import-pcap -r 2022-01-01 -c example.pcap
Lynis
A tool for auditing security on Unix-based systems that performs a system scan and provides
detailed reports.
sudo lynis audit system
Nuclei
A fast and customizable vulnerability scanner that supports a wide range of platforms and
technologies.
nuclei -u http://example.com -t cves/CVE-2021-1234.yaml
Nuclei Templates
A collection of templates for Nuclei that cover a wide range of vulnerabilities and
misconfigurations.
nuclei -u http://example.com -t cves/ -max-time 5m
Secret Management
Secret management refers to the process of securely storing, managing, and accessing sensitive
information, such as passwords, API keys, and other credentials. Secrets are a critical component
of modern applications, and their secure management is essential to ensure the security and
integrity of the application.
Secret management typically involves the use of specialized tools and technologies that provide a
secure and centralized location for storing and managing secrets. These tools often use strong
encryption and access control mechanisms to protect sensitive information from unauthorized
access.
Some of the key features of secret management tools include:
1 Secure storage: Secret management tools provide a secure location for storing sensitive
information, typically using strong encryption and access control mechanisms to ensure that
only authorized users can access the information.
2 Access control: Secret management tools allow administrators to define access control
policies and roles that govern who can access specific secrets and what actions they can
perform.
3 Auditing and monitoring: Secret management tools provide auditing and monitoring
capabilities that allow administrators to track who accessed specific secrets and when,
providing an audit trail for compliance and security purposes.
4 Integration with other tools: Secret management tools can be integrated with other DevOps
tools, such as build servers, deployment tools, and orchestration frameworks, to provide
seamless access to secrets during the application lifecycle.
Hashicorp Vault
A highly secure and scalable secret management solution that supports a wide range of
authentication methods and storage backends.
vault kv put secret/myapp/config username="admin" password="s3cret" API_key="123456789"
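The application-side counterpart is reading that secret at runtime instead of hard-coding it; a minimal sketch using the hvac Python client, where the Vault address and token source are placeholders:

# Sketch: read the secret written above from Vault's KV v2 engine with hvac.
# The Vault address is a placeholder; in production the token would come from
# a proper auth method rather than an environment variable.
import os
import hvac

client = hvac.Client(url="http://127.0.0.1:8200", token=os.environ["VAULT_TOKEN"])

secret = client.secrets.kv.v2.read_secret_version(path="myapp/config")
config = secret["data"]["data"]
print(config["username"])  # "admin" from the example above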
Git-crypt
A command-line tool that allows you to encrypt files and directories within a Git repository.
git-crypt init && git-crypt add-gpg-user user@example.com
Blackbox
A command-line tool that allows you to store and manage secrets in Git repositories using GPG
encryption.
blackbox_initialize && blackbox_register_new_file secrets.txt
Threat Intelligence
Threat intelligence is the process of gathering and analyzing information about potential and
existing cybersecurity threats, such as malware, phishing attacks, and data breaches. The goal of
threat intelligence is to provide organizations with actionable insights that can help them identify
and mitigate potential security risks before they can cause harm.
In the context of DevSecOps, threat intelligence is an important component of a comprehensive
security strategy. By gathering and analyzing information about potential security threats,
organizations can better understand the security risks that they face and take steps to mitigate
them. This can include implementing security controls and countermeasures, such as firewalls,
intrusion detection systems, and security information and event management (SIEM) systems, to
protect against known threats.
Threat intelligence can also be used to enhance other DevSecOps practices, such as vulnerability
management and incident response. By identifying potential vulnerabilities and threats in real-
time, security teams can take swift action to remediate issues and prevent security incidents from
occurring.
Some of the key benefits of threat intelligence in DevSecOps include:
1 Improved threat detection: Threat intelligence provides organizations with the information they
need to detect potential security threats before they can cause harm.
2 Better decision-making: By providing actionable insights, threat intelligence helps
organizations make informed decisions about their security posture and response to potential
threats.
3 Proactive threat mitigation: Threat intelligence enables organizations to take a proactive
approach to threat mitigation, allowing them to stay ahead of emerging threats and reduce
their risk of being compromised.
4 Enhanced incident response: Threat intelligence can be used to enhance incident response,
allowing organizations to quickly and effectively respond to security incidents and minimize
their impact.
Shodan
A search engine for internet-connected devices that allows you to identify potential attack
surfaces and vulnerabilities in your network.
shodan scan submit --filename scan.json "port:22"
VirusTotal
A threat intelligence platform that allows you to analyze files and URLs for potential threats and
malware.
curl --request POST --url 'https://www.virustotal.com/api/v3/urls' --header 'x-apikey: YOUR_API_K
ThreatConnect
A threat intelligence platform that allows you to collect, analyze, and share threat intelligence with
your team and community.
curl -H "Content-Type: application/json" -X POST -d '{"name": "Example Threat Intel", "descriptio
MISP
An open-source threat intelligence platform that allows you to collect, store, and share threat
intelligence with your team and community.
curl -X POST 'http://misp.local/events/restSearch' -H 'Authorization: YOUR_API_KEY' -H 'Content-T
Vulnerability Assessment
Vulnerability assessment is the process of identifying and quantifying security vulnerabilities in an
organization’s IT systems, applications, and infrastructure. The goal of vulnerability assessment is
to provide organizations with a comprehensive view of their security posture, allowing them to
identify and prioritize security risks and take steps to remediate them.
In the context of DevSecOps, vulnerability assessment is a critical component of a comprehensive
security strategy. By regularly scanning for vulnerabilities and identifying potential security risks,
organizations can take proactive steps to secure their applications and infrastructure.
Some of the key benefits of vulnerability assessment in DevSecOps include:
1 Early detection of vulnerabilities: By regularly scanning for vulnerabilities, organizations can
detect potential security risks early on, allowing them to take swift action to remediate them.
2 Improved risk management: Vulnerability assessments provide organizations with a
comprehensive view of their security posture, allowing them to identify and prioritize security
risks and take steps to mitigate them.
3 Compliance: Many regulatory requirements, such as PCI DSS and HIPAA, require regular
vulnerability assessments as part of their compliance standards.
4 Integration with other DevSecOps practices: Vulnerability assessment can be integrated with
other DevSecOps practices, such as continuous integration and continuous deployment, to
ensure that security is built into the application development lifecycle.
There are a variety of vulnerability assessment tools and technologies available that can be used
in DevSecOps, including both commercial and open-source solutions. Some popular vulnerability
assessment tools include Nessus, Qualys, and OpenVAS.
Best practices for vulnerability assessment:
1 Conduct regular vulnerability assessments to identify potential weaknesses and
misconfigurations in your network and infrastructure.
2 Use a combination of automated and manual vulnerability scanning techniques to ensure
comprehensive coverage.
3 Prioritize and remediate vulnerabilities based on their severity and potential impact on your
organization.
4 Regularly update and patch software and systems to address known vulnerabilities.
5 Use segmentation and isolation to limit the spread of attacks in case of a successful breach.
Nessus
A vulnerability scanner that allows you to identify vulnerabilities and misconfigurations in your
network and infrastructure.
nessuscli scan new -n "My Scan" -t "192.168.1.0/24" -T "Basic Network Scan"
OpenVAS
An open-source vulnerability scanner that allows you to identify vulnerabilities and
misconfigurations in your network and infrastructure.
omp -u admin -w password -h localhost -p 9390 -G
Nmap
A network exploration and vulnerability scanner that allows you to identify open ports and
potential vulnerabilities in your network.
nmap -sS -A -p1-65535 target.com
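Scanners like this are easy to wrap in scripts for scheduled assessments; below is a minimal Python sketch (the target and port selection are placeholders, and nmap must be installed) that shells out to nmap and surfaces lines reporting open ports.

# Sketch: run a basic nmap scan from Python and print lines reporting open
# ports. The target host is a placeholder; nmap must be installed locally.
import subprocess

def scan(target):
    result = subprocess.run(
        ["nmap", "-sV", "--top-ports", "100", target],
        capture_output=True, text=True, check=True,
    )
    for line in result.stdout.splitlines():
        if "open" in line:
            print(line)

if __name__ == "__main__":
    scan("192.168.1.1")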
Qualys
A cloud-based vulnerability management platform that allows you to identify vulnerabilities and
misconfigurations in your network and infrastructure.
curl -H 'X-Requested-With: Curl Sample' -u "USERNAME:PASSWORD" -H 'Accept: application/json' -H '
Operate / Monitoring
Monitoring
Monitoring in DevSecOps refers to the practice of continuously observing and analyzing an
organization’s IT systems, applications, and infrastructure to identify potential security issues,
detect and respond to security incidents, and ensure compliance with security policies and
regulations.
In DevSecOps, monitoring is a critical component of a comprehensive security strategy, allowing
organizations to identify and respond to security threats quickly and effectively. Some of the key
benefits of monitoring in DevSecOps include:
1 Early detection of security incidents: By continuously monitoring systems and applications,
organizations can detect security incidents early on and take immediate action to remediate
them.
2 Improved incident response: With real-time monitoring and analysis, organizations can
respond to security incidents quickly and effectively, minimizing the impact of a potential
breach.
3 Improved compliance: By monitoring systems and applications for compliance with security
policies and regulations, organizations can ensure that they are meeting their security
obligations.
4 Improved visibility: Monitoring provides organizations with greater visibility into their IT
systems and applications, allowing them to identify potential security risks and take proactive
steps to address them.
There are a variety of monitoring tools and technologies available that can be used in DevSecOps,
including log analysis tools, network monitoring tools, and security information and event
management (SIEM) solutions. These tools can be integrated with other DevSecOps practices,
such as continuous integration and continuous deployment, to ensure that security is built into the
application development lifecycle.
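As a small illustration of the application side of monitoring, the following Python sketch uses the prometheus_client library to expose a custom metric that a monitoring server such as Prometheus (configured below) can scrape; the port and metric name are arbitrary placeholders.

# Sketch: expose an application metric for a monitoring server to scrape,
# using the prometheus_client library. Port and metric name are arbitrary.
import random
import time

from prometheus_client import Gauge, start_http_server

inflight = Gauge("myapp_inflight_requests", "Requests currently being handled")

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        inflight.set(random.randint(0, 10))  # stand-in for a real measurement
        time.sleep(5)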
Prometheus
Start the Prometheus server:
$ ./prometheus --config.file=prometheus.yml
Grafana
Add Prometheus data source:
http://localhost:3000/datasources/new?gettingstarted
Nagios
Configure Nagios server:
/etc/nagios3/conf.d/
Zabbix
Configure Zabbix agent on the server: Edit the Zabbix agent configuration file
/etc/zabbix/zabbix_agentd.conf to specify the Zabbix server IP address and hostname, and to
enable monitoring of system resources such as CPU, memory, disk usage, and network interface.
Example configuration:
Server=192.168.1.100
ServerActive=192.168.1.100
Hostname=web-server
EnableRemoteCommands=1
UnsafeUserParameters=1
Configure Zabbix server: Login to the Zabbix web interface and navigate to the “Configuration”
tab. Create a new host with the same hostname as the server being monitored, and specify the IP
address and Zabbix agent port. Add items to the host to monitor the system resources specified
in the Zabbix agent configuration file. Example items:
• CPU usage: system.cpu.util[,idle]
Configure triggers: Set up triggers to alert when any monitored item exceeds a certain threshold.
For example, set a trigger on the CPU usage item to alert when the usage exceeds 80%.
Configure actions: Create actions to notify relevant stakeholders when a trigger is fired. For
example, send an email to the web application team and the system administrators.
Datadog
Edit the Datadog agent configuration file /etc/datadog-agent/datadog.yaml and add the following
lines:
# Collect CPU metrics
procfs_path: /proc
cpu_acct: true
# Collect memory metrics
meminfo_path: /proc/meminfo
To view CPU and memory metrics, go to the Datadog Metrics Explorer and search for the metrics system.cpu.usage and system.mem.used.
Here are some sample commands you can use to collect CPU and memory metrics with Datadog:
To collect CPU metrics:
# CPU_USAGE below is a placeholder for a value sampled from the host
curl -X POST -H "Content-type: application/json" -d '{
  "series": [
    {
      "metric": "system.cpu.usage",
      "points": [['"$(date +%s)"', '"$CPU_USAGE"']],
      "host": "my-host.example.com",
      "tags": ["environment:production"]
    }
  ]
}' "https://api.datadoghq.com/api/v1/series?api_key=<YOUR_API_KEY>"
To collect memory metrics:
# MEM_USED below is a placeholder for a value sampled from the host
curl -X POST -H "Content-type: application/json" -d '{
  "series": [
    {
      "metric": "system.mem.used",
      "points": [['"$(date +%s)"', '"$MEM_USED"']],
      "host": "my-host.example.com",
      "tags": ["environment:production"]
    }
  ]
}' "https://api.datadoghq.com/api/v1/series?api_key=<YOUR_API_KEY>"
Note that these commands assume that you have the necessary tools (top, free) installed on your system to collect CPU and memory metrics. You can customize the metric, host, and tags fields as needed.
New Relic
To view CPU and memory metrics for a specific server using the New Relic API:
curl -X GET 'https://api.newrelic.com/v2/servers/{SERVER_ID}/metrics/data.json' \
-H 'X-Api-Key:{API_KEY}' \
-i \
-d 'names[]=System/CPU/Utilization&values[]=average_percentage' \
-d 'names[]=System/Memory/Used/Bytes&values[]=average_value' \
-d 'from=2022-05-01T00:00:00+00:00&to=2022-05-10T00:00:00+00:00'
AWS CloudWatch
1- To install the CloudWatch agent on Linux, you can use the following commands:
curl https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-
"metrics": {
"namespace": "CWAgent",
"metricInterval": 60,
"append_dimensions": {
"InstanceId": "${aws:InstanceId}"
},
"metrics_collected": {
"cpu": {
"measurement": [
"cpu_usage_idle",
"cpu_usage_iowait",
"cpu_usage_user",
"cpu_usage_system"
],
"metrics_collection_interval": 60,
"totalcpu": false
},
"memory": {
"measurement": [
"mem_used_percent"
],
"metrics_collection_interval": 60
}
On Windows, you can use the CloudWatch Agent Configuration Wizard to create a configuration
file with the following settings:
- Choose "AWS::EC2::Instance" as the resource type to monitor
3- Start the CloudWatch agent. Once you have configured it, you can start it on the EC2 instance using the following command:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -
"metrics": {
"performance": {
"collectionFrequencyInSeconds": 60,
"metrics": [
"category": "Processor",
"instance": "_Total"
},
"category": "Memory",
"instance": null
This command retrieves CPU and memory metrics for a specific resource (identified by
{resource_id} ) over a one-day period (from May 20, 2022 to May 21, 2022), with a one-minute
interval. You can modify the parameters to retrieve different metrics or time ranges as needed.
Google Cloud Monitoring
1- Install the Stackdriver agent on the GCE instance. You can do this using the following
command:
curl -sSO https://dl.google.com/cloudagents/install-monitoring-agent.sh
sudo bash install-monitoring-agent.sh
2- Verify that the Monitoring Agent is running by checking its service status:
sudo service stackdriver-agent status
3- In the Google Cloud Console, go to Monitoring > Metrics Explorer and select the CPU usage metric under the Compute Engine VM Instance resource type. Set the aggregation to mean, select the GCE instance that you created, and click Create chart to view the CPU usage metric for your instance.
4- To collect memory metrics, repeat step 3 but select the Memory usage metric instead of CPU usage.
Netdata
1- In the Netdata web interface, go to the “Dashboard” section and select the “system.cpu” chart
to view CPU usage metrics. You can also select the “system.ram” chart to view memory usage
metrics.
2- To reduce failover using machine learning, you can configure Netdata’s anomaly detection
feature. In the Netdata web interface, go to the “Anomaly Detection” section and select “Add
alarm”.
3- For the “Detect” field, select “cpu.system”. This will detect anomalies in the system CPU usage.
4- For the “Severity” field, select “Warning”. This will trigger a warning when an anomaly is
detected.
5- For the “Action” field, select “Notify”. This will send a notification when an anomaly is detected.
6- You can also configure Netdata’s predictive analytics feature to predict when a system will fail.
In the Netdata web interface, go to the “Predict” section and select “Add algorithm”.
7- For the “Algorithm” field, select “Autoregression”. This will use autoregression to predict system
behavior.
8- For the “Target” field, select “cpu.system”. This will predict CPU usage.
9- For the “Window” field, select “30 minutes”. This will use a 30-minute window to make
predictions.
10-Finally, click “Create” to create the algorithm.
Virtual Patching
Virtual patching is a security technique used in DevSecOps to provide temporary protection
against known vulnerabilities in software applications or systems. Virtual patching involves the use
of security policies, rules, or filters that are applied to network traffic, system logs, or application
code to prevent known vulnerabilities from being exploited.
Virtual patching can be used when a vendor-provided patch is not available or when patching is
not feasible due to operational constraints or business needs. It allows organizations to quickly
and easily protect their systems against known vulnerabilities without having to take the
application or system offline or make changes to the underlying code.
Some of the key benefits of virtual patching in DevSecOps include:
1 Reduced risk of exploitation: By applying virtual patches to known vulnerabilities, organizations
can reduce the risk of these vulnerabilities being exploited by attackers.
2 Improved security posture: Virtual patching allows organizations to quickly and easily protect
their systems against known vulnerabilities, improving their overall security posture.
3 Reduced downtime: Virtual patching can be implemented quickly and easily, without requiring
system downtime or disrupting business operations.
4 Improved compliance: Virtual patching can help organizations meet regulatory requirements
for timely patching of known vulnerabilities.
Virtual patching can be implemented using a variety of techniques, including intrusion prevention
systems (IPS), web application firewalls (WAF), and network-based security devices. It can also
be implemented through the use of automated security policies or scripts that are applied to
systems and applications.
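As an illustration of the "automated policies or scripts" approach, the sketch below is a small WSGI middleware acting as a virtual patch: it rejects requests whose parameters match a known exploit pattern before they reach the vulnerable code. The pattern and the blocking response are illustrative, not a real product signature.

# Sketch: a WSGI middleware acting as a virtual patch. Requests whose query
# parameters match a known exploit pattern are rejected before reaching the
# vulnerable application code. The pattern below is illustrative only.
import re
from urllib.parse import parse_qs

EXPLOIT_PATTERN = re.compile(r"(\.\./|<script|union\s+select)", re.IGNORECASE)

class VirtualPatchMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        params = parse_qs(environ.get("QUERY_STRING", ""))
        for values in params.values():
            if any(EXPLOIT_PATTERN.search(v) for v in values):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Request blocked by virtual patch"]
        return self.app(environ, start_response)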
Log Collection
Splunk
1- Configure Data Inputs: Configure data inputs to receive data from various sources, such as
network devices, servers, and applications. Configure data inputs for the following:
• Syslog
• Windows Event Logs
• Network Traffic (using the Splunk Stream add-on)
• Cloud Platform Logs (e.g., AWS CloudTrail, Azure Audit Logs)
2- Create Indexes: Create indexes to store the data from the configured data inputs. Indexes can
be created based on data types, such as security events, network traffic, or application logs.
3- Create a Dashboard: Create a dashboard to visualize the data collected from the data inputs. A
dashboard can display the following:
• Real-time events and alerts
• Trending graphs and charts
• Security reports and metrics
4- Create a Sample Rule for Detection: Create a sample rule to detect an attack or security
incident. For example, create a rule to detect failed login attempts to a web application. The
following steps show how to create the rule in Splunk:
• Create a search query: Create a search query to identify failed login attempts in the web
application logs. For example:
sourcetype=apache_access combined=*login* status=401 | stats count by clientip
Virtual Patching
Virtual patching is a security mechanism that helps protect applications and systems from known
vulnerabilities while developers work on creating and testing a patch to fix the vulnerability. It
involves implementing a temporary, software-based solution that can block or mitigate the attack
vectors that could be used to exploit the vulnerability. This is done by creating rules or policies
within security software, such as web application firewalls or intrusion detection/prevention
systems, that block or alert on malicious traffic attempting to exploit the vulnerability.
Virtual patching can be an effective way to quickly and temporarily secure systems against known
vulnerabilities, particularly those that may be actively targeted by attackers. It can also provide
time for organizations to test and implement permanent patches without leaving their systems
exposed to attacks.
Language | RASP Tools
Java | Contrast Security, Sqreen, AppSealing, JShielder
.NET | Contrast Security, Sqreen, Nettitude, Antimalware-Research
Node.js | Sqreen, RASP.js, Jscrambler, nexploit
Python | RASP-Protect, PyArmor, Striker, nexploit
PHP | Sqreen, RIPS Technologies, RSAS, nexploit
Ruby | Sqreen, RASP-Ruby, nexploit
import com.rasp.scanner.RASP;
import com.rasp.scanner.ELExpression;

// Check the "username" and "password" request parameters for SQL injection.
// (com.rasp.scanner is an illustrative RASP API, not a specific product;
// the log/block calls below are completions of the truncated original.)
if (RASP.isSQLInjection(username) || RASP.isSQLInjection(password)) {
    // Log the attempt and block the request
    RASP.log("SQL injection attempt detected");
    RASP.blockRequest();
}
This rule checks for SQL injection attacks in the "username" and "password" parameters of an HTTP request. If an attack is detected, the rule logs the attempt and blocks the request.
Cheatsheet for prevention rules for the OWASP Top 10 vulnerabilities

Insecure Direct Object References: access control checks and input validation. The rules below cover SQL Injection, Command Injection, and XSS; for Security Misconfiguration, see the per-technology hardening checklists later in this guide.

SQL Injection

RASP
when {
    http.param.name.matches("(?i).*((select|union|insert|update|delete|from|where|order by|group
} then {
    block();
}
WAF
SecRule ARGS "@rx ^[a-zA-Z0-9\s]+$" \
    "id:1,\
    phase:2,\
    t:none,\
    deny"
Command Injection

RASP
when {
    http.param.name.matches("(?i).*((;|&|`|\\|\\||\\||&&).*)")
} then {
    block();
}

WAF
"id:2,\
phase:2,\
t:none,\
deny"
XSS
RASP
when {
http.param.value.matches("(?i).*((<script|<img|alert|prompt|document.cookie|window.location|o
} then {
block();
}
WAF

Script Tag Prevention Rule
SecRule ARGS|XML:/* "@rx <script.*?>" \
    "id:3,\
    phase:2,\
    t:none,\
    deny"

"id:4,\
phase:2,\
t:none,\
deny"
Checklists / Apache
ID | Description | Commands
2 | Enable server signature | ServerSignature On
3 | Disable server signature | ServerSignature Off
4 | Change server header | ServerTokens Prod
Checklists / ArgoCD
Checklists / Ceph
ID | Description | Commands
2 | Enable SSL/TLS encryption for Ceph traffic | ceph config set global network.ssl true
3 | Set secure file permissions for Ceph configuration files | sudo chmod 600 /etc/ceph/*
4 | Limit access to the Ceph dashboard | sudo ufw allow 8443/tcp && sudo ufw allow 8003/tcp && sudo ufw allow 8080/tcp
6 | Implement network segmentation for Ceph nodes | sudo iptables -A INPUT -s <trusted network> -j ACCEPT
7 | Configure Ceph to use encrypted OSDs | sudo ceph-osd --mkfs --osd-uuid <osd-uuid> --cluster ceph --osd-data <path to data directory> --osd-journal <path to journal>
8 | Confine Ceph processes with AppArmor | /etc/apparmor.d/usr.bin.ceph-osd
Checklists / Consul
Checklists / CouchDB
Edit the configuration file under /opt/couchdb/etc/couchdb/, change the ; [admins] line to [admins], and add your admin username and password. Save and exit the file.
Checklists / Docker
Checklists / Elasticsearch
http.cors.allow-methods: HEAD,GET,POST,PUT,DELETE,OPTIONS
http.cors.allow-headers: "X-Requested-With,Content-Type,Content-Length"
http.max_content_length: 100mb
Add the following rules to only allow incoming connections from trusted IP addresses:
-A INPUT -p tcp -m tcp --dport 9200 -s 10.0.0.0/8 -j ACCEPT
Checklists / Git
Sign commits by default:
git config --global commit.gpgsign true
Checklists / Gitlab
Checklists / GlusterFS
Checklists / Gradle
repositories {
    maven {
        // Maven Central
        url "https://repo1.maven.org/maven2/"
    }
    maven {
        url "https://plugins.gradle.org/m2/"
    }
}
Checklists / Graphite
Enable HTTPS
Install an SSL certificate and configure NGINX to serve Graphite over HTTPS
Checklists / IIS
ID | Description | Commands
2 | Remove unneeded headers and add X-Frame-Options / X-XSS-Protection | Remove-WebConfigurationProperty -filter ...; Add-WebConfigurationProperty -filter ... -value "SAMEORIGIN"; Add-WebConfigurationProperty -filter ... -value "1; mode=block"
4 | Enable HTTPS | New-WebBinding -Name "Default Web Site" -Protocol https -Port 443 -IPAddress ...
- | Harden authentication settings | Set-WebConfigurationProperty -filter "system.webServer/security/authentication/iisClientCertificateMappingAuthentication" ...; Set-WebConfigurationProperty -filter "system.webServer/security/authentication/anonymousAuthentication" -name enabled -value $false; Set-WebConfigurationProperty -filter "system.webServer/security/authentication/digestAuthentication" -name enabled -value $false; Set-WebConfigurationProperty -filter "system.webServer/security/authentication/windowsAuthentication" -name enabled -value $true; Set-WebConfigurationProperty -filter "system.webServer/security/authentication/windowsAuthentication" -name useKernelMode -value $true
5 | Hide sensitive path segments | Set-WebConfigurationProperty -filter "/system.webServer/security/requestFiltering/hiddenSegments" -name "." -value @{add="$false"}
- | Configure logging | Set-WebConfigurationProperty -filter ... -value $false
Checklists / Jenkins
Checklists / Kubernetes
Update spec.loadBalancerSourceRanges
Update --enable-admission-plugins
Checklists / Memcached
Enable logging
sed -i 's/^logfile/#logfile/g' /etc/sysconfig/memcached
mkdir /var/log/memcached
touch /var/log/memcached/memcached.log
Checklists / MongoDB
Enable authentication
sed -i '/security:/a \ \ \ \ authorization: enabled' /etc/mongod.conf
Checklists / MySQL
Checklists / Nginx
ID | Description | Commands
8 | Implement rate limiting | limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;
9 | Implement buffer overflow protection | proxy_buffers 4 256k; proxy_busy_buffers_size 256k;
10 | Implement XSS protection | add_header X-XSS-Protection "1; mode=block";
Checklists / Redis
Enable authentication
Set a password in the Redis configuration file ( redis.conf ) using the requirepass directive. Restart
Redis service to apply changes.
Bind Redis to a specific IP address
Edit the bind directive in the Redis configuration file to specify a specific IP address.
Enable SSL/TLS encryption
Edit the redis.conf file to specify SSL/TLS options and certificate files. Restart the Redis service to apply changes.
Disable unused Redis modules
Edit the redis.conf file to disable modules that are not needed. Use the module-load and module-unload directives.
Checklists / Squid
Limit the maximum number of clients
http_max_clients 50
Restrict allowed ports
acl Safe_ports port 80 443 8080
Checklists / Tomcat
Configure connectors in server.xml (the opening Connector lines below are standard defaults reconstructed around the attributes shown):
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS"
           keystoreFile="/path/to/keystore"
           keystorePass="password" />
Disable version information in error pages
Modify server.xml to add the following attribute to the <Host> element:
errorReportValveClass="org.apache.catalina.valves.ErrorReportValve" showReport="false" showServerInfo="false"
Checklists / Weblogic
ID | Description | Commands
- | Remove default configurations and passwords | removeDefaultConfig
- | Use a dedicated SSL listen port | sslListenPort=9003
- | Secure JDBC data sources | wlst.sh $WL_HOME/common/tools/config/jdbc/SecureJDBCDataSource.py
- | Protect the identity store password | identityStorePassword myIdentityStorePassword
- | Restrict access to the console policy file | $DOMAIN_HOME/config/fmwconfig/system-jazn-data.xml
6 | Enable SSL for Node Manager | wlst.sh $WL_HOME/common/tools/configureNodeManager.py -Dweblogic.management.password=mypassword -Dweblogic.NodeManager.sslEnabled=true -Dweblogic.NodeManager.sslHostnameVerificationIgnored=true -Dweblogic.NodeManager.KeyStores=CustomIdentityAndJavaTrust
Application Attacks
TABLE OF CONTENTS
1 Exposure of sensitive information
2 Insertion of Sensitive Information Into Sent Data
3 Cross-Site Request Forgery (CSRF)
4 Use of Hard-coded Password
5 Broken or Risky Crypto Algorithm
6 Risky Crypto Algorithm
7 Insufficient Entropy
8 XSS
9 SQL Injection
10 External Control of File Name or Path
11 Generation of Error Message Containing Sensitive Information
12 Unprotected storage of credentials
13 Trust Boundary Violation
14 Insufficiently Protected Credentials
15 Restriction of XML External Entity Reference
16 Vulnerable and Outdated Components
17 Improper Validation of Certificate with Host Mismatch
18 Improper Authentication
19 Session Fixation
20 Inclusion of Functionality from Untrusted Control
21 Download of Code Without Integrity Check
22 Deserialization of Untrusted Data
23 Insufficient Logging
24 Improper Output Neutralization for Logs
25 Omission of Security-relevant Information
26 Sensitive Information into Log File
27 Server-Side Request Forgery (SSRF)
Cloud Attacks
TABLE OF CONTENTS
1 Inadequate Identity, Credential, and Access Management (ICAM)
2 Insecure Interfaces and APIs
3 Data Breaches
4 Insufficient Security Configuration
5 Insecure Data storage
6 Lack of Proper Logging and Monitoring
7 Insecure Deployment and Configuration Management
8 Inadequate Incident Response and Recovery
9 Shared Technology Vulnerabilities
10 Account Hijacking and Abuse
In the noncompliant code below, cloud resources are defined with no access control configuration at all, so nothing restricts who can reach the bucket, instance, or database.

# Noncompliant: Inadequate ICAM in Cloud
resources:
  - name: my-bucket
    type: storage.bucket
  - name: my-instance
    type: compute.instance
  - name: my-database
    type: sql.database
To address the inadequate ICAM in the cloud environment, it is essential to implement robust
identity, credential, and access management practices.
# Compliant: Enhanced ICAM in Cloud
resources:
  - name: my-bucket
    type: storage.bucket
    access-control:
      - role: storage.admin
        members:
          - user:john@example.com
          - group:engineering@example.com
  - name: my-instance
    type: compute.instance
    access-control:
      - role: compute.admin
        members:
          - user:john@example.com
          - group:engineering@example.com
  - name: my-database
    type: sql.database
    access-control:
      - role: cloudsql.admin
        members:
          - user:john@example.com
          - group:engineering@example.com
In the compliant code, each resource in the cloud environment has an associated access control
configuration. This includes properly defined roles and membership assignments, ensuring that
only authorized users or groups have access to the respective resources. By implementing
adequate ICAM practices, the risk of unauthorized access and privilege escalation is significantly
reduced, enhancing the overall security of the cloud environment.
Insecure Interfaces and APIs
Vulnerabilities in cloud service interfaces and APIs can be exploited to gain unauthorized access,
inject malicious code, or manipulate data.
In the noncompliant code, there are insecure interfaces and APIs in the cloud environment. This
means that the interfaces and APIs used to interact with cloud services are not properly secured,
potentially exposing sensitive data, allowing unauthorized access, or enabling malicious activities.
# Noncompliant: Insecure Interfaces and APIs in Cloud
import requests

# Plain HTTP endpoint, no authentication or authorization headers
api_endpoint = "http://api.example.com/data"
response = requests.get(api_endpoint)

def process_data(data):
    # Data is transmitted unencrypted to the processing endpoint
    requests.post("http://example.com/process", data=data)
To address the insecure interfaces and APIs in the cloud environment, it is crucial to implement
secure practices when interacting with cloud services.
# Compliant: Secure Interfaces and APIs in Cloud
import requests

api_endpoint = "https://api.example.com/data"
# Authentication/authorization header; the token value is a placeholder
headers = {"Authorization": "Bearer <ACCESS_TOKEN>"}
response = requests.get(api_endpoint, headers=headers)

def process_data(data):
    # Transmit data over HTTPS with the same authorization headers
    requests.post("https://example.com/process", data=data, headers=headers)
In the compliant code, the API endpoint is accessed securely using HTTPS and includes proper
authentication and authorization headers. This ensures that only authorized users can access the
API and the data transmitted is protected. Additionally, the interface for processing data utilizes
encrypted transmission over HTTPS, providing confidentiality and integrity for the sensitive
information being transmitted. By implementing secure interfaces and APIs, the risk of
unauthorized access, data breaches, and malicious activities is mitigated in the cloud
environment.
Data Breaches
Sensitive data stored in the cloud can be compromised due to misconfigurations, insecure
storage, weak encryption, or insider threats.
# Noncompliant: Insufficient Security Configuration in Cloud
import boto3

# Sensitive data uploaded without server-side encryption
s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket')
bucket.upload_file('data.txt', 'data.txt')

# Bucket made publicly readable
bucket = s3.Bucket('public-bucket')
bucket.Acl().put(ACL='public-read')

# API termination protection left disabled on the instance
ec2 = boto3.resource('ec2')
instance = ec2.Instance('i-12345678')  # placeholder instance ID
instance.modify_attribute(DisableApiTermination={'Value': False})
To address the issue of insufficient security configuration in the cloud, it is important to follow
security best practices and implement robust security measures.
# Compliant: Strong Security Configuration in Cloud
import boto3

# Upload sensitive data with server-side encryption enabled
s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket')
bucket.upload_file('data.txt', 'data.txt',
                   ExtraArgs={'ServerSideEncryption': 'AES256'})

# Keep the bucket private
bucket = s3.Bucket('private-bucket')
bucket.Acl().put(ACL='private')

# Enable API termination protection on the instance
ec2 = boto3.resource('ec2')
instance = ec2.Instance('i-12345678')  # placeholder instance ID
instance.modify_attribute(DisableApiTermination={'Value': True})
In the compliant code, sensitive data is encrypted at rest and access to resources is restricted, ensuring that only authorized users or services have the necessary permissions. Necessary security features, such as server-side encryption and API termination protection, are enabled to provide additional layers of security. By implementing strong security configurations, the cloud environment is better protected against potential threats.
Insecure Data storage
Inadequate encryption, weak access controls, or improper handling of data at rest can lead to
unauthorized access or data leakage.
In the noncompliant code, there are instances where data storage in the cloud is insecure.
Sensitive data is stored without proper encryption, and there is no mechanism in place to protect
the data from unauthorized access or accidental exposure.
# Noncompliant: Insecure Data Storage in Cloud
import boto3

# Sensitive data uploaded unencrypted to a publicly accessible bucket
s3 = boto3.resource('s3')
bucket = s3.Bucket('public-bucket')
bucket.upload_file('data.txt', 'data.txt')

# Snapshot created without encryption or access restrictions
rds = boto3.client('rds')
rds.create_db_snapshot(DBSnapshotIdentifier='my-snapshot', DBInstanceIdentifier='my-db')
To ensure secure data storage in the cloud, it is important to follow best practices and implement
appropriate security measures.
# Compliant: Secure Data Storage in Cloud
import boto3

# Store sensitive data encrypted at rest (server-side encryption, AES256)
s3 = boto3.resource('s3')
bucket = s3.Bucket('private-bucket')
bucket.upload_file('data.txt', 'data.txt',
                   ExtraArgs={'ServerSideEncryption': 'AES256'})

# Restrict access to the bucket
bucket.Acl().put(ACL='private')

# Snapshot the database to support backup and disaster recovery
rds = boto3.client('rds')
rds.create_db_snapshot(DBSnapshotIdentifier='my-snapshot', DBInstanceIdentifier='my-db')
In the compliant code, sensitive data is stored with encryption using server-side encryption with
AES256. Access control is implemented to restrict access to the stored data, ensuring that only
authorized users or services can access it. Additionally, a data backup and disaster recovery plan
is in place, which includes creating snapshots to enable data recovery in case of any incidents. By
implementing secure data storage practices, the cloud environment provides better protection for
sensitive information.
Lack of Proper Logging and Monitoring
Insufficient monitoring, logging, and analysis of cloud activity can hinder detection of security
incidents, leading to delayed or ineffective response.
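As a minimal illustration of closing this gap, the sketch below enables audit logging of AWS API activity with CloudTrail via boto3; the trail and bucket names are placeholders, and the S3 bucket must already exist with a policy that allows CloudTrail to write to it.

# Sketch: enable audit logging of AWS API activity with CloudTrail.
# Trail and bucket names are placeholders.
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="my-cloudtrail-logs",
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="org-audit-trail")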
Insecure Deployment and Configuration Management
Weaknesses in the process of deploying and managing cloud resources, such as improper change
management, can introduce security vulnerabilities.
In the noncompliant code, there is a lack of secure deployment and configuration management
practices in the cloud environment. The code deploys resources and configurations without
proper security considerations, such as exposing sensitive information or using default and weak
configurations.
# Noncompliant: Insecure Deployment and Configuration Management in Cloud
import boto3

def deploy_instance():
    ec2_client = boto3.client('ec2')
    response = ec2_client.run_instances(
        ImageId='ami-12345678',
        InstanceType='t2.micro',
        KeyName='my-keypair',
        SecurityGroupIds=['sg-12345678'],
        MinCount=1,
        MaxCount=1
    )
    return response['Instances'][0]['InstanceId']

def main():
    instance_id = deploy_instance()

if __name__ == "__main__":
    main()
To ensure secure deployment and configuration management in the cloud, it is important to follow
security best practices and apply appropriate configurations to resources.
# Compliant: Secure Deployment and Configuration Management in Cloud
import boto3

def deploy_instance():
    ec2_client = boto3.client('ec2')
    response = ec2_client.run_instances(
        ImageId='ami-12345678',
        InstanceType='t2.micro',
        KeyName='my-keypair',
        SecurityGroupIds=['sg-12345678'],
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[
            {
                'ResourceType': 'instance',
                'Tags': [
                    {
                        'Key': 'Name',
                        'Value': 'MyInstance'
                    }
                ]
            }
        ],
        BlockDeviceMappings=[
            {
                'DeviceName': '/dev/sda1',
                'Ebs': {
                    'VolumeSize': 30,
                    'VolumeType': 'gp2'
                }
            }
        ]
    )
    return response['Instances'][0]['InstanceId']

def main():
    instance_id = deploy_instance()

if __name__ == "__main__":
    main()
In the compliant code, additional security measures are implemented during the deployment
process. This includes:
• Adding appropriate tags to the instance for better resource management and identification.
• Configuring block device mappings with appropriate volume size and type.
• Following the principle of least privilege by providing only necessary permissions to the
deployment process.
Inadequate Incident Response and Recovery
Lack of proper incident response planning and testing, as well as ineffective recovery
mechanisms, can result in extended downtime, data loss, or inadequate mitigation of security
breaches.
In the noncompliant code, there is a lack of adequate incident response and recovery practices in
the cloud environment. The code does not have any provisions for handling incidents or
recovering from them effectively. This can lead to prolonged downtime, data loss, or inadequate
response to security breaches or system failures.
# Noncompliant: Inadequate Incident Response and Recovery in Cloud
import boto3

def delete_instance(instance_id):
    ec2_client = boto3.client('ec2')
    response = ec2_client.terminate_instances(
        InstanceIds=[instance_id]
    )
    return response

def main():
    instance_id = 'i-12345678'
    delete_instance(instance_id)

if __name__ == "__main__":
    main()
To ensure adequate incident response and recovery in the cloud, it is important to have well-
defined processes and procedures in place. The following code snippet demonstrates a more
compliant approach:
# Compliant: Adequate Incident Response and Recovery in Cloud
import boto3

def delete_instance(instance_id):
    ec2_client = boto3.client('ec2')
    response = ec2_client.terminate_instances(
        InstanceIds=[instance_id]
    )
    return response

def handle_incident(instance_id):
    # Perform necessary actions to handle the incident, such as notifying
    # the security team and logging the incident details
    pass

def main():
    instance_id = 'i-12345678'
    handle_incident(instance_id)
    delete_instance(instance_id)

if __name__ == "__main__":
    main()
Account Hijacking and Abuse
Compromised cloud accounts can be abused to access or expose resources. In the noncompliant code, an S3 bucket is created with no access control configuration, so a hijacked account or a simple misconfiguration can leave it open to abuse.
# Noncompliant: Account Hijacking and Abuse in Cloud
import boto3

def create_s3_bucket(bucket_name):
    s3_client = boto3.client('s3')
    s3_client.create_bucket(Bucket=bucket_name)

def main():
    bucket_name = 'my-bucket'
    create_s3_bucket(bucket_name)

if __name__ == "__main__":
    main()
To prevent account hijacking and abuse in the cloud, it is important to implement strong security
measures. The following code snippet demonstrates a more compliant approach:
# Compliant: Preventing Account Hijacking and Abuse in Cloud
import boto3

def create_s3_bucket(bucket_name):
    s3_client = boto3.client('s3')
    s3_client.create_bucket(
        Bucket=bucket_name,
        ACL='private',
        CreateBucketConfiguration={
            'LocationConstraint': 'us-west-2'  # example region
        }
    )

def main():
    bucket_name = 'my-bucket'
    create_s3_bucket(bucket_name)

if __name__ == "__main__":
    main()
In the compliant code, additional security measures are implemented. The bucket is created with
a specific access control setting (ACL=’private’) to ensure that only authorized users can access
it. The CreateBucketConfiguration parameter is used to specify the desired region for the bucket,
reducing the risk of accidental exposure due to misconfigurations.
To further enhance security, consider implementing multi-factor authentication (MFA), strong
password policies, and role-based access controls (RBAC) for managing user permissions in the
cloud environment. Regular monitoring and auditing of account activities can also help detect and
prevent unauthorized access or abuse.
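As an illustrative sketch of auditing MFA coverage with boto3 (assuming IAM read permissions; pagination is omitted for brevity), the following flags users who have no MFA device enrolled:
# Sketch: flag IAM users without an MFA device
import boto3

iam = boto3.client('iam')
for user in iam.list_users()['Users']:
    mfa_devices = iam.list_mfa_devices(UserName=user['UserName'])['MFADevices']
    if not mfa_devices:
        print(f"User without MFA: {user['UserName']}")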
Container Attacks
TABLE OF CONTENTS
1 Insecure Container Images:
a Malicious Images via Aqua
b Other Images
2 Privileged Container:
3 Exposed Container APIs:
4 Container Escape:
5 Container Image Tampering:
6 Insecure Container Configuration:
7 Denial-of-Service (DoS):
8 Kernel Vulnerabilities:
9 Shared Kernel Exploitation:
10 Insecure Container Orchestration:
Privileged Container:
Running a container in privileged mode grants it broad access to host devices and capabilities, so a compromised container can lead to a compromised host. In the noncompliant code, the container image is built and then run in privileged mode.
# Noncompliant: Privileged container
FROM ubuntu
...
The compliant code addresses the vulnerability by running the container without privileged mode.
This restricts the container’s access to system resources and reduces the risk of privilege
escalation and unauthorized access to the host.
# Compliant: Non-privileged container
FROM ubuntu
...
Exposed Container APIs:
Exposing a container's API without authentication and authorization allows anyone with network access to control the container. In the noncompliant code, the API port is exposed with no security layer in front of it.
# Noncompliant: Exposed container API without authentication/authorization
FROM nginx
...
EXPOSE 8080
The compliant code addresses the vulnerability by exposing the container’s API internally on port
8080 and leveraging a reverse proxy or API gateway for authentication and authorization. The
reverse proxy or API gateway acts as a security layer, handling authentication/authorization
requests before forwarding them to the container API.
To further enhance the security of exposed container APIs, consider the following best practices:
1 Implement strong authentication and authorization mechanisms: Use industry-standard
authentication protocols (e.g., OAuth, JWT) and enforce access controls based on user roles
and permissions.
2 Employ Transport Layer Security (TLS) encryption: Secure the communication between clients
and the container API using TLS certificates to protect against eavesdropping and tampering.
3 Regularly monitor and log API activity: Implement logging and monitoring mechanisms to
detect and respond to suspicious or malicious activity.
4 Apply rate limiting and throttling: Protect the API from abuse and denial-of-service attacks by
enforcing rate limits and throttling requests.
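As an illustrative sketch of the first practice, a gateway-side JWT check in Python might look like the following (assuming the PyJWT package; the secret, algorithm, and role claims are illustrative):
# Sketch: minimal JWT validation at an API gateway (PyJWT)
import jwt

def is_authorized(token: str, secret: str) -> bool:
    try:
        claims = jwt.decode(token, secret, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return False
    # enforce access control based on the token's role claim
    return claims.get("role") in ("admin", "operator")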
# Compliant: Secured container API with authentication/authorization
FROM nginx
...
EXPOSE 8080
# Use a reverse proxy or API gateway for authentication/authorization
Container Escape:
Exploiting vulnerabilities in the container runtime or misconfigurations to break out of the
container’s isolation and gain unauthorized access to the host operating system. Example:
Exploiting a vulnerability in the container runtime to access the host system and other containers.
The below code creates and starts a container without any security isolation measures. This
leaves the container susceptible to container escape attacks, where an attacker can exploit
vulnerabilities in the container runtime or misconfigured security settings to gain unauthorized
access to the host system.
# Noncompliant: Running a container without proper security isolation
require 'docker'

container = Docker::Container.create('Image' => 'ubuntu')
container.start
In the compliant code, we introduce security enhancements to mitigate the risk of container escape. The HostConfig parameter is used to configure the container’s security settings. Here, we:
• Set ‘Privileged’ => false to disable privileged mode, which restricts access to host devices and capabilities.
• Use ‘CapDrop’ => [‘ALL’] to drop all capabilities from the container, minimizing the potential attack surface.
• Add ‘SecurityOpt’ => [‘no-new-privileges’] to prevent privilege escalation within the container.
# Compliant: Running a container with enhanced security isolation
require 'docker'
container = Docker::Container.create(
  'Image' => 'ubuntu',
  'HostConfig' => { 'Privileged' => false, 'CapDrop' => ['ALL'],
                    'SecurityOpt' => ['no-new-privileges'] }
)
container.start
Container Image Tampering:
Unauthorized modification of container images can result in running compromised code. In the noncompliant code, an image is pulled and started without any verification of its integrity.
# Noncompliant: Pulling and running a container image without integrity verification
require 'docker'
container = Docker::Container.create('Image' => 'nginx:latest')
container.start
In the compliant code, we address this issue by introducing integrity verification. The code calculates the expected
digest of the pulled image using the SHA256 hash algorithm. It then compares this expected
digest with the actual digest of the image obtained from the Docker API. If the digests do not
match, an integrity verification failure is raised, indicating that the image may have been tampered
with.
# Compliant: Pulling and running a container image with integrity verification
require 'docker'
require 'digest'

image_name = 'nginx'
image_tag = 'latest'
image = Docker::Image.create('fromImage' => "#{image_name}:#{image_tag}")
expected_digest = Digest::SHA256.hexdigest(image.connection.get("/images/#{image.id}/json").body)
actual_digest = image.info['RepoDigests'].first.split('@').last
if expected_digest != actual_digest
  raise 'Image integrity verification failed: the image may have been tampered with'
end
container = Docker::Container.create('Image' => "#{image_name}:#{image_tag}")
container.start
Insecure Container Configuration:
Containers that run with writable filesystems, full capabilities, or unrestricted network access magnify the impact of a compromise. In the noncompliant code, a container is created with default settings and no hardening.
# Noncompliant: Running a container with insecure configuration
require 'docker'
container = Docker::Container.create('Image' => 'nginx')
container.start
In the compliant code, we address these security concerns by applying secure container configurations. The HostConfig parameter is used to specify the container’s configuration. Here, we:
• Set ‘ReadOnly’ => true to make the container’s filesystem read-only, preventing potential tampering and unauthorized modifications.
• Use ‘CapDrop’ => [‘ALL’] to drop all capabilities from the container, minimizing the attack surface and reducing the potential impact of privilege escalation.
• Add ‘SecurityOpt’ => [‘no-new-privileges’] to prevent the container from gaining additional privileges.
• Specify ‘NetworkMode’ => ‘bridge’ to isolate the container in a bridge network, ensuring separation from the host and other containers.
• Use ‘PortBindings’ to bind the container’s port to a specific host port (‘80/tcp’ => [{ ‘HostPort’ => ‘8080’ }]). This restricts network access to the container and avoids exposing unnecessary ports.
# Compliant: Running a container with secure configuration
require 'docker'
container = Docker::Container.create(
  'Image' => 'nginx',
  'HostConfig' => {
    'ReadOnly' => true, 'CapDrop' => ['ALL'],
    'SecurityOpt' => ['no-new-privileges'], 'NetworkMode' => 'bridge',
    'PortBindings' => { '80/tcp' => [{ 'HostPort' => '8080' }] } # bind container port to a specific host port
  }
)
container.start
Denial-of-Service (DoS):
Overloading container resources or exploiting vulnerabilities in the container runtime to disrupt
the availability of containerized applications. Example: Launching a DoS attack against a container
by overwhelming it with excessive requests.
The noncompliant code snippet shows a Dockerfile that is vulnerable to resource overloading and
DoS attacks. It does not implement any resource limitations or restrictions, allowing the container
to consume unlimited resources. This can lead to a DoS situation if an attacker overwhelms the
container with excessive requests or exploits vulnerabilities in the container runtime.
# Noncompliant: Vulnerable Dockerfile with unlimited resource allocation
FROM nginx:latest
EXPOSE 80
The compliant approach addresses this vulnerability by setting explicit resource limits. Resource management should reflect your application’s requirements and the resources available in your environment; limits such as CPU, memory, and network bandwidth can be configured through container orchestration platforms or Docker Compose files, as shown below.
# Compliant: Resource limits configured via Docker Compose
version: '3'
services:
  nginx:
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./app:/usr/share/nginx/html
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: '256M'
Kernel Vulnerabilities:
Exploiting vulnerabilities in the kernel or host operating system to gain unauthorized access or
control over containers. Example: Exploiting a kernel vulnerability to escalate privileges and
compromise containers.
# Noncompliant: Ignoring kernel vulnerabilities
docker run -d --name my-app nginx:latest   # container launched with no assessment of the host kernel (illustrative)
To mitigate kernel vulnerabilities, it is important to regularly check for updates and apply security
patches to the host system. Additionally, you can use tools to scan and assess the vulnerability
status of the kernel before creating a Docker container.
Here’s an example of compliant code that incorporates checking for kernel vulnerabilities using
the kubehunter tool before creating the container:
# Compliant: Checking kernel vulnerabilities
kubehunter scan
In the compliant code snippet, the kubehunter tool is used to perform a vulnerability assessment,
including checking for kernel vulnerabilities. The output of the tool is examined, and if any
vulnerabilities are found, appropriate steps are taken to address them before creating the Docker
container.
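A hedged sketch of wiring such a check into an automated workflow (assuming the kube-hunter CLI is installed; the target host and the output-gating logic are illustrative):
# Sketch: gate container creation on a clean vulnerability scan
import subprocess
import sys

result = subprocess.run(['kube-hunter', '--remote', 'cluster.example.com'],
                        capture_output=True, text=True)
if 'Vulnerability' in result.stdout:   # illustrative check of the report text
    print('Vulnerabilities found; aborting container creation')
    sys.exit(1)
# ...proceed with container creation only when the scan is clean...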
Shared Kernel Exploitation:
Containers sharing the same kernel can be vulnerable to attacks that exploit kernel vulnerabilities,
allowing attackers to affect multiple containers. Example: Exploiting a kernel vulnerability to gain
unauthorized access to multiple containers on the same host.
In the noncompliant code, the Docker image installs a vulnerable package and runs a vulnerable
application. If an attacker manages to exploit a kernel vulnerability within the container, they could
potentially escape the container and compromise the host or other containers.
# Noncompliant: Vulnerable to container breakout
FROM ubuntu:latest
RUN apt-get update && apt-get install -y vulnerable-package
CMD ["vulnerable-app"]
The compliant code addresses the vulnerability by ensuring that the container image only
includes necessary and secure packages. It performs regular updates and includes security
patches to mitigate known vulnerabilities. By running a secure application within the container, the
risk of a container breakout is reduced.
To further enhance security, additional measures can be taken such as utilizing container isolation
techniques like running containers with restricted privileges, leveraging security-enhanced
kernels (such as those provided by certain container platforms), and monitoring and logging
container activity to detect potential exploitation attempts.
# Compliant: Mitigated container breakout vulnerability
FROM ubuntu:latest
RUN apt-get update && apt-get upgrade -y && apt-get install -y secure-package
CMD ["secure-app"]
Insecure Container Orchestration:
Misconfigurations or vulnerabilities in container orchestration platforms, such as Kubernetes, can
lead to unauthorized access, privilege escalation, or exposure of sensitive information. Example:
Exploiting a misconfigured Kubernetes cluster to gain unauthorized access to sensitive resources.
In the noncompliant code, the Pod definition enables privileged mode for the container, granting it
elevated privileges within the container orchestration environment. If an attacker gains access to
this container, they could exploit the elevated privileges to perform malicious actions on the host
or compromise other containers.
# Noncompliant: Vulnerable to privilege escalation
apiVersion: v1
kind: Pod
metadata:
  name: vulnerable-pod
spec:
  containers:
  - name: vulnerable-container
    image: vulnerable-image
    securityContext:
      privileged: true
The compliant code addresses the vulnerability by explicitly disabling privileged mode for the
container. By running containers with reduced privileges, the impact of a potential compromise is
limited, and the attack surface is minimized.
In addition to disabling privileged mode, other security measures should be implemented to
enhance the security of container orchestration. This includes configuring appropriate RBAC
(Role-Based Access Control) policies, enabling network segmentation and isolation, regularly
applying security patches to the orchestration system, and monitoring the environment for
suspicious activities.
# Compliant: Mitigated privilege escalation
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  containers:
  - name: secure-container
    image: secure-image
    securityContext:
      privileged: false
Pipeline Attacks
TABLE OF CONTENTS
1 Insecure Configuration Management:
2 Weak Authentication and Authorization:
3 Insecure CI/CD Tools:
4 Lack of Secure Coding Practices:
5 Insecure Third-Party Dependencies:
6 Insufficient Testing:
7 Insecure Build and Deployment Processes:
8 Exposed Credentials:
9 Insufficient Monitoring and Logging:
10 Misconfigured Access Controls:
Insecure Configuration Management:
Weak or missing security settings in pipeline configuration, such as transmitting data between steps without encryption, can expose sensitive information.
# Noncompliant: Unencrypted data transmission in pipeline
stages:
  - name: Build and Deploy
    steps:
      - name: Build Application
        command: |
          build-tool
      - name: Deploy Application
        command: |
          deploy-tool
      - name: Upload Artifacts
        command: |
          upload-tool
# Compliant: Encrypted data transmission in pipeline
stages:
  - name: Build and Deploy
    steps:
      - name: Build Application
        command: |
          build-tool
        security:
          - encryption: true
      - name: Deploy Application
        command: |
          deploy-tool
        security:
          - encryption: true
      - name: Upload Artifacts
        command: |
          upload-tool
        security:
          - encryption: true
In the compliant code, each step in the pipeline has an associated security configuration that
enables encryption. This ensures that sensitive data is encrypted during transmission within the
pipeline, providing an additional layer of protection against unauthorized access or data exposure.
Weak Authentication and Authorization:
Inadequate authentication mechanisms and weak authorization controls in the pipeline, allowing
unauthorized access to critical resources or actions.
In the noncompliant code, weak or inadequate authentication and authorization mechanisms are
used in the pipeline. This can lead to unauthorized access, privilege escalation, or other security
issues.
# Noncompliant: Weak authentication and authorization in pipeline
stages:
  - name: Deploy
    steps:
      - name: Deploy to Production
        command: |
          # shared, hard-coded credentials (illustrative)
          deploy-tool --username admin --password password123
In the compliant code snippet, strong authentication mechanisms such as service accounts or
OAuth tokens are used to authenticate with the production environment. These mechanisms
provide stronger security controls and help prevent unauthorized access to sensitive resources.
# Compliant: Strong authentication and authorization in pipeline
stages:
  - name: Deploy
    steps:
      - name: Deploy to Production
        command: |
          # authenticate with a service account / OAuth token (illustrative)
          deploy-tool --token "$DEPLOY_OAUTH_TOKEN"
Insecure CI/CD Tools:
Use of outdated or vulnerable CI/CD tools and configurations in the pipeline exposes the build and deployment process to known exploits.
# Noncompliant: Insecure CI/CD Tools in pipeline
stages:
  - name: Build and Deploy
    steps:
      - name: Build Application
        command: |
          # outdated build tool with known vulnerabilities (illustrative)
          insecure-build-tool build
      - name: Deploy Application
        command: |
          echo "Deploying application..."
In the compliant code snippet, secure and up-to-date versions of the CI/CD tools are used, which
have been reviewed for security vulnerabilities. Additionally, it is important to ensure that the
configurations of these tools are properly secured and follow security best practices.
# Compliant: Secure CI/CD Tools in pipeline
stages:
  - name: Build and Deploy
    steps:
      - name: Build Application
        command: |
          # reviewed, up-to-date build tool (illustrative)
          secure-build-tool build
      - name: Deploy Application
        command: |
          secure-deploy-tool deploy
Lack of Secure Coding Practices:
Building and deploying code without code review, secure coding guidelines, or security testing allows vulnerabilities to reach production.
# Noncompliant: Lack of Secure Coding Practices in pipeline
stages:
  - name: Build and Deploy
    steps:
      - name: Build Application
        command: |
          insecure-build-tool build
      - name: Deploy Application
        command: |
          insecure-deploy-tool deploy
To address the lack of secure coding practices in the pipeline, it is important to adopt and
implement secure coding practices throughout the development and deployment process. This
includes incorporating code reviews, using secure coding guidelines, and performing security
testing and validation.
In the compliant code snippet, secure coding practices are implemented by incorporating code
review and security testing during the build process. This ensures that potential security
vulnerabilities are identified and addressed early in the development cycle. Additionally, the
deployment process includes the use of secure deployment tools that prioritize secure coding
practices.
# Compliant: Implementation of Secure Coding Practices in pipeline
stages:
  - name: Build and Deploy
    steps:
      - name: Build Application
        command: |
          # Incorporating code review and security testing during the build process (flags are illustrative)
          secure-build-tool build --code-review --security-tests
      - name: Deploy Application
        command: |
          secure-deploy-tool deploy
Insecure Third-Party Dependencies:
Use of third-party libraries and tools without validating or tracking them can introduce known vulnerabilities into the deployed application.
# Noncompliant: Unvalidated Third-Party Dependencies in pipeline
stages:
  - name: Build and Deploy
    steps:
      - name: Build Application
        command: |
          insecure-build-tool build
      - name: Deploy Application
        command: |
          insecure-deploy-tool deploy
To address the lack of consideration for insecure third-party dependencies in the pipeline, it is
crucial to implement proper validation and management practices. This includes conducting
regular vulnerability assessments, using dependency management tools, and maintaining an
updated inventory of dependencies.
# Compliant: Validation and Management of Third-Party Dependencies in pipeline
stages:
  - name: Build and Deploy
    steps:
      - name: Build Application
        command: |
          # Building the application with vulnerability assessment and secure dependency management
          build-tool --scan-dependencies   # flag is illustrative
      - name: Deploy Application
        command: |
          deploy-tool
In the compliant code snippet, validation and management practices for third-party dependencies
are implemented in the pipeline. This includes conducting vulnerability scans and utilizing
dependency management tools to ensure that only secure and up-to-date dependencies are used
in the application. By addressing insecure third-party dependencies, the pipeline can significantly
reduce the risk of introducing vulnerabilities and improve the overall security of the deployed
application.
Insufficient Testing:
Inadequate testing processes, including lack of security testing, vulnerability scanning, or
penetration testing, allowing potential vulnerabilities to go undetected in the pipeline.
In the noncompliant code, there is a lack of sufficient testing in the pipeline. This means that the
pipeline does not include appropriate testing stages, such as unit tests, integration tests, or
security tests, to ensure the quality and security of the deployed application.
# Noncompliant: Insufficient Testing in pipeline
stages:
  - name: Build and Deploy
    steps:
      - name: Build Application
        command: |
          insecure-build-tool build
      - name: Deploy Application
        command: |
          insecure-deploy-tool deploy
To address the lack of sufficient testing in the pipeline, it is crucial to incorporate comprehensive
testing stages to validate the functionality, quality, and security of the application.
# Compliant: Comprehensive Testing in pipeline
stages:
  - name: Build and Test
    steps:
      - name: Build Application
        command: |
          build-tool
      - name: Run Tests
        command: |
          # unit, integration, and security tests (illustrative)
          run-tests --all
  - name: Deploy
    steps:
      - name: Deploy Application
        command: |
          deploy-tool
Insecure Build and Deployment Processes:
Build and deployment steps that do not validate what is built or verify what is deployed allow unauthorized or malicious code to reach production.
# Noncompliant: Insecure Build and Deployment Processes in pipeline
stages:
  - name: Build and Deploy
    steps:
      - name: Build Application
        command: |
          insecure-build-tool build
      - name: Deploy Application
        command: |
          insecure-deploy-tool deploy
To address the security vulnerabilities in the build and deployment processes, it is essential to
implement secure controls and validation measures.
# Compliant: Secure Build and Deployment Processes in pipeline
stages:
  - name: Build and Deploy
    steps:
      - name: Build Application
        command: |
          # validate sources and sign the build artifact (flags are illustrative)
          build-tool --verify-sources --sign-artifact
      - name: Deploy Application
        command: |
          # verify artifact integrity and authenticity before deploying (flag is illustrative)
          deploy-tool --verify-signature
In the compliant code snippet, the build and deployment processes have been enhanced with
secure controls and validation. The build process includes proper validation steps to ensure that
only valid and authorized code is included in the deployment package. Similarly, the deployment
process incorporates controls to verify the integrity and authenticity of the deployed application,
preventing unauthorized changes or inclusion of malicious code.
Exposed Credentials:
Storage or transmission of sensitive credentials, such as API keys or access tokens, in an insecure
manner within the pipeline, making them susceptible to unauthorized access or misuse.
In the noncompliant code, credentials are hardcoded or exposed in plain text within the pipeline
configuration or scripts. This makes them vulnerable to unauthorized access or disclosure, putting
the sensitive information at risk.
# Noncompliant: Exposed Credentials in pipeline
stages:
  - name: Build and Deploy
    steps:
      - name: Set Credentials
        command: |
          export DATABASE_USERNAME=admin
          export DATABASE_PASSWORD=secretpassword
      - name: Build Application
        command: |
          build-tool
      - name: Deploy Application
        command: |
          deploy-tool
To address the security concern of exposed credentials in the pipeline, it is crucial to adopt
secure practices for handling sensitive information.
# Compliant: Secure Handling of Credentials in pipeline
stages:
  - name: Build and Deploy
    steps:
      - name: Retrieve Credentials
        command: |
          # fetch credentials from a secure vault at runtime (paths are illustrative)
          export DATABASE_USERNAME=$(vault read -field=username secret/database)
          export DATABASE_PASSWORD=$(vault read -field=password secret/database)
      - name: Build Application
        command: |
          build-tool
      - name: Deploy Application
        command: |
          deploy-tool
In the compliant code snippet, the sensitive credentials are retrieved securely from a secure vault
or secret management system. This ensures that the credentials are not exposed directly in the
pipeline configuration or scripts. By using a secure vault, the credentials remain encrypted and
are accessed only when needed during the pipeline execution.
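For a programmatic variant, a minimal Python sketch using the hvac client (assuming a reachable Vault server, an authenticated client, and a KV v2 secret at secret/database; all names are illustrative):
# Sketch: reading credentials from HashiCorp Vault with hvac
import hvac

client = hvac.Client(url='https://vault.example.com:8200')  # token/auth setup assumed
secret = client.secrets.kv.v2.read_secret_version(path='database')
username = secret['data']['data']['username']
password = secret['data']['data']['password']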
Insufficient Monitoring and Logging:
Lack of robust monitoring and logging mechanisms in the pipeline, hindering the detection and
response to security incidents or unusual activities.
In the noncompliant code, there is a lack of proper monitoring and logging practices in the
pipeline. This means that important events, errors, or security-related activities are not
adequately captured or logged, making it challenging to detect and respond to potential issues or
security incidents.
# Noncompliant: Insufficient Monitoring and Logging in pipeline
stages:
  - name: Build and Deploy
    steps:
      - name: Build Application
        command: |
          build-tool
      - name: Deploy Application
        command: |
          deploy-tool
To address the insufficient monitoring and logging in the pipeline, it is essential to implement
proper logging and monitoring practices.
# Compliant: Implementing Monitoring and Logging in pipeline
stages:
  - name: Build and Deploy
    steps:
      - name: Build Application
        command: |
          build-tool
      - name: Deploy Application
        command: |
          deploy-tool
  - name: Monitor and Log
    steps:
      - name: Send Pipeline Logs to Centralized Logging System
        command: |
          send-logs --log-file=pipeline.log
      - name: Monitor Pipeline Performance and Health
        command: |
          monitor-pipeline
In the compliant code snippet, an additional stage called “Monitor and Log” is introduced to
handle monitoring and logging activities. This stage includes steps to send pipeline logs to a
centralized logging system and monitor the performance and health of the pipeline.
By sending the pipeline logs to a centralized logging system, you can gather and analyze log data
from multiple pipeline runs, enabling better visibility into pipeline activities and potential issues.
Monitoring the pipeline’s performance and health helps identify any abnormalities or bottlenecks,
allowing for proactive remediation.
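As a minimal Python sketch of shipping logs to a central collector (assuming a syslog endpoint at logs.example.com:514; the address is illustrative):
# Sketch: sending pipeline logs to a centralized syslog collector
import logging
import logging.handlers

logger = logging.getLogger('pipeline')
logger.setLevel(logging.INFO)
handler = logging.handlers.SysLogHandler(address=('logs.example.com', 514))
logger.addHandler(handler)
logger.info('Build step completed successfully')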
Misconfigured Access Controls:
Improperly configured access controls, permissions, or roles within the pipeline, allowing
unauthorized users or malicious actors to gain elevated privileges or access to critical resources.
In the noncompliant code, there is a lack of proper access controls in the pipeline. This means
that unauthorized individuals may have access to sensitive information or critical pipeline
components, leading to potential security breaches or unauthorized actions.
# Noncompliant: Misconfigured Access Controls in pipeline
stages:
  - name: Build and Deploy
    steps:
      - name: Build Application
        command: |
          build-tool
      - name: Deploy Application
        command: |
          deploy-tool
To mitigate the risk of misconfigured access controls in the pipeline, it is crucial to implement
proper access controls and authentication mechanisms.
# Compliant: Enhanced Access Controls in pipeline
stages:
  - name: Build and Deploy
    steps:
      - name: Build Application
        command: |
          build-tool
        security:
          - role: build-deploy
      - name: Deploy Application
        command: |
          deploy-tool
        security:
          - role: build-deploy
In the compliant code, each step in the pipeline has an associated security configuration that
specifies the necessary roles or permissions required to execute that step. This ensures that only
authorized individuals or entities can perform specific actions in the pipeline.
Rules / Android
Android
TABLE OF CONTENTS
1 Java
a Improper Platform Usage
b Insecure Data Storage
c Insecure Communication
d Insecure Authentication
e Insufficient Cryptography
f Insecure Authorization
g Client Code Quality
h Code Tampering
i Reverse Engineering
j Extraneous Functionality
Java
Improper Platform Usage
Noncompliant code:
// Noncompliant code
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_insecure_storage);
    SharedPreferences prefs = getSharedPreferences("myPrefs", MODE_WORLD_READABLE);
    prefs.edit().putString("sensitiveData", "secret value").apply(); // illustrative value
}
In this noncompliant code, the SharedPreferences object is created with the mode
MODE_WORLD_READABLE, which allows any other application to read the stored preferences.
This violates the principle of proper platform usage, as sensitive data should not be stored in a
way that allows unauthorized access.
Compliant code:
// Compliant code
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_secure_storage);
    SharedPreferences prefs = getSharedPreferences("myPrefs", MODE_PRIVATE);
    prefs.edit().putString("sensitiveData", "secret value").apply(); // illustrative value
}
In the compliant code, the SharedPreferences object is created with the mode MODE_PRIVATE,
which ensures that the preferences are only accessible by the application itself. This follows the
principle of proper platform usage by securely storing sensitive data without allowing
unauthorized access.
By using MODE_PRIVATE instead of MODE_WORLD_READABLE, the compliant code ensures that
the stored preferences are only accessible within the application, mitigating the risk of exposing
sensitive information to other applications on the device.
Semgrep:
For Semgrep, you can use the following rule to detect the insecure use of
MODE_WORLD_READABLE in SharedPreferences:
rules:
  - id: insecure-sharedpreferences
    patterns:
      - pattern-regex: 'getSharedPreferences\("\w+",\s*MODE_WORLD_READABLE\)'
    message: SharedPreferences created with MODE_WORLD_READABLE
    languages: [java]
    severity: ERROR
CodeQL:
For CodeQL, you can use the following query to detect the insecure use of
MODE_WORLD_READABLE in SharedPreferences:
import java
import android

from MethodInvocation m
where m.toString().indexOf("MODE_WORLD_READABLE") >= 0
select m
Insecure Data Storage
Noncompliant code:
// Noncompliant code
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_insecure_storage);
    String sensitiveData = "This is my sensitive data";
    writeToFile(sensitiveData);
}

private void writeToFile(String data) {
    try {
        FileWriter writer = new FileWriter(new File(getFilesDir(), "data.txt")); // file name is illustrative
        writer.write(data);
        writer.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
In this noncompliant code, sensitive data is written to a file using the FileWriter without
considering secure storage options. The data is stored in the application’s private file directory,
but it lacks proper encryption or additional security measures, making it vulnerable to
unauthorized access.
Compliant code:
// Compliant code
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_secure_storage);
    String sensitiveData = "This is my sensitive data";
    writeToFile(sensitiveData);
}

private void writeToFile(String data) {
    try {
        OutputStreamWriter writer =
                new OutputStreamWriter(openFileOutput("data.txt", MODE_PRIVATE));
        writer.write(data);
        writer.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
In the compliant code, the FileOutputStream and OutputStreamWriter are used along with the
openFileOutput method to securely write the sensitive data to a file in the application’s private
storage directory. The MODE_PRIVATE flag ensures that the file is only accessible by the
application itself. This follows secure storage practices and helps protect the sensitive data from
unauthorized access.
By using openFileOutput with MODE_PRIVATE instead of FileWriter, the compliant code ensures
secure storage of sensitive data, mitigating the risk of unauthorized access or exposure.
Semgrep:
rules:
  - id: insecure-file-write
    patterns:
      - pattern-regex: 'FileWriter\.write\(\w+\)'
    message: Sensitive data written with FileWriter instead of openFileOutput with MODE_PRIVATE
    languages: [java]
    severity: WARNING
CodeQL:
import java
import android

from MethodInvocation m
where m.toString().indexOf("FileWriter") >= 0
select m
Insecure Communication
Noncompliant code:
// Noncompliant code
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_insecure_communication);
    String requestData = "Some sensitive data";
    sendData(requestData);
}

private String sendData(String data) {
    try {
        URL url = new URL("http://example.com/api/"); // insecure HTTP endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        OutputStreamWriter writer = new OutputStreamWriter(conn.getOutputStream());
        writer.write(data);
        writer.flush();
        int responseCode = conn.getResponseCode();
        if (responseCode == HttpURLConnection.HTTP_OK) {
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()));
            StringBuilder response = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                response.append(line);
            }
            reader.close();
            return response.toString();
        } else {
            conn.disconnect();
            return null;
        }
    } catch (Exception e) {
        e.printStackTrace();
        return null;
    }
}
In this noncompliant code, the app sends sensitive data over an insecure HTTP connection
(http://example.com/api/) using HttpURLConnection. This puts the data at risk of interception,
tampering, and unauthorized access.
Compliant code:
// Compliant code
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_secure_communication);
    String requestData = "Some sensitive data";
    sendData(requestData);
}

private String sendData(String data) {
    try {
        URL url = new URL("https://example.com/api/"); // encrypted HTTPS endpoint
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        OutputStreamWriter writer = new OutputStreamWriter(conn.getOutputStream());
        writer.write(data);
        writer.flush();
        int responseCode = conn.getResponseCode();
        if (responseCode == HttpsURLConnection.HTTP_OK) {
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()));
            StringBuilder response = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                response.append(line);
            }
            reader.close();
            return response.toString();
        } else {
            conn.disconnect();
            return null;
        }
    } catch (Exception e) {
        e.printStackTrace();
        return null;
    }
}
// Rest of the code...
In the compliant code, the app uses HttpsURLConnection to establish a secure HTTPS connection
(https://example.com/api/) for transmitting sensitive data. HTTPS ensures that the communication
is encrypted, providing confidentiality and integrity of the data. By using HTTPS instead of HTTP,
the compliant code addresses the vulnerability of insecure communication and reduces the risk of
interception or unauthorized access to sensitive data.
Semgrep:
rules:
  - id: insecure-http-connection
    patterns:
      - pattern-regex: 'new URL\("http://[^"]*"\)'
    message: Sensitive data transmitted over unencrypted HTTP
    languages: [java]
    severity: ERROR
CodeQL:
import java
import android

from MethodInvocation m
where m.toString().indexOf("http://") >= 0
select m
Insecure Authentication
Noncompliant code:
// Noncompliant code
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_login);
    usernameEditText = findViewById(R.id.usernameEditText);
    passwordEditText = findViewById(R.id.passwordEditText);
    loginButton = findViewById(R.id.loginButton);
    loginButton.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            String username = usernameEditText.getText().toString();
            String password = passwordEditText.getText().toString();
            // Comparison against hard-coded credentials
            if (username.equals("admin") && password.equals("admin123")) {
                // Login successful
                openMainActivity();
            } else {
                // Login failed
            }
        }
    });
}

private void openMainActivity() {
    Intent intent = new Intent(this, MainActivity.class);
    startActivity(intent);
    finish();
}
In this noncompliant code, the app performs authentication by comparing the username and
password entered by the user (admin and admin123) with hard-coded values. This approach is
insecure because the credentials are easily discoverable and can be exploited by attackers.
Compliant code:
// Compliant code
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_login);
    usernameEditText = findViewById(R.id.usernameEditText);
    passwordEditText = findViewById(R.id.passwordEditText);
    loginButton = findViewById(R.id.loginButton);
    loginButton.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            String username = usernameEditText.getText().toString();
            String password = passwordEditText.getText().toString();
            if (authenticateUser(username, password)) {
                // Login successful
                openMainActivity();
            } else {
                // Login failed
            }
        }
    });
}

private boolean authenticateUser(String username, String password) {
    // Perform secure authentication (e.g., hashing, salting, server-side validation)
    return false;
}

private void openMainActivity() {
    Intent intent = new Intent(this, MainActivity.class);
    startActivity(intent);
    finish();
}
In the compliant code, the app separates the authentication logic into a dedicated method
authenticateUser(), which can be implemented securely. This method can utilize secure
authentication mechanisms such as hashing, salting, and server-side validation. By implementing
a secure authentication process instead of relying on hard-coded credentials, the compliant code
addresses the vulnerability of insecure authentication and reduces the risk of unauthorized
access to user accounts.
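As a language-agnostic sketch of the hashing-and-salting idea, here is a minimal Python example using only the standard library (the iteration count is illustrative):
# Sketch: salted password hashing and verification with PBKDF2
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, expected)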
Semgrep:
rules:
  - id: insecure-login-credentials
    patterns:
      - pattern-regex: 'equals\("admin"\).*equals\("admin123"\)'
    message: Hard-coded credentials used for authentication
    languages: [java]
    severity: ERROR
CodeQL:
import java
import android

from BinaryExpression b
where b.toString().indexOf("\"admin\"") >= 0
select b
Insufficient Cryptography
Noncompliant code:
// Noncompliant code
public class EncryptionUtils {
    private static final String SECRET_KEY = "mySecretKey"; // hard-coded key

    public static byte[] encrypt(byte[] data) {
        try {
            Cipher cipher = Cipher.getInstance("AES");
            // simple byte conversion of the key, with no salting or key strengthening
            cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(SECRET_KEY.getBytes(), "AES"));
            return cipher.doFinal(data);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return null;
    }

    public static byte[] decrypt(byte[] data) {
        try {
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(SECRET_KEY.getBytes(), "AES"));
            return cipher.doFinal(data);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return null;
    }
}
In this noncompliant code, a custom EncryptionUtils class is implemented to encrypt and decrypt
data using the AES algorithm. However, the code uses a hard-coded key (mySecretKey) and does
not incorporate other essential security measures like salting, key strengthening, or secure key
storage. This approach is insufficient and can be vulnerable to various cryptographic attacks.
Compliant code:
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import android.util.Base64;

public class EncryptionUtils {
    public static String encrypt(String plaintext, String secretKey) {
        try {
            byte[] iv = new byte[16];
            new SecureRandom().nextBytes(iv); // random IV per message (IV handling shown here is illustrative)
            Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE,
                    new SecretKeySpec(deriveKey(secretKey), "AES"),
                    new IvParameterSpec(iv));
            byte[] encrypted = cipher.doFinal(plaintext.getBytes());
            return Base64.encodeToString(encrypted, Base64.DEFAULT);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    public static String decrypt(String ciphertext, String secretKey, byte[] iv) {
        try {
            Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
            cipher.init(Cipher.DECRYPT_MODE,
                    new SecretKeySpec(deriveKey(secretKey), "AES"),
                    new IvParameterSpec(iv));
            return new String(cipher.doFinal(Base64.decode(ciphertext, Base64.DEFAULT)));
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    private static byte[] deriveKey(String secretKey) throws NoSuchAlgorithmException {
        // Derive a 256-bit key from the secret using SHA-256
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(secretKey.getBytes());
        return md.digest();
    }
}
In the compliant code, the key generation has been improved by using a more secure approach.
Instead of a simple byte conversion of the secretKey, a hashing algorithm (SHA-256) is used to
derive a stronger key from the secretKey. This enhances the security of the encryption process by
introducing a more robust key derivation function.
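The same derivation step, sketched in Python for comparison (illustrative; a salted KDF such as PBKDF2 is stronger than a bare hash):
# Sketch: deriving a 32-byte AES key from a passphrase (bare SHA-256, mirroring the Java code)
import hashlib

secret_key = "mySecretKey"
aes_key = hashlib.sha256(secret_key.encode()).digest()  # 32 bytes, usable as an AES-256 key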
Semgrep:
rules:
  - id: insecure-encryption-key
    patterns:
      - pattern-regex: '"mySecretKey"'
    message: Hard-coded encryption key
    languages: [java]
    severity: ERROR
CodeQL:
import java
import javax.crypto

from MethodInvocation m
where m.toString().indexOf("getInstance(\"AES\")") >= 0
select m
Insecure Authorization
Noncompliant code:
public class AuthorizationUtils {
    public boolean checkAdminAccess(String username, String password) {
        if (username.equals("admin") && password.equals("admin123")) { // hard-coded values (illustrative)
            return true;
        } else {
            return false;
        }
    }
}
In this noncompliant code, the checkAdminAccess method performs an insecure authorization
check by comparing the username and password directly with hardcoded values. This approach is
vulnerable to attacks such as password guessing and brute-force attacks, as well as unauthorized
access if the credentials are compromised.
To address this issue, here’s an example of compliant code for secure authorization in Android
Java:
Compliant code:
public class AuthorizationUtils {
    // For demonstration purposes, we'll use a simple comparison with hardcoded values.
    private static final String ADMIN_USERNAME = "admin";
    private static final String ADMIN_PASSWORD_HASH = "<hashed-password>"; // illustrative

    public boolean checkAdminAccess(String username, String passwordHash) {
        if (ADMIN_USERNAME.equals(username) && ADMIN_PASSWORD_HASH.equals(passwordHash)) {
            return true;
        } else {
            return false;
        }
    }
}
In the compliant code, the username and password comparison is still present, but the actual
credentials are stored securely, such as in a secure database or a hashed and salted format.
Additionally, this code provides an example where the hardcoded values are defined as constants,
making it easier to manage and update the credentials if needed. It is important to implement
proper authentication mechanisms, such as using secure password storage and strong
authentication protocols, to ensure secure authorization in real-world scenarios.
Semgrep:
rules:
  - id: insecure-admin-access
    patterns:
      - pattern-regex: 'equals\("admin"\)'
    message: Authorization check against hard-coded credentials
    languages: [java]
    severity: ERROR
CodeQL:
import java

from MethodDeclaration m
where m.getName() = "checkAdminAccess" and
      m.getBody().toString().indexOf("\"admin\"") >= 0
select m
Client Code Quality
Noncompliant code:
public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        textView = findViewById(R.id.textView);

        performComplexOperation(); // long-running work on the main UI thread (illustrative)

        // Update the UI
        textView.setText("Operation completed");
    }
}
In this noncompliant code, a long and complex operation is performed directly on the main UI
thread within the onCreate method of the MainActivity class. Performing such heavy
computations on the main UI thread can cause the app to become unresponsive and negatively
impact the user experience. It is essential to offload time-consuming operations to background
threads to keep the UI responsive.
To address this issue, here’s an example of compliant code that improves client code quality in
Android Java:
Compliant code:
public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        textView = findViewById(R.id.textView);

        new Thread(new Runnable() {
            @Override
            public void run() {
                performComplexOperation(); // heavy work off the main thread (illustrative)

                // Update the UI on the main thread
                runOnUiThread(new Runnable() {
                    @Override
                    public void run() {
                        // Update the UI
                        textView.setText("Operation completed");
                    }
                });
            }
        }).start();
    }
}
In the compliant code, the heavy computations are performed on a background thread using
Thread or other concurrency mechanisms. Once the computations are completed, the UI update
is performed on the main UI thread using runOnUiThread to ensure proper synchronization with
the UI. By offloading the heavy computations to a background thread, the UI remains responsive,
providing a better user experience.
Semgrep:
rules:
  - id: long-operation-on-ui-thread
    patterns:
      - pattern-regex: 'performComplexOperation\(\)'
    message: Long-running operation executed on the main UI thread
    languages: [java]
    severity: WARNING
CodeQL:
import android

from MethodDeclaration m
where m.getEnclosingType().toString() = "MainActivity" and
      m.getBody().toString().indexOf("performComplexOperation()") >= 0
select m
Code Tampering
Noncompliant code:
public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        textView = findViewById(R.id.textView);

        boolean isAuthorizedSource = checkInstallationSource();
        if (!isAuthorizedSource) {
            // Display an error message, but keep running
            textView.setText("Unauthorized installation source detected");
        }
    }

    private boolean checkInstallationSource() {
        // For simplicity, assume the check always returns false in this example
        return false;
    }
}
In this noncompliant code, there is a check performed in the onCreate method to verify if the app
is installed from an unauthorized source. If the check fails (returns false), an error message is
displayed, but the app continues its execution.
To address this issue, here’s an example of compliant code that mitigates code tampering in
Android Java:
Compliant code:
public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        textView = findViewById(R.id.textView);
        boolean isAuthorizedSource = checkInstallationSource();
        if (!isAuthorizedSource) {
            finishAffinity(); // close all activities and exit the app
            return;
        }
    }

    private boolean checkInstallationSource() { return false; }
}
In the compliant code, when the check for an unauthorized app installation fails, the finishAffinity()
method is called to close all activities and exit the app. Additionally, the return statement is used
to prevent further execution of code in the onCreate method. By terminating the app’s execution
upon detection of an unauthorized installation source, the potential for code tampering is
mitigated.
Semgrep:
rules:
  - id: unauthorized-app-installation-check
    patterns:
      - pattern-regex: 'checkInstallationSource\(\)'
    message: Verify that a failed installation-source check terminates the app
    languages: [java]
    severity: WARNING
CodeQL:
import android

from MethodDeclaration m
where m.getEnclosingType().toString() = "MainActivity" and
      m.getBody().toString().indexOf("checkInstallationSource()") >= 0
select m
Reverse Engineering
Noncompliant code:
public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        textView = findViewById(R.id.textView);
        String sensitiveData = performSensitiveOperation(); // illustrative
        textView.setText(sensitiveData); // sensitive data exposed on screen
    }
}
Compliant code:
public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        textView = findViewById(R.id.textView);
        textView.setText("Operation completed"); // generic message instead of sensitive data
    }
}
In the compliant code, instead of directly displaying the sensitive data on the screen, a generic
message is shown to avoid exposing sensitive information. By obfuscating the sensitive data and
displaying a generic message, the reverse engineering efforts are made more challenging, making
it harder for an attacker to extract sensitive information from the APK.
Semgrep:
rules:
  - id: sensitive-data-display
    patterns:
      - pattern-regex: 'textView\.setText\(performSensitiveOperation\(\)\)'
    message: Sensitive data displayed directly in the UI
    languages: [java]
    severity: WARNING
CodeQL:
import android

from MethodDeclaration m
where m.getBody().toString().indexOf("textView.setText(performSensitiveOperation())") >= 0
select m
Extraneous Functionality
Noncompliant code:
public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        loginButton = findViewById(R.id.loginButton);
        adminButton = findViewById(R.id.adminButton);

        loginButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                performLogin();
            }
        });

        adminButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                performAdminAction();
            }
        });
    }

    private void performLogin() {
        // Login functionality
    }

    private void performAdminAction() {
        // Admin functionality
    }
}
In this noncompliant code, there is an adminButton along with its associated functionality for
performing administrative actions. However, if the app does not require or intend to provide
administrative functionality to regular users, this can introduce unnecessary risk. It increases the
attack surface and potential for unauthorized access if an attacker gains control of the app.
To address this issue, here’s an example of compliant code that removes the extraneous
functionality:
Compliant code:
public class MainActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        loginButton = findViewById(R.id.loginButton);

        loginButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                performLogin();
            }
        });
    }

    private void performLogin() {
        // Login functionality
    }
}
In the compliant code, the adminButton and its associated administrative functionality have been
removed. The app now focuses solely on the required login functionality for regular users,
reducing the attack surface and eliminating unnecessary functionality that could introduce
potential security risks.
Semgrep:
rules:
  - id: hardcoded-actions
    patterns:
      - pattern-regex: 'performLogin\(\)'
      - pattern-regex: 'performAdminAction\(\)'
    message: Review app actions for extraneous functionality
    languages: [java]
    severity: INFO
CodeQL:
import android

from MethodDeclaration m
where m.getBody().getAStatement().toString().indexOf("performLogin()") >= 0 or
      m.getBody().getAStatement().toString().indexOf("performAdminAction()") >= 0
select m
Rules / C
C
TABLE OF CONTENTS
1 Buffer Overflow
Buffer Overflow
Noncompliant code:
#include <stdio.h>

void copy_string(char* dest, char* src) {
    int i = 0;
    while (src[i] != '\0') {
        dest[i] = src[i];
        i++;
    }
    dest[i] = '\0';
}

int main() {
    char str1[6];
    char str2[] = "Hello!!"; /* 7 characters, 8 bytes with the null terminator */
    copy_string(str1, str2);
    printf("%s", str1);
    return 0;
}
In this noncompliant code, the copy_string function does not check the length of dest, and if src is longer than dest, a buffer overflow will occur, potentially overwriting adjacent memory and causing undefined behavior. In this case, str2 is 7 characters long, so the call to copy_string will overflow the buffer of str1, which has a length of only 6.
Compliant code:
void copy_string(char* dest, char* src, size_t dest_size) {
    size_t i = 0;
    while (src[i] != '\0' && i < dest_size - 1) {
        dest[i] = src[i];
        i++;
    }
    dest[i] = '\0';
}

int main() {
    char str1[6];
    char str2[] = "Hello!!";
    copy_string(str1, str2, sizeof(str1));
    printf("%s", str1);
    return 0;
}
In this compliant code, the copy_string function takes an additional parameter dest_size, which is the maximum size of the dest buffer. The function checks the length of src against dest_size to avoid overflowing the buffer. The sizeof operator is used to get the size of the dest buffer, so it is always passed correctly to copy_string. By using the dest_size parameter, the code ensures that it doesn’t write more data than the destination buffer can hold, preventing buffer overflows.
Semgrep:
rules:
  - id: buffer-overflow
    patterns:
      - pattern: copy_string($DST, $SRC)
    message: Unbounded string copy; pass and enforce the destination size
    languages: [c]
    severity: WARNING
CodeQL:
import cpp

from Function f
where f.getName() = "copy_string" and f.getNumberOfParameters() = 2
select f
Rules / CloudFormation
CloudFormation
TABLE OF CONTENTS
1 Hardcoded Name
Hardcoded Name
Noncompliant code:
# Noncompliant code
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-bucket
In this noncompliant code, an AWS CloudFormation template is used to create an S3 bucket. The
bucket name is hardcoded as my-bucket without considering potential naming conflicts or
security best practices. This approach introduces security risks, as the bucket name might
already be taken or it might inadvertently expose sensitive information.
Compliant code:
# Compliant code
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName:
        Fn::Sub: "my-bucket-${AWS::StackName}-${AWS::Region}"
In the compliant code, the bucket name is dynamically generated using the Fn::Sub intrinsic
function. The bucket name is composed of the string “my-bucket-“, followed by the current
CloudFormation stack name (AWS::StackName), and the AWS region (AWS::Region). This
approach ensures uniqueness of the bucket name within the CloudFormation stack and helps
mitigate potential naming conflicts.
By using dynamic naming with the Fn::Sub function, you can avoid hardcoded values and provide
a more flexible and secure approach to resource creation in CloudFormation.
Additionally, you can implement other security measures such as:
• Leveraging IAM policies to control access permissions for the created resources.
• Implementing resource-level permissions using AWS Identity and Access Management (IAM)
roles and policies.
• Encrypting sensitive data at rest using AWS Key Management Service (KMS) or other
encryption mechanisms.
• Implementing stack-level or resource-level CloudFormation stack policies to control stack
updates and prevent unauthorized modifications.
By following security best practices and utilizing dynamic values in CloudFormation templates,
you can enhance the security, flexibility, and reliability of your infrastructure deployments in AWS.
Semgrep:
rules:
  - id: noncompliant-s3-bucket-properties
    patterns:
      - pattern-regex: 'BucketName:\s*my-bucket'
    message: Hardcoded S3 bucket name
    languages: [yaml]
    severity: WARNING
CodeQL:
import cf
from Template t
where exists (Bucket b | b.getType().toString() = "AWS::S3::Bucket")
select t
Rules / Cpp
Cpp
TABLE OF CONTENTS
1 Null Pointer Dereference
Null Pointer Dereference
Noncompliant code:
void foo(int* ptr) {
    *ptr = 42; // no null check before dereferencing
}

int main() {
    int* ptr = nullptr;
    foo(ptr);
    return 0;
}
In this example, the foo() function takes a pointer to an integer and dereferences it to set its value
to 42, but it does not check if the pointer is null. If a null pointer is passed to foo(), a null pointer
dereference will occur, which can cause the program to crash or exhibit undefined behavior.
Compliant code:
void foo(int* ptr) {
    if (ptr != nullptr) {
        *ptr = 42;
    } else {
        // handle error
    }
}

int main() {
    int i = 0;
    int* ptr = &i; // pointer initialized to a valid address
    foo(ptr);
    return 0;
}
In the compliant code, the pointer is initialized to a valid address of an integer variable i using the
address-of operator &. This ensures that the pointer is not null and prevents a null pointer
dereference.
Alternatively, the foo() function could be modified to handle null pointers gracefully, such as
returning an error code or throwing an exception. In general, it is important to always check
pointers for null before dereferencing them to prevent null pointer dereferences, which can lead to
crashes and security vulnerabilities.
Semgrep:
rules:
  - id: null-pointer-dereference
    patterns:
      - pattern: '*$PTR = $VAL;'
    message: Pointer dereferenced without a null check
    languages: [cpp]
    severity: WARNING
CodeQL:
import cpp
from Function f
where f.getName() = "foo"
select f
Rules / Csharp
Csharp
TABLE OF CONTENTS
1 Exposure of sensitive information
2 Insertion of Sensitive Information Into Sent Data
3 Cross-Site Request Forgery (CSRF)
4 Use of Hard-coded Password
5 Broken or Risky Crypto Algorithm
6 Insufficient Entropy
7 XSS
8 SQL Injection
9 External Control of File Name or Path
10 Generation of Error Message Containing Sensitive Information
11 unprotected storage of credentials
12 Trust Boundary Violation
13 Insufficiently Protected Credentials
14 Restriction of XML External Entity Reference
15 Vulnerable and Outdated Components
16 Improper Validation of Certificate with Host Mismatch
17 Improper Authentication
18 Session Fixation
19 Inclusion of Functionality from Untrusted Control
20 Download of Code Without Integrity Check
21 Deserialization of Untrusted Data
22 Insufficient Logging
23 Improper Output Neutralization for Logs
24 Omission of Security-relevant Information
25 Sensitive Information into Log File
26 Server-Side Request Forgery (SSRF)
Exposure of sensitive information
Noncompliant code:
using System;

class Program
{
    static void Main()
    {
        try
        {
            // Simulating an error
            throw new Exception("Database connection failed: password=secret123"); // sensitive detail (illustrative)
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message); // exposes the sensitive detail to the user
        }
    }
}
In this noncompliant code, the throw statement intentionally generates an exception with an error
message that includes sensitive information, such as a database connection string, a password,
or any other confidential data. The error message is then printed to the console, potentially
exposing sensitive information to unauthorized users or attackers.
To address this issue and prevent the exposure of sensitive information via error messages, here’s
an example of compliant code:
Compliant code:
using System;

class Program
{
    static void Main()
    {
        try
        {
            // Simulating an error
            throw new Exception("Internal failure");
        }
        catch (Exception ex)
        {
            Console.WriteLine("An unexpected error occurred. Please try again later.");
            LogException(ex); // full details go only to a server-side log
        }
    }

    static void LogException(Exception ex)
    {
        // Write exception details to secure server-side logging (illustrative)
    }
}
In the compliant code, the error message intentionally omits any sensitive information and
provides a generic error message instead. The sensitive information is logged on the server side
for debugging or monitoring purposes, but it is not exposed to the user or client.
By ensuring that error messages do not contain sensitive information, the compliant code reduces
the risk of exposing confidential data to potential attackers or unauthorized users.
Semgrep:
rules:
  - id: sensitive-information-exposure
    patterns:
      - pattern-regex: 'catch \(Exception ex\)[\s\S]*Console\.WriteLine\(ex\.Message\);'
    message: Exception message printed directly to output
    languages: [csharp]
    severity: WARNING
CodeQL:
import csharp

from TryStmt tryCatch
where exists(MethodInvocation println |
    println.getArgument(0).toString().indexOf("ex.Message") >= 0
)
select tryCatch
Insertion of Sensitive Information Into Sent Data
Noncompliant code:
using System.Net;
using System.Net.Mail;

class Program
{
    static void Main()
    {
        string sensitiveData = "SSN: 123-45-6789"; // illustrative
        string username = "user", password = "pass"; // illustrative
        var client = new SmtpClient("smtp.example.com");
        client.EnableSsl = true;
        client.Credentials = new NetworkCredential(username, password);
        var message = new MailMessage("from@example.com", "to@example.com",
                                      "Report", "Body: " + sensitiveData);
        client.Send(message);
    }
}
In this noncompliant code, the sensitive information (stored in the sensitiveData variable) is
concatenated with the email body without any encryption or obfuscation. This means that the
sensitive data is directly included in the sent data without any protection, which can lead to
potential exposure or unauthorized access to the information.
To address this issue and ensure the protection of sensitive information in sent data, here’s an
example of compliant code:
Compliant code:
using System;
using System.Net;
using System.Net.Mail;

class Program
{
    static void Main()
    {
        var client = new SmtpClient("smtp.example.com");
        client.EnableSsl = true;
        client.Credentials = new NetworkCredential("user", "pass"); // illustrative
        var message = new MailMessage("from@example.com", "to@example.com",
                                      "Report", "See the attached file.");
        var attachment = new Attachment("report.encrypted"); // protected attachment (illustrative)
        message.Attachments.Add(attachment);
        client.Send(message);
    }
}
In the compliant code, instead of directly inserting the sensitive information into the email body, it
is attached as a secure attachment. This helps to protect the sensitive data during transmission,
ensuring that it is not exposed in the sent data.
By properly handling sensitive information and avoiding direct insertion into sent data, the
compliant code enhances the security and privacy of the sensitive data, reducing the risk of
unauthorized access or exposure.
Semgrep:
rules:
  - id: sensitive-data-in-mail-body
    patterns:
      - pattern-regex: '"Body: " \+ \w+'
    message: Sensitive data concatenated into an outgoing message body
    languages: [csharp]
    severity: WARNING
CodeQL:
import csharp

from ObjectCreation messageCreation
where messageCreation.getArgument(3).toString().indexOf("Body:") >= 0
select messageCreation
Cross-Site Request Forgery (CSRF)
Noncompliant code:
using System;
using System.Web.UI;

public partial class AdminPage : Page // class name is illustrative
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (Request.QueryString["action"] == "delete")
        {
            string id = Request.QueryString["id"];
            // ... perform the delete with no CSRF protection
        }
    }
}
In this noncompliant code, the page performs a delete action based on a query parameter action
and an ID specified in the query parameter id. However, there is no CSRF protection implemented,
which means that an attacker can craft a malicious link or form on a different website that
performs a delete action on behalf of the user without their consent.
To address this issue and implement CSRF protection, here’s an example of compliant code:
Compliant code:
using System;
using System.Web.UI;

public partial class AdminPage : Page // class name is illustrative
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (IsPostBack)
        {
            // Verify CSRF token
            if (ValidateCsrfToken() && Request.QueryString["action"] == "delete")
            {
                string id = Request.QueryString["id"];
                // ...
            }
        }
        else
        {
            GenerateCsrfToken();
        }
    }

    private bool ValidateCsrfToken()
    {
        // Compare the CSRF token from the request with the stored token
        string requestToken = Request.Form["__RequestVerificationToken"];
        return requestToken != null && requestToken == (string)Session["CsrfToken"];
    }

    private void GenerateCsrfToken()
    {
        string csrfToken = Guid.NewGuid().ToString();
        Session["CsrfToken"] = csrfToken;
        Page.ClientScript.RegisterHiddenField("__RequestVerificationToken", csrfToken);
    }
}
In the compliant code, CSRF protection is implemented using a unique CSRF token. The token is
generated and stored in the session or view state when the page is loaded. On subsequent
requests, the token is validated to ensure that the request originated from the same site and not
from an attacker’s site.
By implementing CSRF protection, the compliant code prevents unauthorized actions by verifying
the integrity of the requests and ensuring that they are originated from the legitimate user. This
helps to protect against CSRF attacks and improves the security of the application.
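The same token pattern, sketched framework-neutrally in Python (session storage is assumed; names are illustrative):
# Sketch: generate and validate a per-session CSRF token
import hmac
import secrets

def generate_csrf_token(session: dict) -> str:
    token = secrets.token_urlsafe(32)
    session['csrf_token'] = token
    return token  # embed in a hidden form field

def validate_csrf_token(session: dict, submitted: str) -> bool:
    expected = session.get('csrf_token', '')
    return hmac.compare_digest(expected, submitted or '')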
Semgrep:
rules:
  - id: csrf-vulnerability
    patterns:
      - pattern-regex: 'Request\.QueryString\["action"\] == "delete"'
    message: State-changing action without CSRF token validation
    languages: [csharp]
    severity: WARNING
CodeQL:
import csharp

from Method method
where exists(BinaryExpression binaryExpr |
    binaryExpr.getRightOperand().toString() = "\"delete\""
)
select method
Use of Hard-coded Password
Noncompliant code:
using System;
using System.Data.SqlClient;

class Program
{
    private string connectionString =
        "Server=myServer;Database=myDb;User Id=admin;Password=secret123;"; // hard-coded password (illustrative)

    void Connect()
    {
        var connection = new SqlConnection(connectionString);
        connection.Open();
        // ...
    }
}
In this noncompliant code, the database connection string contains a hard-coded password.
Storing sensitive information like passwords directly in the source code poses a security risk, as
the password can be easily discovered if the code is accessed or leaked.
To address this issue and implement a more secure approach, here’s an example of compliant
code:
Compliant code:
using System;
using System.Configuration;
using System.Data.SqlClient;

class Program
{
    void Connect()
    {
        // Connection string (including the password) comes from protected configuration
        string connectionString =
            ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString; // name is illustrative
        var connection = new SqlConnection(connectionString);
        connection.Open();
        // ...
    }
}
In the compliant code, the password is not hard-coded in the source code. Instead, it is stored in a
secure configuration file (e.g., web.config or app.config) and accessed using the
ConfigurationManager class. The configuration file should be properly protected and access
should be restricted to authorized personnel.
By removing the hard-coded password and storing it in a secure configuration file, the compliant
code improves the security of the application by preventing unauthorized access to sensitive
information.
Semgrep:
rules:
  - id: hardcoded-connection-password
    patterns:
      - pattern-regex: 'connectionString = "Server=.+;Database=.+;User Id=.+;Password=.+"'
    message: Hard-coded password in database connection string
    languages: [csharp]
    severity: ERROR
CodeQL:
import csharp

from Field field
where field.getInitializer().toString().indexOf("Password=") >= 0
select field
Broken or Risky Crypto Algorithm
Noncompliant code:
using System;
using System.Security.Cryptography;

public static string Encrypt(byte[] data, byte[] keyBytes)
{
    var desCryptoProvider = new TripleDESCryptoServiceProvider();
    desCryptoProvider.Key = keyBytes;
    desCryptoProvider.Mode = CipherMode.ECB; // insecure mode
    desCryptoProvider.Padding = PaddingMode.PKCS7;

    ICryptoTransform encryptor = desCryptoProvider.CreateEncryptor();
    byte[] encryptedData = encryptor.TransformFinalBlock(data, 0, data.Length);
    encryptor.Dispose();
    desCryptoProvider.Clear();
    return Convert.ToBase64String(encryptedData);
}
In this noncompliant code, the TripleDESCryptoServiceProvider class is used with the ECB
(Electronic Codebook) mode, which is known to be insecure. ECB mode does not provide proper
encryption, as it encrypts each block of data independently, leading to potential vulnerabilities.
To address this issue and use a more secure cryptographic algorithm, here’s an example of
compliant code:
Compliant code:
using System;
using System.Security.Cryptography;

public static string Encrypt(byte[] data, byte[] keyBytes, byte[] iv)
{
    using (var aesCryptoProvider = new AesCryptoServiceProvider())
    {
        aesCryptoProvider.Key = keyBytes;
        aesCryptoProvider.IV = iv;
        aesCryptoProvider.Mode = CipherMode.CBC;
        aesCryptoProvider.Padding = PaddingMode.PKCS7;
        using (ICryptoTransform encryptor = aesCryptoProvider.CreateEncryptor())
        {
            byte[] encryptedData = encryptor.TransformFinalBlock(data, 0, data.Length);
            return Convert.ToBase64String(encryptedData);
        }
    }
}
In the compliant code, the AesCryptoServiceProvider class is used with the CBC (Cipher Block
Chaining) mode, which is more secure than ECB mode. Additionally, proper disposal of
cryptographic objects is implemented using the using statement to ensure proper resource
management.
By using a secure cryptographic algorithm like AES with CBC mode, the compliant code improves
the security of the encryption process, making it resistant to known cryptographic vulnerabilities.
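For comparison, a minimal Python sketch of AES-CBC with PKCS7 padding (assuming the third-party cryptography package; key and IV handling are illustrative):
# Sketch: AES-CBC encryption with PKCS7 padding using the 'cryptography' package
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)   # 256-bit key
iv = os.urandom(16)    # fresh random IV per message

padder = padding.PKCS7(128).padder()
padded = padder.update(b"sensitive data") + padder.finalize()

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(padded) + encryptor.finalize()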
Semgrep:
rules:
  - id: insecure-encryption-mode
    patterns:
      - pattern: $P.Mode = CipherMode.ECB;
    message: ECB cipher mode is insecure
    languages: [csharp]
    severity: ERROR
CodeQL:
import csharp

from Assignment assignment
where assignment.getRValue().toString() = "CipherMode.ECB"
select assignment
Insufficient Entropy
Noncompliant code:
using System;

class Program
{
    static void Main()
    {
        var random = new Random(); // time-based default seed: predictable
        int code = random.Next(100000, 999999); // security-sensitive value (illustrative)
        Console.WriteLine(code);
    }
}
In this noncompliant code, the Random class from the System namespace is used to generate
random numbers. However, the Random class uses a time-based seed by default, which can
result in predictable and easily guessable random numbers. This is because the seed value is
based on the current system time, which can be easily determined or even repeated if the code is
executed within a short time span.
To address this issue and improve the entropy of the random number generation, here’s an
example of compliant code:
Compliant code:
using System;
using System.Security.Cryptography;

class Program
{
    static void Main()
    {
        byte[] randomBytes = new byte[16];
        using (var rngCryptoProvider = new RNGCryptoServiceProvider())
        {
            rngCryptoProvider.GetBytes(randomBytes); // cryptographically secure randomness
        }
        Console.WriteLine(Convert.ToBase64String(randomBytes));
    }
}
In the compliant code, RNGCryptoServiceProvider generates cryptographically strong random bytes instead of the predictable, time-seeded Random class.
Semgrep:
rules:
  - id: random-without-seed
    patterns:
      - pattern: new Random()
    message: System.Random is predictable; use a cryptographic RNG
    languages: [csharp]
    severity: WARNING
CodeQL:
import csharp

from ObjectCreation randomCreation
where randomCreation.toString().startsWith("new Random(")
select randomCreation
XSS
Noncompliant code:
using System;

public static string ProcessUserInput(string userInput)
{
    string sanitizedInput = userInput.Replace("<", "&lt;").Replace(">", "&gt;");
    return sanitizedInput;
}
In this noncompliant code, the ProcessUserInput method attempts to sanitize user input by replacing the < and > characters with their corresponding HTML entities (&lt; and &gt;). However, this approach is insufficient to prevent XSS attacks because it only handles these specific characters and fails to cover other potentially malicious input, such as quotes used to break out of attribute values.
To address this issue and properly protect against XSS attacks, here’s an example of compliant
code:
Compliant code:
using System;
using System.Web;

public static string ProcessUserInput(string userInput)
{
    string sanitizedInput = HttpUtility.HtmlEncode(userInput);
    return sanitizedInput;
}
In the compliant code, the HtmlEncode method from the System.Web namespace is used to
properly encode the user input. This method replaces special characters with their corresponding
HTML entities, ensuring that the input is rendered as plain text rather than interpreted as HTML or
JavaScript code.
By using HtmlEncode, the compliant code mitigates the risk of XSS attacks by encoding all
potentially dangerous characters in the user input, making it safe to display the input on web
pages without the risk of executing unintended scripts.
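The equivalent idea in Python, for comparison, using the standard library’s html.escape:
# Sketch: encoding untrusted input before rendering it into HTML
import html

user_input = '<script>alert("xss")</script>'
safe_output = html.escape(user_input)
print(safe_output)  # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;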
It’s important to note that the best approach to prevent XSS attacks is to use contextual output
encoding at the point of rendering, rather than relying solely on input sanitization. This ensures