Enterprise, IoT & Cloud Security Fundamentals

History of Enterprise Security

• In older times there was no concept of a DMZ, as no public Internet existed.
• The only form of networking was dial-up connections; these raised few security concerns, because the phone numbers had to be known.
• Modems were used to make outbound calls and accept inbound calls, primarily to process batch jobs for large backend systems.
• Security challenge: war dialing became a method for attackers to identify modems in large banks of phone numbers and gain unauthorized access to the connected equipment or network.
• Specialized equipment was designed and sold to enterprises to provide security for the modem infrastructure.
• As networking technologies evolved, enterprise assets became accessible on the Internet, and weaknesses in systems and network security were quickly identified by attackers.
• Network equipment manufacturers started developing security products to defeat specific security threats as they were identified. This "band-aid approach" pattern of reaction-based development of security tools continues, driven primarily by mitigating specific threats as they are identified.
• Anti-virus, firewalls, intrusion detection/prevention, and other security technologies are the direct result of an existing threat; they are reactive.
• Consequences:
  • Enterprise security became perimeter security by design and function. Until recently this made sense: though not true, it was thought that the threat has always been external.
  • It has led to bloated security budgets, crowded perimeter zones, and very little increase in security.
  • We have purchased and implemented the latest next-generation firewall technology, intrusion prevention systems, and a myriad of similar security tools.
  • We have increased complexity instead of effectiveness in mitigating threats holistically, producing the current enterprise security facade.
The Evolving Network Edge

• Applications becoming web-enabled
• Demand for enterprise data from business partners and 3rd parties
  • Typically driven by a need to outsource some functions
• Traditionally, enterprise data resided in internal trusted network segments; that is changing with cloud-based offerings and virtualized DMZ implementations.
• The new trend for internal network security design is a "trust no one" model: internal data systems are firewalled and protected at the same level as a DMZ.

Driving Forces for Network Edge Evolution
• Traditional Enterprise
  • Network edge offering a basic Internet presence
• Modern Enterprise
  • Feature-rich Internet-accessible (web) applications
  • Complex connections to business partners
  • Increasing use of cloud-based services
  • Capital savings from BYOD schemes

Enterprise Security Architecture Pitfalls
• The earlier security architectures do not meet newer enterprise trends such as bring your own device (BYOD), cloud migration, and cloud computing.
• They also do not address the internal network facet of information security: the older security architectures deemed internal assets, employees, contractors, and business partners as trusted.
• Example shortcomings of the earlier security architectures:
  • They fail to secure internal assets from internal threats.
  • They remain static and inflexible; small deviations circumvent and undermine the intended security.
  • All internal users are treated as equal, no matter what device is used or whether the user is a non-employee.
  • Security is weak for enterprise data; access is not effectively controlled at the user level.

Evolution in Enterprise Security Architecture
• The older "security" architecture addresses user access to data in a very generic manner, focusing primarily on what protocols can be used at what tier of the network (VLANs and so on).
• The new security architecture addresses all facets of security and provides a realistic picture of the risk posed by any implementation. It takes into account data, processes, applications, user roles, and users, in addition to the traditional network security mechanisms, to provide end-to-end security from entry to the network through to the data resident within the enterprise.

Dilemma in Enterprise Security
• Lack of senior management understanding of security issues.
• Budgetary constraints.

Security Architectures + Security as a Process

Security Architecture Models
• Generic Layered Model
  • Only connected layers communicate with each other.
  • For example, the typical implementation of an Internet-accessible web application positions the presentation and logic tiers within the DMZ infrastructure, with the backend data located in the internal network.
• Micro-architectures
  • Architecture within architecture (detailed below).
• Complex Models
  • Source and destination zones, allowed protocols, and special permitted communication channels per endpoint type.
• Advanced Models
  • Based on data risk.
  • Data risk is comprised of understanding what data needs protection, including from whom and from what, based on loss probability.
Micro-Architecture
• A micro-architecture is architecture within architecture.
• An example is the logical three-tier DMZ architecture:
  • Tier 1: Web or Presentation
  • Tier 2: Application or Logic
  • Tier 3: Database or Data
• This type of architecture is more network-centric (built on network segments), but it can play a part in the overall data-centric security architecture of an enterprise.
• The method may also be used in a cloud-based solution, where an enterprise desires to maintain the three-tier approach.
• Virtualization has had a unique effect on this security architecture:
  • To enforce the presentation, application, and database tiers, there should essentially be three distinct physical systems segmented by a firewall.
  • With the ability to host all three tiers on a single physical system, the lines of segmentation have been blurred.
  • Segmentation happens at a physical hardware layer below the virtualized system's operating system, yet above the traditional physical network segmentation of switches, routers, and firewalls.
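The tier-to-tier flow control described above can be sketched as a simple deny-by-default policy table. A minimal illustration in Python; the zone names, ports, and rules are illustrative assumptions, not any vendor's syntax:

    # Minimal sketch of three-tier flow enforcement (hypothetical names/rules).
    # Each tier may only talk to the adjacent tier, mirroring the firewall
    # segmentation described above.
    ALLOWED_FLOWS = {
        ("internet", "web"): {443},          # clients reach the presentation tier
        ("web", "application"): {8443},      # web tier calls the logic tier
        ("application", "database"): {5432}, # logic tier queries the data tier
    }

    def is_permitted(src_zone: str, dst_zone: str, port: int) -> bool:
        """Deny by default; permit only explicitly listed adjacent flows."""
        return port in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

    assert is_permitted("web", "application", 8443)
    assert not is_permitted("internet", "database", 5432)  # tier skipping blocked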
Data-centric Security Architectures
• Data-centric security architectures emphasize enterprise data: where it is stored, how it is transmitted, and the details of any data interaction.
• The focus of the security architecture is not the network segment or the system; it is the data, which is the purpose for the network and the system.
• Trust models need to be developed in such a way that they encompass all the interactions with the data they are designed to protect.

Data Risk-centric Architectures
• Risk is a key factor of any security architecture.
• Systems and applications exist because there is data to be generated, processed, transmitted, and stored.
• Risk introduced in an enterprise is significantly data-driven.
• This does not mean that we only protect enterprise data; we still need to protect the network that makes data access possible.
• What does data risk-centric mean?
  • From the perspective of the security architecture, we need to focus on the data with the most risk to the business (e.g., credit card data).
  • In other words, if the data is lost, stolen, or manipulated, it would cause adverse implications for the enterprise.
• Trust models can be used as a method of placing certain user types in buckets, with these buckets further defined by a risk assessment.

Defining Data in a Trust Model
• An enterprise must understand what data exists, why the data exists, data sensitivity, and data criticality.
• Typical locations of data can be determined by understanding business processes.
• Where business processes are not well defined, an enterprise can begin by looking at databases and network shares for data at rest.
• This process should identify a majority of the enterprise data; include end-point devices in the search, looking for local database instances and data stored in typical desktop processing applications. Laptops are one location that has been a significant cause of data breaches, because critical and high-risk data was stored on a laptop with no protection, and the laptop was stolen.
• If the enterprise is responsible for meeting the requirements of a regulatory body, it is imperative to fully understand the requirements and what is expected as proof of compliance.
• Requirements should then be integrated into the developed trust models and an effective security architecture.

Defining Processes in a Trust Model
• Identify risks in business processes.
• Once processes have been identified, opportunities should be taken to correct any process that introduces risk to the enterprise, as processes are primarily data-centric, with direct data access and manipulation capabilities.

Defining Applications in a Trust Model
• After identification of the enterprise data and processes, we need to define the applications that transmit, process, or store the defined data.
• The methods in which the applications interact with the data become the factors defining users, roles, and ultimately the security mechanisms required.
• In some cases, applications and protocols can represent the same thing.

Defining Users in a Trust Model
• A user interacts with an application that has access to data.
• A user may be a person, script, system, or another application.
• Not all users will require the same level of access.
• It is critical to identify as many users as possible, and also the types of interactions they have with the enterprise data.
• There are high-level distinctions for users, such as:
  • Internal (employee)
  • External (non-employee)
  • Business partner
  • Contractor

Defining Roles in a Trust Model
• An important part of defining users is to identify the interactions that the users will have with the data, including how the access will be facilitated (through an application, shell, script, or direct access).
• Identify user roles based on the information learned, rather than simply by departmental role. High-level roles:
  • Application User:
    • Focus on the fact that the enterprise does not know the security posture of the end system.
    • An enterprise is neither responsible for, nor in a position to, update the anti-virus signatures on the external system or make sure the end system is patched.
    • The level of trust should be none, with the highest level of monitoring and protection implemented.
  • Application Owner:
    • A third party has access to a system on the internal network and the data it processes.
    • There must be a level of trust.
    • The enterprise more than likely signed a business contract to enable this relationship; with a contract in place, there are legal protections provided for the enterprise.
  • System Owner:
    • Similar to a business partner; however, the contractor may seem more like an employee.
    • They reside on-site and perform the job functions of a full-time staff member.
    • The more access granted, the more security mechanisms must be in place to reduce the risk of elevated privileges.
  • Data Owner:
    • Has a significant level of access to the enterprise data.
    • As an internal employee, this is the most trusted level.
    • With this access level comes great responsibility, not only for the data owner but also for the enterprise.
    • If the data is decided to have little value, then the security mechanisms can be reduced.
  • Automation scripts and applications:
    • Unique, as no human interaction is involved; many times the permissions are incorrectly configured and allow scripts the ability to launch interactive logons, with shell access equivalent to a standard user.
    • If authentication is required, the credentials are sometimes embedded in the script.
    • These factors contribute to the trust level of the script; automation scripts can be trusted, but not like an internal user.

Defining Policies and Standards
• The policies that will guide secure access and use of the enterprise data.
• The standards that ensure a consistent application of policy.

BYOD Initiative
• Bring your own laptop, cell phone, and tablet are a few of the new initiatives.
• This model is being used by many enterprises to reduce their IT budgets (or not).
• Data access typically occurs through systems owned by the enterprise.

BYOD: Mobile Devices
• Most mobile devices are cellular smartphones or tablets.
• Commonly implemented security measures include using a Mobile Device Management (MDM) solution.

BYOD: Personal Computers
• Some enterprises are leveraging virtualization in a "trust no one" model, where the only way to access anything is through a virtual desktop environment.
• Other (generally smaller) organizations are allowing employees to bring their own PCs to access enterprise assets, with no virtualization, balancing access against risk.
• Limit access to all data that has been assessed at a risk level of high and above, or to a level the enterprise's risk tolerance will allow.

Security as a Process
Security is a process that requires the integration of security into business processes, to ensure enterprise risk is minimized to an acceptable level.

Risk Analysis
• Risk analysis is the process of assessing the components of risk (threats, impact, and probability) as they relate to an asset; in our case, enterprise data.
• A simple risk analysis output may be the decision to spend capital to protect an asset, based on the value of the asset and the scope of impact if the risk is not mitigated.
• It is the method to properly implement security architecture for enterprise initiatives.

Threat Assessment
• A threat is anything that can act negatively against enterprise assets.
• It may be a person, a virus, malware, or a natural disaster.
• Once a threat is defined, the attributes of the threat must be identified and documented.
• The documentation of threats should include the type of threat, identified threat groupings, motivations if any, and methods of action.

Impact Assessment
• Impact is the outcome of threats acting against the enterprise.
• Types of impacts: immediate and residual.
• Immediate impacts are rather easy to determine.
• Residual impacts are longer term and often become known later.
• Impact analysis needs to be thorough and complete.

Probability Assessment
• Probability is the likelihood that the risk will materialize.
• Probability data is as difficult, if not more difficult, to find than threat data.
• Probability and impact are equally important in deciding whether (or not) to handle a threat.

Assessing Risk
• There are two methods to analyze and present risk: qualitative and quantitative.
• A qualitative risk analysis uses descriptive labels, whereas a quantitative analysis works in financial terms.
  • Qualitative risk analysis provides a perspective of risk in levels, with labels such as Critical, High, Medium, and Low.
  • The enterprise must still define what each level means from a general financial perspective.
• There is more financial and mathematical basis involved in a quantitative analysis.
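One common way to operationalize the qualitative labels above is a probability-impact matrix. A minimal Python sketch; the rank scale and level boundaries are illustrative assumptions, since each enterprise must define its own:

    # Qualitative risk rating sketch: boundaries are illustrative assumptions.
    def qualitative_risk(probability: int, impact: int) -> str:
        """probability and impact are ranked 1 (lowest) to 4 (highest)."""
        score = probability * impact          # 1..16
        if score >= 12:
            return "Critical"
        if score >= 8:
            return "High"
        if score >= 4:
            return "Medium"
        return "Low"

    print(qualitative_risk(probability=4, impact=3))  # -> Critical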
• Quantitative risk analysis is an in-depth assessment of what the monetary loss to the enterprise would be if the identified risk were realized.
• Enterprises with a mature risk office will undertake this type of analysis to drive priority budget items, or to find areas to increase insurance, effectively transferring business risk.
• The cost to mitigate should be less than the loss expectancy over a determined period of time; this is a simple return on investment (ROI) calculation.
  • Annual loss expectancy (ALE): the calculation of what the financial loss to the enterprise would be if the threat event were to occur over a single year.
  • Cost of protection (COP): the capital expense associated with the purchase or implementation of a security mechanism to mitigate or reduce the risk scenario.
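The ALE/COP comparison reduces to simple arithmetic. A worked sketch using the terms defined above; all figures are made up for illustration:

    # ROI check for a mitigation: worthwhile if it removes more expected loss
    # per year than it costs. All figures are illustrative assumptions.
    single_loss_expectancy = 250_000   # cost of one occurrence of the event
    annual_rate_of_occurrence = 0.2    # expected occurrences per year
    ale = single_loss_expectancy * annual_rate_of_occurrence  # ALE = 50,000/yr

    cost_of_protection = 30_000        # COP: yearly cost of the control
    risk_reduction = 0.9               # fraction of the ALE the control removes

    net_annual_benefit = ale * risk_reduction - cost_of_protection
    print(f"ALE: {ale:,.0f}  net annual benefit: {net_annual_benefit:,.0f}")
    # A positive net benefit (15,000 here) argues for funding the control.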
Security Policies and Standards
• Policy versus standard
  • Policy dictates what must be done, whereas a standard states how it gets done.
  • A policy's intent is to address behaviors and state principles for IT interaction with the enterprise.
  • Standards focus on configuration and implementation, based on what is outlined in policy.
• Role of tools: tools need to be implemented to measure compliance with, and provide enforcement of, policies and standards.
• A typical set of security policies includes:
  • Information security policy
  • Acceptable use policy
  • Technology use policy
  • Remote access policy
  • Data classification policy
  • Data handling policy
  • Data retention policy
  • Data destruction policy

Enterprise Policies
• Information Security Policy:
  • This policy outlines the organization's approach to safeguarding its information assets. It includes directives on protecting data from unauthorized access, ensuring the integrity of data, and maintaining the availability of information systems.
  • Example: The information security policy may include requirements for regular password updates, encryption of sensitive data, and guidelines for reporting security incidents.
• Acceptable Use Policy:
  • An acceptable use policy defines the acceptable ways in which employees may use company resources, including computers, networks, and the Internet. It sets guidelines for responsible use and outlines consequences for violating those guidelines.
  • Example: The policy might prohibit employees from accessing social media sites during work hours or downloading unauthorized software onto company computers.
• Technology Use Policy:
  • This policy governs the use of specific technologies within the organization, such as email, the Internet, and company-owned devices. It outlines expectations for how employees should use these technologies securely and responsibly.
  • Example: The technology use policy may require employees to use company-provided email accounts for business communication, and prohibit the use of personal devices for work-related tasks.
• Remote Access Policy:
  • A remote access policy defines the requirements and guidelines for accessing the organization's network and resources from outside the corporate network, such as through VPNs or remote desktop services.
  • Example: The policy might mandate the use of multi-factor authentication for remote access and specify which types of devices are allowed to connect remotely.
• Data Classification Policy:
  • This policy categorizes data based on its sensitivity and importance to the organization. It typically includes guidelines for handling, storing, and transmitting data according to its classification level.
  • Example: Data may be classified as "public," "internal use only," "confidential," or "highly confidential," with corresponding restrictions on access and encryption requirements.
• Data Handling Policy:
  • A data handling policy outlines procedures for accessing, processing, storing, and sharing data securely. It includes guidelines for protecting data throughout its lifecycle, from creation to disposal.
  • Example: The policy may require employees to use encryption when transmitting sensitive data and specify which employees have access to certain types of information.
• Data Retention Policy:
  • This policy establishes guidelines for how long different types of data should be retained and when they should be securely disposed of. It ensures compliance with legal and regulatory requirements while minimizing the risk of retaining unnecessary data.
  • Example: The policy may dictate that customer transaction records must be retained for seven years before they can be securely deleted.
• Data Destruction Policy:
  • A data destruction policy outlines procedures for securely and permanently disposing of data when it is no longer needed. It typically includes methods for data sanitization to prevent unauthorized recovery.
  • Example: The policy may require the use of software-based data wiping tools or physical destruction (e.g., shredding) of storage devices before they are disposed of or recycled.

Enterprise Standards
• A typical set of security standards includes:
  • Wireless Network Security Standard
  • Enterprise Monitoring Standard
  • Enterprise Encryption Standard
  • System Hardening Standard
• Wireless Network Security Standard:
  • This standard outlines the requirements and best practices for securing wireless networks within an organization. It includes measures to protect against unauthorized access, data interception, and network disruptions.
  • Example measures may include the use of strong encryption protocols such as WPA2 or WPA3, implementation of secure authentication methods like EAP-TLS, regular monitoring of wireless network traffic for anomalies, and separation of guest and internal networks.
• Enterprise Monitoring Standard:
  • The Enterprise Monitoring Standard defines the procedures and tools used for monitoring the organization's IT infrastructure and systems. It ensures that the necessary monitoring is in place to detect and respond to security incidents, performance issues, and compliance violations.
  • Examples could include the deployment of network intrusion detection systems (NIDS), log monitoring solutions, security information and event management (SIEM) platforms, and regular review of monitoring data for signs of suspicious activity.
• Enterprise Encryption Standard:
  • This standard establishes guidelines for implementing encryption across the organization's data, communications, and storage systems, to protect sensitive information from unauthorized access or disclosure.
  • It may specify the types of data that require encryption (e.g., personally identifiable information, financial data), the encryption algorithms and key lengths to be used, and procedures for key management and distribution.
• System Hardening Standard:
  • System hardening involves configuring IT systems and devices to reduce their attack surface and minimize security vulnerabilities. The System Hardening Standard provides guidelines for securely configuring operating systems, applications, and network devices.
  • Example practices may include disabling unnecessary services and protocols, applying security patches and updates regularly, implementing strong password policies, enabling firewalls, and using host-based intrusion detection/prevention systems (HIDS/HIPS) where applicable.
• These standards collectively help establish a robust security posture for an organization by addressing different aspects of network security, monitoring, encryption, and system hardening. They serve as a framework for implementing security controls and best practices to protect against various threats and risks.

Defence In Depth
When developing an enterprise security strategy, a layered approach is the best method to ensure detection and mitigation of attacks at each tier of the network infrastructure.

Defence in depth is a military strategy that seeks to delay rather than prevent the advance of an attacker, buying time and causing additional casualties by yielding space. Rather than defeating an attacker with a single, strong defensive line, defence in depth relies on the tendency of an attack to lose momentum over time or as it covers a larger area.

Next Generation Firewalls
• Standard firewalls simply check for a policy allowing the source IP, destination IP, and TCP/UDP port, without any further deep packet analysis.
• Next Generation Firewalls (NGFW) perform deeper packet analysis to mitigate malicious traffic masquerading as legitimate.
• An NGFW can inspect traffic for data, threats, and web traffic.
• Single-pass architecture (SP3) integrates multiple threat prevention disciplines (IPS, anti-malware, URL filtering, etc.) into a single stream-based engine with a uniform signature format.
  • This allows traffic to be fully analyzed in a single pass, without the incremental performance degradation seen in other multi-function gateways.
• Advantages:
  • The most significant benefit of the NGFW is awareness, due to deep-packet inspection and analysis.
  • Reduced DMZ complexity: with next generation firewalls, new technologies become a part of the firewall tier, including intrusion prevention, user authorization, application awareness, and advanced malware mitigation.
• Disadvantages:
  • This shift in firewall capabilities may add confusion about the role the appliance plays in the overall network protection.
  • In comparison to web application and database firewalls, while the next generation firewall provides some coverage across these areas today, the available platforms do not have the advanced capabilities of purposefully designed web application firewalls or database firewalls.
• An NGFW is capable of basic detection and mitigation of common web application attacks, but lacks the more in-depth coverage provided by web application firewalls with database counterparts.
• Thus, implementing an NGFW in addition to web application and database firewalls provides the most comprehensive coverage for a network.

NGFW: Application Awareness
• Traditional firewalls only look at the source and destination IP addresses and the TCP or UDP port to make a decision to block or permit a packet.
• An NGFW is able to perform deep packet inspection, to also decode and inspect the application data in network communication.
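The difference between port-based and application-aware filtering can be sketched in a few lines of Python. This is a conceptual toy only; real NGFWs use signature engines and protocol decoders, not a single byte check:

    # Conceptual contrast (not a real firewall implementation):
    # a traditional rule sees only the 5-tuple; an application-aware
    # check also looks inside the payload.
    def traditional_permit(proto: str, dst_port: int) -> bool:
        # Port 443 is allowed, no matter what actually rides over it.
        return proto == "tcp" and dst_port == 443

    def app_aware_permit(proto: str, dst_port: int, payload: bytes) -> bool:
        if not traditional_permit(proto, dst_port):
            return False
        # Identify the application from the payload, not the port number.
        return payload.startswith(b"\x16\x03")  # TLS handshake record

    # Tunneling SSH over 443 passes the traditional check but not the
    # application-aware one.
    print(app_aware_permit("tcp", 443, b"SSH-2.0-OpenSSH_9.6"))  # False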
NGFW: Intrusion Prevention
• Intrusion prevention coverage is normally required for every connection to the enterprise network.
• With the average cost of an IPS being over $40,000, this adds up quickly, in addition to the support and maintenance costs.
• Integrating IPS into the NGFW simplifies the management of IT security and the skill sets required to operationally support the solution.
• One less appliance in the DMZ also increases performance.

NGFW: Malware Mitigation
• The newest addition to the features that NGFWs are offering is advanced malware protection, in the form of botnet identification along with malware analysis in the cloud.
• This is performed by a solution built into the firewall, where the malware is examined in the cloud, and protection is developed and mitigation implemented by the manufacturer.

IDS/IPS
• Intrusion detection and prevention technology has remained a mainstay at the network perimeter.
• Intrusion detection is a method for detecting an attack while taking no action.
  • It still has a significant implementation in internal network server segments, to passively observe the behaviors of internal network users; it has all the detection logic of intrusion prevention, but without the ability to actively mitigate a threat.
• Intrusion prevention is similar to intrusion detection, but has the capability to disrupt and mitigate malicious traffic by blocking and other methods.
• A defense-in-depth strategy is best implemented by including IDS/IPS as an essential network protection mechanism.

IDS/IPS: Detection Methods
• IDS/IPS devices use a combination of three methods to detect and mitigate attacks:
  • Behavior
  • Anomaly
  • Signature
• Behavior Analysis:
  • Behavioral analysis takes some intelligence from the platform to first gain an understanding of how the network "normally" operates.
  • Any deviation from this baseline becomes an outlier and triggers the IDS/IPS based on this behavioral deviation.
  • The primary caveat with this approach is the mistake of baselining malicious traffic within standard network traffic as "normal".
• Anomaly Detection:
  • Anomaly detection at the network perimeter can be extremely effective in analyzing inbound HTTP requests where the protocol is correct, but there has been some manipulation to the packet.
• Signature-based Detection:
  • A consistent method to detect known malicious attacks.
  • The IDS/IPS looks for known patterns in the packets being inspected.
  • When a signature or pattern match is found, a predetermined action is taken.
  • Detects the most common, generic attacks; it is ineffective against more sophisticated attacks.
  • Another annoyance with this method is the high rate of false positives.
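A minimal sketch of the signature matching described above, in Python; the patterns are toy examples, not real IDS signatures:

    # Toy signature matcher: scan a packet payload for known byte patterns
    # and report the predetermined action. Patterns are illustrative only.
    SIGNATURES = {
        b"' OR 1=1--": ("sql-injection-attempt", "block"),
        b"/etc/passwd": ("path-traversal-attempt", "alert"),
    }

    def inspect(payload: bytes):
        for pattern, (name, action) in SIGNATURES.items():
            if pattern in payload:
                return name, action
        return None, "permit"

    print(inspect(b"GET /item?id=1' OR 1=1-- HTTP/1.1"))
    # ('sql-injection-attempt', 'block'). A novel attack with no matching
    # pattern falls through to 'permit': the weakness noted above.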
APT Detection and Mitigation
• APT = Advanced Persistent Threat
• APTs are complicated and well-disguised malware; they use zero-day vulnerabilities, multi-encoded malicious payloads, encryption, obfuscation, and clever masquerading techniques.
• APT mitigation solutions work by providing a safe environment, usually virtualized instances or sandboxes of operating systems, where the malicious software can run and infect the operating system.
• The tool then analyzes everything the malicious software did, and decodes the payload to identify the threat and create a "signature" to mitigate further exploitation.
• Technology in this space is new and relatively less known.
• Some tools are appliance-based: the decoding and analysis happens on the box. Other vendors provide the service in the cloud.
• Several manufacturers in the IDS/IPS and NGFW technology areas have made significant progress in providing APT detection and mitigation, both on the box and in the cloud.

DNS Resolution
• DNS resolution can make for easy exploitation if there is no control over where the mapping information is obtained.
• Hosts are pointed to maliciously controlled Internet servers by manipulating DNS information.
• The method also relies on compromised or purpose-built DNS servers on the Internet, allowing malware writers to make up their own unique, and sometimes inconspicuous, domain names.

DNS Zone Transfer
• A DNS zone transfer should be limited to trusted partners only, and limited to only the zones that need to be transferred.
• There may be internal and external DNS implementations, with records specific to the network areas they service.
  • The internal DNS server may have records for all internal hosts and services, while a DNS server in the DMZ may only have records for DMZ services.
• It is critical to keep the records uncontaminated by other zones.
• Specifically, TXT records may give away too much information, which can be used in a malicious manner against the enterprise.

DNSSEC
• The most prevalent DNS attack is DNS poisoning, where DNS information on the Internet is poisoned with false information, allowing attackers to direct clients to whatever IP address they desire.
• Security extensions have been added to the DNS protocol by the Internet Engineering Task Force (IETF): the DNS Security (DNSSEC) specification provides security for specific information components of the DNS protocol, in an effort to provide authenticity to the DNS information.
• The importance of DNSSEC is that it is intended to give the recipient DNS server confidence in the source of the DNS records or resolver data that it receives.
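The signed-record behavior can be observed directly. A sketch using the third-party dnspython package (install with pip install dnspython); the domain and resolver address are illustrative:

    # Ask for DNSSEC material alongside the A records (dnspython, third-party).
    import dns.message
    import dns.query
    import dns.rdatatype

    query = dns.message.make_query("example.com.", dns.rdatatype.A,
                                   want_dnssec=True)
    response = dns.query.udp(query, "8.8.8.8", timeout=5)

    # A signed zone returns RRSIG records with the answer; their presence
    # (validated against the zone's DNSKEY) is what gives the resolver
    # confidence in the answer's origin.
    for rrset in response.answer:
        print(rrset)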
Email Service Security
• Email service is a critical business function.
• With the increased growth and acceptance of cloud-based services, e-mail is among the first services to be moved.
• Some enterprises have already moved their e-mail implementation to the cloud.
  • This enables lower cost and as-a-service implementation.
  • However, enterprises then have less control over email security.

Spam Filtering
• E-mail is one of the most popular methods to spread malware, or to lead users to malware hosted on the Internet.
• Methods to protect the enterprise from spam include cloud-based and local spam filtering at the network layer, and host-based solutions at the client.
• Spam filtering in the cloud works by configuring the DNS mail record (MX) to identify the service provider's e-mail servers.
  • This configuration forces all e-mails destined for addresses owned by the enterprise through the spam filtering systems, before forwarding to the final enterprise servers and user mailboxes.
  • Outbound mail from the enterprise takes the normal path to the destination as configured, using DNS to find the destination domain's email server IP address.
• Advantages:
  • Zero or limited administration of the solution
  • Reduction in spam traffic
  • Reduction in malware and other threats
• Disadvantages:
  • Significant cost, depending on the service fee structure
  • Lack of visibility into, and control of, the filters
  • A service failure means no email, or unwanted delays
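The MX redirection described above is easy to observe. A small sketch with the third-party dnspython package; the domain and provider host names are illustrative:

    # With cloud filtering, the domain's MX records point at the provider,
    # not at the enterprise's own mail servers. Names are illustrative.
    import dns.resolver

    for mx in sorted(dns.resolver.resolve("example.com", "MX"),
                     key=lambda r: r.preference):
        print(mx.preference, mx.exchange)
    # e.g. 10 mx1.filtering-provider.net.  <- all inbound mail is forced
    #      20 mx2.filtering-provider.net.     through the spam filter first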
Local Spam Filtering
• Only an option when the enterprise is not using a web-hosted/cloud-based email solution.
  • With web-based e-mail hosting, the SSL connection exists from the user's browser or email client all the way to the hosted e-mail servers.
  • SSL decryption could be possible, but the overhead and privacy implications should be weighed carefully: decrypting SSL by presenting a false certificate in order to snoop breaks the SSL trust model and is considered a man-in-the-middle attack.
• Advantages:
  • More control over the configuration of filters
  • The vendor continuously updates the appliance with new block list updates and signatures
  • The enterprise also owns the DNS infrastructure that tells other e-mail systems where to send e-mail; in the event of an appliance failure, e-mails can be routed around the failure using DNS, to maintain the e-mail service.
• Disadvantages:
  • Technically a debatable solution if a web-based email solution is used.

File Transfer Service
• File transfer is often a necessity to facilitate business operations.
• Viable protocols and methods include FTP, SFTP, FTPS, SSH, and SSL, with many more proprietary options available too.
• A method to ensure secure communication, and the ability to control what is transferred and to whom, is to implement an intermediary transfer host.
• The solution should also require authentication, and the user list should be audited regularly for both voluntarily and involuntarily terminated employees.

User Authentication
• For SSH, SFTP, and other such protocols, there are two methods of authentication: user credentials and keys.
• The enterprise configures the users that can access the service either locally or using directory services, such as Windows Active Directory. Security implications involved:
  • Local accounts are stored on the server itself, which may leave them vulnerable to compromise.
  • The system administrator will also have to manually manage user credentials on each and every system configured.
  • For systems that rely on a central user directory, the implementation must be thought out to ensure that a compromise of the system does not lead to a compromise of the internal user directory.
• Authentication via Simple Public Key Infrastructure (SPKI): a private-public key combination can be used for authenticating systems, applications, and users.
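Key-based authentication against an intermediary transfer host can be sketched with the third-party paramiko library; the host name, user, key path, and file names are illustrative assumptions:

    # SFTP with key-based authentication instead of a password
    # (paramiko is third-party: pip install paramiko). Values illustrative.
    import paramiko

    client = paramiko.SSHClient()
    # In production, pre-load known host keys rather than auto-accepting.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("transfer.example.com", username="batchuser",
                   key_filename="/home/batchuser/.ssh/id_ed25519")

    sftp = client.open_sftp()
    sftp.put("report.csv", "/inbound/report.csv")  # controlled drop location
    sftp.close()
    client.close()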
Securing Internet Access Service
• Internal user access to the Internet is probably deemed an even more critical service than e-mail.
• To provide some level of security and monitoring, the use of Internet proxy technology is required.
• There are standalone proxy solutions, and the aforementioned NGFWs also offer this feature, which allows for URL filtering based on category and known malicious destinations.

Securing Websites
• Internet-accessible websites are the most targeted assets on the Internet, due to common web application security issues such as SQL injection.
• There are several approaches to securing websites, but it is truly a layered security approach, requiring:
  • Secure coding
  • Firewalls
  • IPS
• Secure Coding: utilizing a secure software development lifecycle (SSDLC) is the best method to ensure that secure coding practices are being followed.
  • It provides a framework for how the coding process is to be completed, with testing and validation of the code.
  • The process is iterative for each new instance of code or modified portion of code.
  • Vulnerabilities identified should be documented and tracked through remediation, within a centralized vulnerability or defect management solution.
• NGFW: an NGFW can be leveraged to protect Internet-facing enterprise websites and applications.
  • An NGFW can also be used for inspecting and mitigating illegitimate traffic, such as denial of service attacks, before it reaches the web servers.
• IPS:
  • Intrusion prevention may also be implemented at the network perimeter, to mitigate known attack patterns for web applications.
  • IPS can provide excellent denial of service protection and block exploit callbacks.
• Web Application Firewalls:
  • Designed to specifically mitigate attacks against web applications, through pattern and behavioral analysis.
  • Advanced web application firewalls use another component at the database tier of the web applications. Benefits include:
    • The ability to determine whether a detected threat warrants further investigation, i.e., whether the threat was able to interact with the database or not.
    • Attacks that do get past the first layer of the web application firewall can be mitigated at the database tier of the network architecture.
    • Enforcement of security controls for database access initiated not only by the web application, but also by database administrators.

Network Segmentation
• Before any network segmentation can occur, critical data, processes, applications, and systems must be identified.
• Network segmentation using a firewall is the simplest network-based security control.
• Alongside it, highly recommended security monitoring tools, such as Security Information and Event Management (SIEM) and File Integrity Monitoring (FIM), should be implemented, to ensure that in the event of an attack there is monitoring for early detection and timely incident response.

Securing the Systems
Processes and methods covered:
• System classification
• File integrity monitoring (FIM)
• Application whitelisting
• Host-based intrusion prevention system (HIPS)
• Host firewalls
• System protection using anti-virus
• User account management

System Classification
• When securing the enterprise network, network segmentation plays a key role:
  • It helps place systems of high value and criticality in segmented areas of the network.
  • To identify these systems, it is necessary to understand the important business processes and applications; as with any classification model, there should be tiers based on criticality.
• The system labels applied will serve as an input to the overall security architecture.

System Management
• System patching may be based on:
  • The criticality of the system,
  • The severity of the vulnerability, or
  • The impact of an unpatched software package.
• System classification plays a significant role in the patching cycle of systems, and must be integrated into the patch and vulnerability management processes.

File Integrity Monitoring
• One of the methods used to detect changes to a known filesystem's files and, in the case of Windows, the registry.
• To detect these changes, FIM tools create a hash database of the known good versions of files in each filesystem location.
• The tool can then scan the filesystem periodically or in real time, looking for any changes to the installation, including known files and directories.
• Manual mode FIM:
  • Advantages:
    • Least taxing on the system, because the scans only run when the console initiates them, either ad hoc or on a schedule.
    • IT knows when the system may have higher memory and processor utilization, so ideally it will not affect business operations.
  • Disadvantages:
    • A caveat to this solution is that changes can go undetected for longer periods of time, depending on how often scheduled scans are run.
• Real-time FIM:
  • Advantages:
    • All add, delete, and modification actions are detected in real time, allowing almost immediate review and remediation.
  • Disadvantages:
    • The constant running of the tool may be taxing on a system that is already loaded with several agents for various purposes.
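The hash-database approach can be sketched in a few lines of Python; the monitored path and in-memory baseline format are illustrative assumptions:

    # Minimal FIM sketch: build a SHA-256 baseline, then re-scan and report
    # added, removed, or modified files. Path is illustrative.
    import hashlib
    from pathlib import Path

    def snapshot(root: str) -> dict:
        return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
                for p in Path(root).rglob("*") if p.is_file()}

    baseline = snapshot("/etc")        # taken while the system is known-good
    # ... later, on a schedule (manual mode) or on change events (real time):
    current = snapshot("/etc")

    added    = current.keys() - baseline.keys()
    removed  = baseline.keys() - current.keys()
    modified = {f for f in baseline.keys() & current.keys()
                if baseline[f] != current[f]}
    print(added, removed, modified)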
Application Whitelisting
• A method to control which applications have permission to run on a system.
• If malicious software is installed on the system, it will not be able to execute.
• The tool can also prevent unapproved application installs:
  • If the application is not preapproved, the installation can be blocked.
  • If the installation succeeds anyway, the tool can block the application from running.

HIPS
• A host-based intrusion prevention system (HIPS) is very similar in concept to network intrusion prevention.
• HIPS leverages being installed on the system it is protecting, so it has additional awareness of running applications and services.
• Host-based intrusion detection uses the same types of detection methods as its network-based counterpart.
• The primary method is signature-based detection, as this is the easiest method to implement on a host without taxing the operating system with true behavioral analysis.

Host Firewall
• A host firewall can be a great method to filter traffic to and from the system.
• The firewall should be considered another layer of defense against intrusion attempts on applications, services, and the host itself.
• The solution is similar to application whitelisting with regard to the requirement of knowing what applications are running and how they must communicate.
• Some applications open random ports or have extremely large ranges of ports. Some host firewalls are able to allow dynamic port use, alleviating the need to go through the exercise of analyzing the application.

Anti-virus
• Anti-virus is considered a necessary security mechanism for the low-hanging fruit: predictable malware.
• Anti-virus primarily uses two methods to detect malware:
  • Signature: this method looks for known patterns of malware.
  • Heuristics: in this method, the behavior of potential malware is analyzed for malicious actions.
• Typically, anti-virus solutions will install an agent on the endpoint and run scans continuously; any new file introduced is scanned immediately.

User Account Management (UAM)
• Accounts on a system represent some level of access, and may be the door in for malicious activity.
• Review of system accounts should be in accordance with the system classification and other security policies.
• User roles and permissions:
  • There is a need to properly define system users and the roles required to perform their tasks.
  • This applies both to server systems and to end-user systems.
• UAM account auditing:
  • To detect rogue accounts on systems, the enterprise should perform user account auditing across all systems on a regular basis.
  • Accounts should be disabled or deleted at the time of termination, as part of a formal process.
• Policy enforcement:
  • Enforcement may come in the form of an implemented tool, but it may also come from the monitoring of user activity on systems.

Data Classification Process
• Involves two steps: identification and classification of enterprise data.
• Classification is done based on:
  • Importance, and
  • Impact potential.
• There are many data types that must exist for the business to operationally function.
• Data can be located in multiple places, both internal and external to the enterprise network, including on employer-owned and employee-owned assets.
• Data can be at rest, in use, or in transit.
• Classification is the act of assigning a label to identified data types, indicating the required protection mechanisms; it is driven by business risk and data value.

Data Loss Prevention
• Data Loss Prevention (DLP) is a tool that can enforce protection of data that has been classified.
• The primary purpose of DLP is to protect against the unauthorized exfiltration of enterprise data.
• DLP solutions can:
  • Help find data in various locations within the enterprise,
  • Enforce encryption, in some cases,
  • Block insecure transmission, and
  • Block unauthorized copying and storing of data, based upon data classification.

Data in Storage
• Data can be stored in network shares, databases, document repositories, online storage, and portable storage devices.
• Most DLP solutions have the ability to scan data stores, and also provide an agent that can be deployed on end systems to monitor and prevent unauthorized actions on classified enterprise data.
• Using DLP, a discovery scan can be initiated to identify data locations.
• It can also be used in an ongoing scheduled scan, to continuously monitor the data stores for data that should or should not reside in the data location.

Data in Use
• Data in use is data that is actively processed within an application, process, memory, or other location, temporarily, for the duration of a function or transaction.
• This is enterprise data not stored long term, only long enough to perform a function or transaction.
• Data in use can be monitored by an agent installed on the end system, to permit only certain uses of the data and deny actions such as storing the data locally or sending the data via e-mail or another communication method.
• Implementation on employee-owned devices introduces privacy issues, because personal transactions such as online banking or medical record lookups may be detected, with details of the transaction stored in the DLP database for review.

Data in Transit
• Data in transit is data that is being moved from one system to another, either locally or remotely, such as via file transfer systems, e-mail, and web applications.
• Various DLP solutions have accounted for this and are capable of intercepting and decrypting communications to look for classified data.
• The focus of DLP for data in transit is specifically data leaving the enterprise through egress connections.
• DLP Network: the simplest solution to implement in an enterprise environment, and also the quickest method to determine what data is leaving the network in an insecure manner.
• DLP Email and Web: email and Internet access are the most commonly used enterprise services; these solutions focus on loss of enterprise confidential data via email or the web.
• DLP Discover:
  • A tool that can scan network shares, document repositories, databases, and other data at rest.
  • Requires an account with permissions to be configured, to allow the scans to open the data stores and inspect for policy matches.
• DLP Endpoint:
  • An agent-based technology that must be installed on every endpoint; it sits closest to the end user, where human interaction is highest and, in theory, where the greatest risk is introduced to enterprise data.
  • Requires a significant deployment of agents that have to be installed and managed, with the output operationalized for meaningful and actionable reporting.

Data Protection Methods
• Data protection, using different methods:
  • Encryption and hashing
 Tokenization  While the solution does provide some protection, it is not at
 Data Masking the same level as tokenization, encryption, or hashing.
 Authorization
Authorization
Encryption and Hashing  Granting permissions based on who or what the authorized is:
 Both encryption and hashing are typically what is thought of  An important part of the enterprise data protection and
when data protection is discussed whether in storage, transit, security program.
or in use by applications  This facet of data security highlights the defense in depth
 Mostly for data in storage or in transit. mantra of information security.
 Encryption is the method of mathematically generating a
cipher text version of clear text data to render it IoT Security: Involved Domains
unrecognizable.  Device Security
 There are two general types of encryption – symmetric and  Securing the IoT Device
asymmetric  Challenges: Limited System Resources
 Hashing is simpler, but only supports data integrity.  Network Security
 Encryption can happen at the location of storage,prior to  Security the network connecting IoT Devices to
storage, or during the process of storing.  Backend Systems
 Online encryption is in effect while data is accessible  Challenges: Wider range of devices + communication
 Offline is when data is not directly accessible such as on protocols + standards
backup tapes, turned off systems, etc.  Cloud/ Back-end Systems Security
 Data stored in databases can be encrypted via two methods  Securing the backend Applications from attacks
 First method utilizes the built-in encryption capabilities  Firewalls, Security Gateways, IDS/IPS
of the database itself to protect the stored data.  Mutual Authentication
Beneficial when attempting to make encryption  Device(s) → User(s)
invisible to the applications and processes accessing  Passwords, PINs, Multi-factor, Digital Certificates
the data.  Encryption
 Second method uses encrypting at the application and  Data Integrity for data at rest and in transit
process layer.  Strong Key Management Processes
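The integrity-only nature of hashing versus the reversibility of encryption can be shown directly, using Python's standard hashlib plus the third-party cryptography package for the symmetric example (key handling here is illustrative):

    import hashlib
    from cryptography.fernet import Fernet  # third-party: pip install cryptography

    data = b"4111 1111 1111 1111"

    # Hashing: one-way; any change is detectable, but the input cannot be
    # recovered from the digest (integrity only).
    digest = hashlib.sha256(data).hexdigest()

    # Symmetric encryption: reversible for anyone holding the key
    # (confidentiality); Fernet is an authenticated symmetric scheme.
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(data)
    assert Fernet(key).decrypt(ciphertext) == data
    print(digest)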

Application Encryption
• The encryption of the data occurs in the application, not the database.
• Data arrives at the database already encrypted.
• All applications and processes using this data need a method to decrypt and encrypt the data, typically a shared private key.
• Benefits:
  • Database performance gains from not using encryption at the database tier.
  • The data is always encrypted in the databases.

Tokenization
• Tokenization is a method that assigns a substitute value to a segment of data, so that the initial sensitive data value no longer exists, for use in applications and storage in the database.
• Processes, systems, and applications are able to process the token value as they would process the sensitive data.
• However, this method ensures that the token has no real value to anyone or anything outside of the process.
• A database is used to map the original data to the token value.
• A common use for tokenization is in the retail industry, for the replacement of credit card data within the network and its assets.
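A minimal tokenization sketch following the description above; an in-memory dict stands in for the mapping database, and the token format is an illustrative assumption:

    # Toy token vault: the sensitive value is swapped for a random token and
    # only the vault can map it back.
    import secrets

    class TokenVault:
        def __init__(self):
            self._token_to_value = {}

        def tokenize(self, value: str) -> str:
            token = "tok_" + secrets.token_hex(8)
            self._token_to_value[token] = value
            return token

        def detokenize(self, token: str) -> str:
            return self._token_to_value[token]

    vault = TokenVault()
    token = vault.tokenize("4111111111111111")
    # Downstream systems store and process only `token`; it has no value
    # outside this process, which is the point of tokenization.
    print(token, vault.detokenize(token))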
Data Masking
• This method is commonly used in processes where there is human interaction.
• A similar effect can be achieved with database views and specialized encryption solutions, to enforce least privilege and access only on a need-to-know basis.
• Advantages:
  • Relative ease of implementation.
• Disadvantages:
  • Masking as used in a database implementation is simply a view; the original data remains intact underneath and is viewable by database administrators.
  • While the solution does provide some protection, it is not at the same level as tokenization, encryption, or hashing.
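Masking for human-facing processes can be as simple as the sketch below; a view-based variant would apply the same transformation in the database layer (the field format is illustrative):

    # Masking sketch: show only the last four digits to the human operator.
    # The unmasked value still exists at the source, which is the
    # weakness noted above.
    def mask_pan(pan: str, visible: int = 4) -> str:
        return "*" * (len(pan) - visible) + pan[-visible:]

    print(mask_pan("4111111111111111"))  # ************1111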

Authorization
• Granting permissions based on who or what the authorized party is:
  • An important part of the enterprise data protection and security program.
  • This facet of data security highlights the defense-in-depth mantra of information security.

IoT Security: Involved Domains
• Device Security
  • Securing the IoT device
  • Challenges: limited system resources
• Network Security
  • Securing the network connecting IoT devices to backend systems
  • Challenges: a wider range of devices, communication protocols, and standards
• Cloud/Back-end Systems Security
  • Securing the backend applications from attacks
  • Firewalls, security gateways, IDS/IPS
• Mutual Authentication
  • Device(s) → User(s)
  • Passwords, PINs, multi-factor authentication, digital certificates
• Encryption
  • Data integrity for data at rest and in transit
  • Strong key management processes

IoT Layers
• Sensing Layer: in designing the sensing layer of an IoT, the main concerns are:
  • Cost, size, resource, and energy consumption: the things may be equipped with sensing devices such as RFID tags, sensors, actuators, etc., which should be designed to minimize required resources as well as cost.
  • Deployment: the IoT end-nodes (such as RFID readers, tags, sensors, etc.) can be deployed one-time, or in incremental or random ways, depending on application requirements.
  • Heterogeneity: a variety of things and hybrid networks make the IoT very heterogeneous.
  • Communication: the IoT end-nodes should be designed in such a way that they are able to communicate with each other.
  • Network: the IoT involves hybrid networks, such as Wireless Sensor Networks (WSNs), WMNs, and supervisory control and data acquisition (SCADA) systems.
• Network Layer: the security requirements at the network layer involve:
  • Overall security requirements, including confidentiality, integrity, privacy protection, authentication, group authentication, key protection, availability, etc.
  • Privacy leakage: some IoT devices are physically located in untrusted places, which creates the risk of attackers physically extracting private information such as user identification.
  • Communication security: the integrity and confidentiality of signaling in IoT communications.
  • Over-connection: an over-connected IoT may run the risk of losing user control. Two security concerns follow: (1) DoS attacks, as the bandwidth required by signaling authentication can cause network congestion and further cause DoS; (2) key security, as in an over-connected network the key operations can cause heavy consumption of network resources.
  • MITM attack: the attacker makes independent connections with the victims and relays messages between them, making them believe that they are talking directly to each other over a private connection, when in fact the attacker controls the entire conversation.
  • Fake network messages: attackers could create fake signaling to isolate or mis-operate devices in the IoT.
• Service Layer:
  • Service discovery: finds infrastructure that can provide the required service and information in an effective way.
  • Service composition: enables the combination of, and interaction among, connected things. Discovery exploits the relationships of things to find the desired service, and service composition schedules or recreates more suitable services to obtain the most reliable ones.
  • Trustworthiness management: aims to understand which devices, and which information provided by other services, can be trusted.
  • Service APIs: provide the interactions between services required by users.
• Interface Layer:
  • Remote safe configuration, software downloading and updating, security patches, administrator authentication, a unified security platform, etc.
  • Security requirements for communications between layers: integrity and confidentiality for transmission between layers, cross-layer authentication and authorization, sensitive information isolation, etc.
Sensing Layer Security
• This layer of the framework is characterized as the intersection of people, places, and things.
• These things can be simple devices, like connected thermometers and light bulbs, or complex devices, such as medical instruments and manufacturing equipment.
• For security in IoT to be fully realized, it must be designed and built into the devices themselves.
  • This means that IoT devices must be able to prove their identity to maintain authenticity, sign and encrypt their data to maintain integrity, and limit locally stored data to protect privacy.
  • The security model for devices must be strict enough to prevent unauthorized use, but flexible enough to support secure, ad hoc interactions with people and other devices on a temporary basis.
• Physical security is another important aspect for devices. This creates the need to design tamper resistance into devices, so that it is difficult to extract sensitive information like personal data, cryptographic keys, or credentials.
• Lastly, devices must support software updates to patch vulnerabilities and exploits.

Network Layer Security
• This layer of the IoT framework represents the connectivity and messaging between things and cloud services.
• Communications in the IoT are usually over a combination of private and public networks, so securing the traffic is obviously important.
• The primary difficulty arises when you consider the challenges of cryptography on devices with constrained resources.
  • An Arduino Uno takes up to 3 minutes to encrypt a test payload when using RSA with 1024-bit keys.
  • However, an elliptic curve digital signature algorithm with security comparable to that RSA key length can sign the same payload in 0.3 s.
  • This indicates that device manufacturers cannot use resource constraints as an excuse to avoid security in their products.
• Another security consideration for the network layer is that many IoT devices communicate over protocols other than WiFi.
  • This means the IoT gateway is responsible for maintaining confidentiality, integrity, and availability while translating between different wireless protocols.
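The constrained-device comparison above is why elliptic-curve signatures are favored. A sketch with the third-party cryptography package; the curve choice and payload are illustrative assumptions:

    # Signing a telemetry payload with ECDSA P-256 (third-party
    # `cryptography` package: pip install cryptography).
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    private_key = ec.generate_private_key(ec.SECP256R1())
    payload = b'{"device":"sensor-42","temp_c":21.5}'   # illustrative reading

    signature = private_key.sign(payload, ec.ECDSA(hashes.SHA256()))

    # The gateway or cloud service verifies with the device's public key;
    # verify() raises InvalidSignature if the payload was tampered with.
    private_key.public_key().verify(signature, payload,
                                    ec.ECDSA(hashes.SHA256()))
    print("signature OK,", len(signature), "bytes")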

Service Layer Security
• This layer of the framework represents the IoT management system, and is responsible for onboarding devices and users, applying policies and rules, and orchestrating automation across devices.
• Access control measures to manage user and device identity, and the actions they are authorized to take, are critical at this layer.
• To achieve nonrepudiation, it is also important to maintain an audit trail of changes made by each user and device, so that it is impossible to refute actions taken in the system.
• Big data challenges:
  • Providing clear data use notification, so that customers have visibility into, and fine-grained control of, the data sent to the cloud service.
  • Keeping customer data stored in the cloud service segregated and/or encrypted with customer-provided keys; when analyzing data in aggregate across customers, the data should be anonymized.

NIST GUIDANCE ON INTERNET OF THINGS
• NISTIR 8259: Foundational Cybersecurity Activities for IoT Device Manufacturers.
• The guidance provides six clear steps that manufacturers should follow, separated into two phases:
  • Pre-market: before the device is sold
  • Post-market: after the device is sold
• Four pre-market activities (1-4) and two post-market activities (5-6) for IoT manufacturers to address cybersecurity in IoT devices:
  • Activity 1: Identify expected customers and define expected use cases.
  • Activity 2: Research customer cybersecurity goals.
  • Activity 3: Determine how to address customers' goals.
  • Activity 4: Plan for adequate support of customers' goals.
  • Activity 5: Define approaches for communication to customers.
  • Activity 6: Decide what, and how, to communicate to customers.

IoT Device Vulnerabilities
• Firmware vulnerability exploits

11
 For the majority of IoT devices, the firmware is essentially the operating system or the software underneath the OS
 Most IoT firmware does not have as many security protections in place
 Often the vulnerabilities in the firmware cannot be patched.
 Credential-based attacks:
 IoT devices come with default administrator usernames and passwords
 These are well-known, or simple to guess, and often not very secure
 In some cases, these credentials cannot be reset
 Often, IoT device attacks occur simply because an attacker guesses the right credentials.
 On-path attacks (or Man-in-the-Middle attacks):
 IoT devices are particularly vulnerable to such attacks because many of them do not encrypt their communications by default
 On-path attackers position themselves between two parties that trust each other and intercept communications between the two
 MITM attacks can also happen by impersonation, where a malicious node sets up two sessions (with the device and the server), impersonating and relaying messages between them
 Physical hardware-based attacks:
 Many IoT devices, like IoT security cameras, stoplights, and fire alarms, are placed in more or less permanent positions
 An attacker having physical access to an IoT device's hardware can steal its data or take over the device
 They could do this by accessing programmatic interfaces left on the circuit board, such as JTAG and RS232 serial connectors
 Some microcontrollers may have disabled these interfaces, but could still allow direct reads from the attached memory chips if the attacker solders on new connection pins
 This approach would affect only one device at a time, but a physical attack could have a larger effect if the attacker gains information that enables them to compromise additional devices on the network.

Device Security
 Software and firmware updates:
 IoT devices need to be updated promptly when a vulnerability patch or software update is released
 Credential security:
 IoT device admin credentials should be updated if possible.
 It is best to avoid reusing credentials across multiple devices and applications — each device should have a unique password (see the provisioning sketch after this list)
 Device authentication:
 IoT devices connect to each other, to servers, and to various other networked devices. Every connected device needs to be authenticated to ensure they do not accept inputs or requests from unauthorized parties
 Encryption:
 Prevents on-path attacks.
 Encryption must be combined with authentication to prevent MITM attacks. Otherwise, the attacker could set up separate encrypted connections between one IoT device and another, and neither would be aware that their communications are being intercepted.
 Turning off unneeded features:
 Most IoT devices come with multiple features, some of which may go unused by the owner
 Even when features are not used, they may keep additional ports open on the device
 The more ports an Internet-connected device leaves open, the greater the attack surface — often attackers simply ping different ports on a device, looking for an opening.
 Turning off unnecessary device features will close these extra ports.
 DNS filtering:
 DNS filtering is the process of using the Domain Name System to block malicious websites
 Adding DNS filtering as a security measure to a network with IoT devices prevents those devices from reaching out to places on the Internet they should not (i.e. an attacker's domain).
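The unique-password guidance above can be illustrated by a provisioning step that mints a distinct random secret for each device. A minimal sketch using only the Python standard library; the device IDs and credential length are illustrative assumptions.

```python
# Hedged sketch: generate a unique, random credential per device at
# provisioning time, instead of shipping a shared factory default.
# 'device_ids' and the token length are illustrative assumptions.
import secrets

device_ids = ["cam-001", "cam-002", "thermostat-17"]

credentials = {dev: secrets.token_urlsafe(24) for dev in device_ids}

for dev, pwd in credentials.items():
    # In practice the secret would go to a secure element or vault,
    # never a plaintext log; printing here is only for the sketch.
    print(dev, pwd)
```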
IoT Security Framework
At the heart of the IoT Security Framework are the following key functions:
 Authentication
 Authorization
 Access Control

Authentication
At the heart of the framework is the authentication layer, used to provide and verify the identity information of an IoT entity.
 Device identifiers include RFID, a shared secret, X.509 certificates, the MAC address of the endpoint, or some type of immutable hardware-based root of trust
 Establishing identity through X.509 certificates provides a strong authentication system. However, in the IoT domain, many devices may not have enough memory to store a certificate or may not even have the required CPU power to execute the cryptographic operations of validating X.509 certificates
 There exist opportunities for further research in defining smaller-footprint credential types and less compute-intensive cryptographic constructs and authentication protocols (aka Lightweight Cryptography)

Authorization
 The second layer of this framework is authorization, which controls a device's access (to network services, back-end services, data, etc.)
 With authentication and authorization components, a trust relationship is established between IoT devices to exchange appropriate information.

Access Control
 Role-Based Access Control (or RBAC):
 Most existing authorization frameworks for computer networks and online services are role based
 First, the identity of the user is established and then his or her access privileges are determined from the user's role within an organization
 That applies to most existing network authorization systems and protocols (RADIUS, LDAP, IPSec, Kerberos, SSH)
 Rule-Based Access Control:
 An administrator may define rules that govern access to a resource
 Rules may be based on conditions, such as time of day and location
 Can work in conjunction with RBAC
 Attribute-Based Access Control (or ABAC):
 Attributes (e.g. age, location, etc.) are used to allow access.
 Users or devices need to prove their attributes.
 In ABAC, it is not mandatory to verify the identity of the user to establish his or her access privileges; it is sufficient that the user/device possesses the required attributes.
 Discretionary Access Control (or DAC):
 Owners or administrators of the protected system, data or resource set the policies defining who or what is authorized to access the resource
 Not a good method, since these policies are not centralized and are hard to scale
 Capabilities-Based Access Control (CBAC):
 CBAC is a security model that grants permissions to users or processes based on the possession of specific capabilities or tokens rather than their identity or attributes. In CBAC, access control decisions are determined by whether a subject (such as a user or a process) possesses the necessary capabilities to perform a particular action on a resource.

ACL-based Systems
 ACL = Access Control List
 A table that can tell the IoT system all access rights each user/application has to a particular IoT end node.
 The most common privileges include the ability to access or control an IoT device.
 Challenges with ACL-based systems:
 In many architectures, IoT devices operate as "servers", with clients connecting to them to fetch collected data.
 Server IP and port information is public knowledge => no security
 Minimum security is typically implemented using <username, password> → an embodiment of IoT ACL-based device systems
 The approach is not scalable as more users join or are revoked.
 The complexity of managing the ACL at the device can become a bottleneck
 A more scalable approach for IoT is to use "capabilities" for enabling "capability-based access"
 A capability is essentially a cryptographic key that gives access to some ability (e.g. to communicate with the device).

Implementation Methods
Lightweight Cryptography
 Lightweight cryptography is a cryptographic algorithm or protocol tailored for implementation in constrained environments including RFID tags, sensors, contactless smart cards, healthcare devices, and so on.
 Traditional cryptography is designed at the application layer without regard to the limitations of IoT devices, making it difficult to directly apply existing cryptographic primitives to IoT.
 Researchers have investigated a channel model using the "wiretap channel," in which a transceiver attempts to communicate reliably and securely with a legitimate receiver over a noisy channel, while its messages are being eavesdropped by a passive adversary through another noisy channel.
 Information-theoretic secure communication was introduced in 1949 by American mathematician Claude Shannon, one of the founders of classical information theory
 In Shannon's wiretap model, he assumed both the main and eavesdropper's channels to be noiseless.
 Wyner revisited this problem with relaxed assumptions, mainly:
 The noiseless communication assumption of Shannon was relaxed by assuming a possibly noisy main channel and an eavesdropper channel that is a noisy version of the signal received at the legitimate receiver.
 Wyner's results showed that positive secure rates of communication are achievable, under certain conditions of noise or interference in the channels.
 Secure communication without the need to share a secret key, now called the key-less security approach, suggested a new paradigm of secure communication protocols.
 That is, exploiting properties of the wireless medium (noise, interference or jamming) to satisfy the secrecy constraints.
 The key-less security approach can be used in wireless networks to securely exchange a shared-secret key between two communicating nodes, which can then be used for all subsequent communications

Transport Encryption
 TLS/SSL: Transport encryption is done using secure transport protocols such as TLS and SSL
 Both TLS and SSL are cryptographic protocols that provide communications security over a network
 TLS uses TCP and therefore does not encounter packet reordering and packet loss issues.
 Datagram Transport Layer Security (DTLS):
 DTLS is developed based on TLS, providing equivalent security services, such as confidentiality, authentication, and integrity protection.
 In DTLS, a handshake mechanism is designed to deal with packet loss, reordering, and retransmission.
 DTLS provides three types of authentication: no authentication, server authentication, and server-and-client authentication.
 Mutual TLS (mTLS): Mutual Transport Layer Security (mTLS) is a type of mutual authentication, in which both sides of a network connection authenticate each other.
 TLS is a protocol for verifying the server in a client-server connection; mTLS verifies both connected devices, instead of just one.
 mTLS is important for IoT security because it ensures only legitimate devices and servers can send commands or request data.
 It also encrypts all communications over the network so that attackers cannot intercept them.
 mTLS requires issuing TLS certificates to all authenticated devices and servers.
 A TLS certificate contains the device's public key and information about who issued the certificate.
 Showing a TLS certificate to initiate a network connection can be compared to a person showing their ID card to prove their identity.
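To make the mTLS requirement concrete, a server can refuse any client that does not present a certificate issued by the device CA. A minimal sketch with Python's standard ssl module; the file names and the port are illustrative assumptions.

```python
# Hedged sketch: a TLS server context that enforces mutual TLS.
# File paths and port 8883 (common for MQTT over TLS) are assumptions.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")
context.load_verify_locations(cafile="device-ca.pem")  # CA for device certs
context.verify_mode = ssl.CERT_REQUIRED  # reject clients without a valid cert

with socket.create_server(("0.0.0.0", 8883)) as sock:
    with context.wrap_socket(sock, server_side=True) as tls_sock:
        conn, addr = tls_sock.accept()  # handshake fails without a client cert
        print("authenticated device connected from", addr)
```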
Security Analytics
 This method can be used for the detection of compromised devices
 A security analytics infrastructure can significantly reduce vulnerabilities and security issues related to the Internet of Things
 This requires collecting, compiling, and analyzing data from multiple IoT sources, combining it with threat intelligence, and sending it to the security operations center (SOC)
 Applies AI/ML practices to IoT security

Use of RPKs
Raw public keys (RPKs) can play a significant role in ensuring the security and integrity of communication between IoT devices and their associated networks. Here's how RPKs can be beneficial in IoT:
 Secure Device Authentication: RPKs enable IoT devices to authenticate themselves to the network securely. By provisioning each device's public key (rather than a full certificate), devices can prove their identity, ensuring that only authorized devices can connect to the network.
 Data Integrity and Confidentiality: RPKs help maintain the integrity and confidentiality of data exchanged between IoT devices and backend systems. Through cryptographic mechanisms, RPKs ensure that data remains secure during transmission, protecting it from unauthorized access or tampering.
 Protection Against Attacks: IoT networks are vulnerable to various attacks, including spoofing, man-in-the-middle attacks, and unauthorized access. RPKs can mitigate these threats by providing mechanisms for verifying the authenticity of communication between devices and for detecting and preventing malicious activities.
 Secure Device Management: RPKs facilitate secure device management by enabling secure communication between IoT devices and management platforms. This ensures that firmware updates, configuration changes, and other management tasks are performed securely, without introducing vulnerabilities into the IoT ecosystem.
 Enhanced Trustworthiness: By deploying RPKs, IoT deployments can enhance the overall trustworthiness of the ecosystem. Devices and networks can be verified to ensure that they are operating according to predefined security policies, reducing the risk of compromise or unauthorized access.
 Resilient IoT Infrastructure: RPKs help create a more resilient IoT infrastructure by protecting against common vulnerabilities and attacks. This resilience is crucial for ensuring the availability and reliability of IoT services, especially in critical applications such as healthcare, industrial automation, and smart cities.
Overall, RPKs provide a robust framework for securing IoT communication, protecting devices and networks from threats, and ensuring the integrity and confidentiality of data exchanged within IoT ecosystems. Their adoption can significantly enhance the security posture of IoT deployments, promoting trust and reliability in connected systems.
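In practice, raw-public-key authentication is often realized by pinning a fingerprint of each device's public key and comparing it on every connection. A hedged sketch using only the standard library; the pinned digest below is simply the SHA-256 of the toy key bytes used in the demo.

```python
# Hedged sketch: verify a peer's raw public key against a pinned
# SHA-256 fingerprint. The key bytes are a stand-in; in practice they
# would be the peer's encoded public key from the TLS/DTLS handshake.
import hashlib

# SHA-256 of the toy key b"test", so the demo below prints "peer accepted"
PINNED_FPR = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def key_matches_pin(raw_public_key: bytes) -> bool:
    # The fingerprint is the SHA-256 digest of the key's encoded bytes
    return hashlib.sha256(raw_public_key).hexdigest() == PINNED_FPR

peer_key = b"test"  # illustrative stand-in for the peer's key bytes
print("peer accepted" if key_matches_pin(peer_key) else "peer rejected")
```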
Social IoT (SIoT)
Social IoT (Internet of Things) refers to the integration of social networking concepts and features into IoT systems. Essentially, it involves leveraging the capabilities of IoT devices to interact and communicate with each other and with users through social media platforms or other social networking channels.

Here are a few key aspects of Social IoT:
 Enhanced Connectivity: Social IoT enables IoT devices to connect not only with each other but also with social media platforms, allowing for enhanced communication and collaboration between devices and users.
 Crowdsourcing Data: By integrating social networking features, Social IoT can leverage crowdsourcing to gather data from a large number of users. This data can then be analyzed to extract valuable insights and improve the functionality of IoT systems.
 Community Engagement: Social IoT facilitates community engagement by enabling users to interact with each other and share information related to IoT devices and applications. This can foster a sense of community among users with common interests or objectives.
 Personalization: By integrating social networking features, Social IoT can personalize the user experience based on social interactions, preferences, and behavior patterns. This allows for more tailored and relevant content and recommendations.
 Collaborative Problem Solving: Social IoT enables users to collaborate in solving problems or addressing challenges related to IoT devices or applications. This collaborative approach can lead to more innovative solutions and faster problem resolution.

Properties of Trust: the START Properties
 Subjective: Trust in OSNs can be subjective because it often depends on individual perceptions, experiences, and interactions within the online environment. What one user may consider trustworthy behavior or content, another user may not.
 Topic-Dependent: Trust in OSNs can vary depending on the topic or context of interaction. Users may trust certain individuals or sources more than others based on their expertise, credibility, or relevance to specific topics or interests.
 Asymmetric: Trust in OSNs can be asymmetric, meaning that trust levels may differ between users or between users and the platform itself. For example, users may trust their friends or connections more than strangers, and trust in the platform's privacy and security measures may vary among users.
 Risking Betrayal: Trust in OSNs involves the risk of betrayal, as users may share personal information, opinions, or content with others with the expectation of privacy or confidentiality. However, there's always a risk that this trust may be violated through data breaches, unauthorized access, or misuse of information.
 Time-Sensitive: Trust in OSNs can be time-sensitive, meaning that it may evolve or change over time based on ongoing interactions, experiences, and developments within the online environment. Users may gain or lose trust in individuals, content, or platforms based on their behavior, performance, or changes in circumstances.

Challenges:
 Heterogeneous networks
 Multi-vendor environments
 Multiple device types
 Resource considerations
 Storage
 Compute
 Variety of attacks
 On-off attacks, ballot-stuffing attacks, bad-mouthing attacks, Sybil attacks
Trust in Social or Service Networks
 Trust: Trust refers to the belief or confidence that users have in other users, service providers, or the platform itself within the network. Trust can be built through positive experiences, reliable interactions, and consistent delivery of promises or expectations. Trust influences users' willingness to engage, share information, transact, and collaborate within the network. Factors such as reputation, credibility, security measures, and past experiences contribute to the establishment and maintenance of trust within social or service networks.
 More interaction leads to more trust.
 Advantages: higher accuracy, dynamic updating.
 Disadvantages: requires analysis of high-volume traffic; impacted by changes in interaction patterns.
 Influence: Influence pertains to the ability of users or entities within the network to impact the opinions, behaviors, and decisions of others. In social networks, influence can be measured through metrics such as followers, likes, shares, retweets, and comments. Influential users or entities may have a significant reach and persuasive power, allowing them to shape discussions, trends, and perceptions within the network. Identifying influential individuals or sources can be valuable for targeting marketing campaigns, spreading messages, and driving user engagement and adoption.
 Recommendation: Recommendations involve suggesting or endorsing specific content, products, services, or actions to users based on their preferences, interests, or behaviors within the network. Recommendations can be personalized or algorithmically generated, leveraging user data, browsing history, social connections, and collaborative filtering techniques. Effective recommendations can enhance user experience, satisfaction, and retention by providing relevant and timely suggestions that align with users' needs and preferences. Recommendations also contribute to user engagement, discovery, and exploration within the network, fostering a sense of community and trust.
 Influence is the tool that triggers trust; recommendation is the method for propagating influence.
 Advantage: higher accuracy.
 Disadvantages: generates high traffic volume; requires more processing.

Interaction Trust Model Classification
Let's delve into each type of interaction-based trust model:
 Graph-Based Interaction Trust Model: In this model, trust relationships are represented as a graph where nodes represent entities (users, devices, services) and edges represent interactions or relationships between them.
Advantages:
 Scalability: Graph structures are highly scalable, making them suitable for modeling complex relationships in large-scale systems.
 Flexibility: Graph-based models can capture diverse types of interactions and relationships, allowing for a nuanced understanding of trust dynamics.
 Network Analysis: Graph-based models facilitate network analysis techniques, enabling the identification of influential nodes, communities, and patterns within the trust network.
Disadvantages:
 Complexity: Managing and analyzing large graphs can be computationally intensive and complex.
 Interpretability: Understanding trust relationships within a graph may be challenging, especially in networks with many nodes and edges.
 Vulnerability to Attacks: Graph-based models may be vulnerable to attacks such as Sybil attacks or edge manipulation, which can undermine trust assessments.

 Dynamic Interaction Trust Model: In this model, trust is assessed based on the ongoing interactions between entities, considering factors such as frequency, recency, and quality of interactions (see the scoring sketch after the hybrid model below).
Advantages:
 Real-Time Adaptability: Dynamic models can adapt to changes in behavior and relationships over time, providing more accurate and up-to-date trust assessments.
 Resilience: By continuously evaluating interactions, dynamic models can detect and respond to changes in trustworthiness more effectively.
 Personalization: Dynamic models can personalize trust assessments based on individual preferences and experiences.
Disadvantages:
 Computational Overhead: Constantly updating trust assessments based on real-time interactions can impose computational overhead, especially in systems with high transaction volumes.
 Algorithm Complexity: Designing effective algorithms for dynamic trust assessment can be complex, requiring careful consideration of factors such as trust decay rates and the weighting of interaction attributes.
 Data Requirements: Dynamic models rely on a continuous stream of interaction data, which may not always be readily available or reliable.

 Hybrid Interaction Trust Model: A hybrid model combines multiple approaches to trust assessment, leveraging the strengths of different methods to provide more robust and accurate trust evaluations.
Advantages:
 Comprehensive Evaluation: Hybrid models can incorporate a diverse range of trust factors, including reputation, behavior, direct interactions, and contextual information, leading to more comprehensive trust assessments.
 Robustness: By combining multiple trust assessment methods, hybrid models can mitigate the limitations of individual approaches and provide more resilient trust evaluations.
 Flexibility: Hybrid models can be tailored to specific use cases and system requirements, allowing for greater flexibility in trust assessment.
Disadvantages:
 Complexity: Integrating multiple trust assessment methods into a coherent framework can increase model complexity and implementation challenges.
 Data Integration: Hybrid models may require integrating data from disparate sources, which can be challenging due to differences in data formats, quality, and reliability.
 Algorithm Selection: Choosing the appropriate algorithms and weighting schemes for different trust factors in a hybrid model requires careful consideration and may involve trade-offs between competing objectives.
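To make the dynamic model concrete, here is the minimal scoring sketch referenced above: trust is an exponential moving average over interaction outcomes, decaying toward a neutral value between interactions. The weight, half-life and neutral value are illustrative assumptions, not a standard algorithm.

```python
# Hedged sketch of a dynamic interaction trust score: an exponential
# moving average over interaction outcomes, with time-based decay.
import time

class TrustScore:
    def __init__(self, alpha=0.3, half_life_s=3600.0):
        self.alpha = alpha              # weight of the newest interaction
        self.half_life_s = half_life_s  # trust decays toward neutral over time
        self.score = 0.5                # start neutral in [0, 1]
        self.last_update = time.time()

    def _decay(self):
        # Pull the score toward the neutral value 0.5 as time passes
        dt = time.time() - self.last_update
        self.score = 0.5 + (self.score - 0.5) * 0.5 ** (dt / self.half_life_s)

    def record(self, outcome: float):
        """outcome in [0, 1]: quality of the latest interaction."""
        self._decay()
        self.score = (1 - self.alpha) * self.score + self.alpha * outcome
        self.last_update = time.time()

peer = TrustScore()
for outcome in (1.0, 1.0, 0.0, 1.0):   # good, good, bad, good
    peer.record(outcome)
print(f"current trust: {peer.score:.2f}")
```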
 Rating Trust Models: In a rating model, trust is assessed based on explicit ratings or feedback provided by users about their experiences with other entities in the system. Users typically rate entities on predefined criteria such as reliability, competence, and integrity.
Advantages:
 Transparency: Rating models provide transparent feedback to users, enabling them to make informed decisions about whom to trust.
 User Empowerment: Users have direct input into the trust assessment process through their ratings, giving them a sense of control and ownership over their trust decisions.
 Accountability: Entities are incentivized to maintain high trust ratings to attract positive feedback from users, fostering accountability and trustworthiness.
Disadvantages:
 Bias and Manipulation: Rating systems may be susceptible to bias or manipulation, as users can artificially inflate or deflate ratings for strategic purposes.
 Limited Context: Ratings may not capture the full context of interactions or relationships between entities, leading to potentially biased or incomplete trust assessments.
 Cold Start Problem: New entities may struggle to establish trust ratings initially, as they lack a sufficient history of interactions to generate meaningful ratings.

 Opinion Model: In an opinion model, trust is assessed based on the opinions or recommendations of trusted individuals or sources within a community or network. Users rely on the judgments and experiences of others to inform their trust decisions.
Advantages:
 Social Validation: Opinions from trusted sources provide social validation and reassurance to users, helping them navigate complex trust decisions.
 Efficiency: Opinion models can accelerate trust assessment by leveraging the collective wisdom of the community, rather than relying solely on individual experiences or interactions.
 Expertise Recognition: Users can identify and trust opinion leaders or experts within a domain, enhancing the quality and relevance of trust recommendations.
Disadvantages:
 Dependency on Sources: Opinion models rely on the availability and credibility of trusted sources, which may not always be reliable or objective.
 Echo Chambers: Opinion models may reinforce existing biases or echo chambers within a community, leading to the amplification of certain opinions and the marginalization of others.
 Limited Diversity: Opinion models may overlook diverse perspectives and experiences, particularly if they disproportionately rely on a small subset of influential sources.

 Cross-Integrated Model: A cross-integrated model combines multiple trust assessment methods, such as ratings, opinions, and other factors, to generate more comprehensive and accurate trust evaluations. This model integrates data from diverse sources to provide a holistic view of trustworthiness.
Advantages:
 Comprehensive Evaluation: Cross-integrated models consider a wide range of trust factors, including ratings, opinions, behavioral data, and contextual information, leading to more comprehensive and accurate trust assessments.
 Robustness: By combining multiple trust assessment methods, cross-integrated models can mitigate the limitations of individual approaches and provide more resilient trust evaluations.
 Flexibility: Cross-integrated models can be customized to specific use cases and system requirements, allowing for greater flexibility in trust assessment.
Disadvantages:
 Complexity: Integrating multiple trust assessment methods into a coherent framework can increase model complexity and implementation challenges.
 Data Integration: Cross-integrated models may require integrating data from disparate sources, which can be challenging due to differences in data formats, quality, and reliability.
 Algorithm Selection: Choosing the appropriate algorithms and weighting schemes for different trust factors in a cross-integrated model requires careful consideration and may involve trade-offs between competing objectives.

Attack Scenarios for SIoT Trust Models
 Slandering (or Bad-Mouthing) Attack: In a slandering attack, malicious entities deliberately spread false or negative information about other entities in the system to undermine their reputation and trustworthiness. This type of attack aims to discredit targeted entities and manipulate the trust assessment process.
 Example: In an online marketplace, a seller may engage in slandering by posting fake negative reviews about competitors to deter customers from purchasing their products.
 Impact: Slandering attacks can erode trust in the system by misleading users and damaging the reputation of targeted entities. They can also disrupt fair competition and undermine the integrity of trust mechanisms.

 Sybil Attack: A Sybil attack involves a malicious entity creating multiple fake identities (Sybil nodes) to gain disproportionately high influence or control over a network. These fake identities are used to manipulate trust mechanisms, such as reputation systems or voting processes, by artificially inflating the attacker's perceived trustworthiness.
 Example: In a peer-to-peer network, a single malicious user creates multiple fake accounts to control a significant portion of the network's resources or influence the selection of specific peers for interactions.
 Impact: Sybil attacks can compromise the security and fairness of decentralized systems by allowing attackers to manipulate trust mechanisms and gain undue advantages. They can also undermine the accuracy and reliability of trust assessments by introducing fake or biased information. (A short numeric illustration follows below.)
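A small numeric illustration of the Sybil and bad-mouthing scenarios above (the ballot-stuffing attack described next skews aggregates the same way): a handful of fake identities drags a naive mean rating far more than a robust aggregate such as the median. All values are invented for the example.

```python
# Hedged sketch: a few Sybil "bad-mouthing" votes shift a naive mean
# far more than the median; the ratings are illustrative.
from statistics import mean, median

honest = [4.5, 4.0, 5.0, 4.5, 4.0, 4.5]   # genuine user ratings (out of 5)
sybil = [1.0] * 4                         # bad-mouthing votes from fake accounts

ratings = honest + sybil
print(f"mean:   {mean(ratings):.2f}")     # dragged down by the attack (~3.05)
print(f"median: {median(ratings):.2f}")   # barely moves (4.25 vs. 4.5)
```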
 On-Off Attack: An On-Off attack, also known as a flip-flop attack, involves a malicious entity repeatedly alternating between cooperative and non-cooperative behaviors to deceive other entities and manipulate their trust. The attacker switches between trustworthy and untrustworthy states strategically to exploit trust mechanisms or gain unfair advantages.
 Example: In a collaborative online platform, a user may intermittently contribute valuable insights and then deliberately provide misleading or harmful information to confuse other participants and manipulate their trust.
 Impact: On-Off attacks can disrupt trust relationships and undermine the stability and effectiveness of collaborative systems by creating uncertainty and mistrust among participants. They can also exploit vulnerabilities in trust mechanisms through the unpredictability of the attacker's behavior.

 Ballot Stuffing: Ballot stuffing is a form of manipulation in voting systems where malicious entities fraudulently cast multiple votes or inflate the number of votes for a particular candidate or outcome. This attack aims to skew the results of voting processes by artificially increasing the apparent support for a chosen option.
 Example: In an online poll or voting system, a malicious user may use automated scripts or fake identities to cast numerous votes in favor of a specific candidate or proposal, distorting the outcome of the vote.
 Impact: Ballot stuffing attacks can undermine the integrity and fairness of voting systems by invalidating the principle of one-person-one-vote and distorting the democratic process. They can also erode trust in the legitimacy of election results and compromise the credibility of decision-making mechanisms.

IoT Forensics

Cyber Forensics
Cyber forensics is the application of investigation and analysis techniques to gather and preserve evidence from a particular computing device in a way that is suitable for presentation in a court of law.

Digital Forensic Science
The use of scientifically derived and proven methods toward the preservation, collection, validation, identification, analysis, interpretation, documentation and presentation of digital evidence derived from digital sources for the purpose of facilitating or furthering the reconstruction of events found to be criminal, or helping to anticipate unauthorized actions shown to be disruptive to planned operations.

Digital Forensic Process

Computer Forensics:
 Focused on computing devices
 Requires understanding of the boot process, file systems, registry/configuration files, OS functions, etc.
 Static and live acquisition of data.

Network Forensics:
 Systematic tracking of incoming and outgoing traffic to ascertain how an attack was carried out
 Determine the cause of abnormal traffic (internal bug, attackers)
 Live acquisitions are especially useful

Data Storage Devices
 Requires understanding of data storage devices for data acquisition
 SCSI disks, IDE/EIDE disks, SATA drives
 CD, CD-R, CD-RW, DVD
 RAID systems (RAID 0, 1, 2, 3, 4, 5, 6, 10)

Mobile Device Forensics
 A wealth of information on cell phones/smartphones
 Crimes targeting mobile devices
 Requires understanding of mobile device organization, OS, file system and storage system.

Cloud Forensics:
 Combines cloud computing with digital forensics
 Requires investigators to work with multiple computing assets, such as virtual and physical servers, networks, storage devices, applications, and much more.

IoT Forensics:
 IoT is a combination of many technology zones: Device, Network and Cloud
 IoT Forensics thus covers: cloud forensics, network forensics and device forensics.
 Evidence could be from home appliances, cars, tag readers, sensor nodes, medical implants in humans or animals, or other IoT devices.

IoT Forensics: Issues
Traditional Forensics vs IoT Forensics
There are several aspects of difference and similarity between traditional and IoT forensics:
 In terms of evidence sources, traditional evidence could be computers, mobile devices, servers or gateways. In IoT forensics, the evidence could be home appliances, cars, tag readers, sensor nodes, medical implants in humans or animals, or other IoT devices.
 In terms of jurisdiction and ownership, there are no differences; it could be individuals, groups, companies, governments, etc.
 In terms of evidence data types, IoT data could be in any possible format, including a proprietary format for a particular vendor. In traditional forensics, however, data types are mostly electronic documents or standard file formats.
 In terms of networks, the network boundaries are not as clear as in traditional networks; the boundary lines are increasingly blurry.

IoT Forensics
 IoT technology is a combination of many technology zones: the IoT zone, the Network zone and the Cloud zone.
 These zones can be the source of IoT digital evidence.
 Evidence can be collected from a smart IoT device or a sensor, from an internal network such as a firewall or a router, or from outside networks such as the Cloud or an application.
 Based on these zones, IoT forensics covers three aspects: cloud forensics, network forensics and device-level forensics.
 Most IoT devices have the ability to (directly or indirectly) connect through applications to share their resources in the Cloud, with all the valuable data that is stored in the Cloud → Cloud Forensics.
 There are different kinds of networks that IoT devices use to send and receive data: home networks, industrial networks, LANs, MANs and WANs. For instance, if an incident occurs in IoT devices, all logs from network devices through which the traffic flows could be potential evidence → Network Forensics.
 Device-level forensics includes all potential digital evidence that can be collected from IoT devices, like graphics, audio and video. Videos and graphics from a CCTV camera or audio from an Amazon Echo are great examples of digital evidence at the device level.

Challenges in IoT Forensics
Data Location:
 Most IoT data is spread across different locations, which are out of the user's control. This data could be in the Cloud, in a third party's location, in a mobile phone or in other devices.
 Identifying the location of evidence is considered one of the biggest challenges an investigator can face in order to collect the evidence.
 In addition, IoT data might be located in different countries and be mixed with other users' information, which means different countries' regulations are involved.

Lifespan Limitation of Digital Media Storage:
 Because of limited storage in IoT devices, the lifespan of data in IoT devices is short and data can be easily overwritten, resulting in the possibility of evidence being lost.
 Therefore, one of the challenges is the period of survival of the evidence in IoT devices before it is overwritten.
 Transferring the data to a local hub or to the Cloud could be an easy solution to this challenge. However, it presents challenges related to securing the chain of evidence and to proving the evidence has not been changed or modified.

Lack of Individual Identity:
 Even if investigators find evidence in the Cloud that proves a particular IoT device at the crime scene is the cause of the crime, it does not mean this evidence could lead to identification of the criminal.

Lack of Security:
 Evidence in IoT devices could be changed or deleted because of lack of security, which could make this evidence not solid enough to be accepted in a court of law.

Variety of Device Types:
 In the identification phase of forensics, the digital investigator needs to identify and acquire the evidence from a digital crime scene.
 Usually, the evidence source is a type of computing system such as a computer and/or a mobile phone.
 However, in IoT, the source of evidence could be objects like a smart refrigerator or a smart coffee maker.
 The device could be turned off because it has run out of battery, which reduces its chances of being found, especially if the IoT device is very small, in a hidden place, or looks like a traditional device.
 Carrying the device to the lab and finding a space for it could be another challenge that investigators face.
 Extracting evidence from these devices is considered another challenge, as most manufacturers adopt different platforms, operating systems and hardware.

Lifecycle Changes in Data Formats:
 The format of the data that is generated by IoT devices is not identical to what is saved in the Cloud.
 Data processing using analytic and translation functions in different places is likely before the data is stored in the Cloud. Hence, in order to be accepted in a court of law, the data should be returned to its original format before performing analysis.
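One standard answer to the chain-of-evidence challenge noted above is to record a cryptographic digest of every evidence item at acquisition time and re-verify it before analysis. A minimal sketch with Python's standard hashlib; the file name is an illustrative assumption.

```python
# Hedged sketch: record a SHA-256 digest when evidence is acquired so
# any later modification is detectable. 'device_image.bin' is assumed.
import hashlib

def sha256_of_file(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

acquired = sha256_of_file("device_image.bin")   # hash at acquisition time
# ... storage, transport, analysis ...
current = sha256_of_file("device_image.bin")
print("evidence intact" if current == acquired else "evidence altered")
```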
Cloud Computing
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

Service Model

Private cloud:
 The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units).
 It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.

Community cloud:
 The cloud infrastructure is provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations).
 It may be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination of them, and it may exist on or off premises.

Public cloud:
 The cloud infrastructure is provisioned for open use by the general public.
 It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them.
 It exists on the premises of the cloud provider.

Hybrid cloud:
 The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).

Enabling Technologies
Software Defined Networking:
Background:
Architecture:
Main Concepts:
Key Benefits:
 Software-driven control / programmability
 Simplified network equipment available as COTS
 Standardized management of network equipment → interoperability
 Cost reduction, especially for large infrastructure setups as in data centers

Network Function Virtualization:
Background:

Enabling Solution Component
 Hypervisor
 A technology that allows sharing of the hardware resources of a single machine by multiple guest Operating Systems (OS)
 Results in multiple Virtual Machines (VMs) on the same physical machine.
Service Chaining

SDN/NFV in the Data Center
 NFV Data Center
 Used by service providers to host communications and networking services
 Services can be loaded as cloud-based software on commercial off-the-shelf (COTS) server hardware
 Applications are hosted in the data center so they can be accessed via the cloud
 SDN can work in tandem with NFV
 Traffic steering in an NFV data center.

Security Topics for Cloud Computing

Cloud Information Security Objectives
 Seven complementary principles that support information assurance are:
 Confidentiality, Integrity, Availability (the CIA triad), and
 Authentication, Authorization, Auditing, and Accountability (AAAA)
 These 7 principles are summarized in the following slides.

Confidentiality, Integrity, Availability (CIA)
CIA - a way to think about security trade-offs.
 Confidentiality refers to the need to keep confidential sensitive data such as customer information, passwords, or financial data.
 Integrity refers to keeping data or messages correct.
 Availability refers to making data available to those who need it.

Confidentiality in cloud systems is related to the areas of intellectual property rights, covert channels, traffic analysis, encryption, and inference:
 Intellectual property (IP) includes inventions, designs, and artistic, musical, and literary works
 Covert channels: A covert channel is an unauthorized and unintended communication path that enables the exchange of information. Covert channels can be accomplished, for example, through inappropriate use of storage mechanisms.
 Encryption involves scrambling messages so that they cannot be read by an unauthorized entity, even if they are intercepted
 Traffic analysis is a form of confidentiality breach that can be accomplished by analyzing the volume, rate, source, and destination of message traffic, even if it is encrypted
 Inference is usually associated with database security. Inference is the ability of an entity to use and correlate information protected at one level of security to uncover information that is protected at a higher security level.

Integrity requires that the following three principles are met (a short integrity-checking sketch follows the AAAA list below):
 Modifications are not made to data by unauthorized personnel or processes.
 Unauthorized modifications are not made to data by authorized personnel or processes.
 The data is internally and externally consistent, i.e. the internal information is consistent both among all sub-entities and with the real/external world.

Availability ensures the reliable and timely access to cloud data or cloud computing resources by the appropriate personnel. Availability guarantees that:
 the systems are functioning properly when needed.
 In addition, this concept guarantees that the security services of the cloud system are in working order.
 A denial-of-service attack is an example of a threat against availability.

The reverse of confidentiality, integrity, and availability is disclosure, alteration, and destruction (DAD).

AAAA
 Authentication is the testing or reconciliation of evidence of a user's identity. It establishes the user's identity and ensures that users are who they claim to be.
 Authorization refers to rights and privileges granted to an individual or process that enable access to computer resources and information assets.
 Auditing: To maintain operational assurance, organizations use two basic methods: system audits and monitoring. These methods can be employed by the cloud customer, the cloud provider, or both, depending on asset architecture and deployment.
 A system audit is a one-time or periodic event to evaluate security.
 Monitoring refers to an ongoing activity that examines either the system or the users, such as intrusion detection.
 An audit trail or log is a set of records that collectively provide documentary evidence of different cloud operations.
 Accountability is the ability to determine the actions and behaviors of a single individual within a cloud system.
 Accountability is related to the concept of nonrepudiation, wherein an individual cannot successfully deny the performance of an action.
 Audit trails and logs support accountability.
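As promised under the integrity principles above, here is a minimal sketch of integrity checking with a keyed message authentication code (HMAC), using Python's standard library. The key and message are illustrative; note that an HMAC alone does not provide nonrepudiation, since both parties hold the same key, which is why nonrepudiation relies on digital signatures instead.

```python
# Hedged sketch: verifying message integrity with an HMAC, so that
# unauthorized modifications are detectable. Key and message are examples.
import hmac
import hashlib

key = b"shared-secret-key"          # illustrative; manage real keys securely
message = b"balance=1000;user=alice"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the tag and compares in constant time
received_message, received_tag = message, tag
ok = hmac.compare_digest(
    hmac.new(key, received_message, hashlib.sha256).hexdigest(),
    received_tag,
)
print("integrity verified" if ok else "message was altered")
```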
Cloud Security Design Principles
The following 11 security design principles:
 Least privilege
 Separation of duties
 Defense in depth
 Fail safe
 Economy of mechanism
 Complete mediation
 Open design
 Least common mechanism
 Psychological acceptability
 Weakest link
 Leveraging existing components

Least Privilege:
 The principle of least privilege maintains that an individual, process, or other type of entity should be given the minimum privileges and resources for the minimum period of time required to complete a task.
 This approach reduces the opportunity for unauthorized access to sensitive information.

Separation of Duties:
 Separation of duties requires that completion of a specified sensitive activity or access to sensitive objects is dependent on the satisfaction of a plurality of conditions. For example:
 an authorization that requires signatures of more than one individual, or
 the arming of a weapons system that requires two individuals with different keys
 Thus, separation of duties forces collusion among entities in order to compromise the system.

Defense in Depth
 Defense in depth is the application of multiple layers of protection wherein a subsequent layer will provide protection if a previous layer is breached
 The Information Assurance Technical Framework Forum (IATFF), an organization sponsored by the National Security Agency (NSA), has produced a document titled the "Information Assurance Technical Framework" (IATF) that provides excellent guidance on the concepts of defense in depth
 Defense in multiple places - information protection mechanisms placed in a number of locations to protect against internal and external threats
 Layered defenses - a plurality of information protection and detection mechanisms employed so that an adversary or threat must negotiate a series of barriers to gain access to critical information
 Security robustness - an estimate of the robustness of information assurance elements based on the value of the information system component to be protected and the anticipated threats
 Deploy KMI/PKI - use of robust key management infrastructures (KMI) and public key infrastructures (PKI)
 Deploy intrusion detection systems - application of intrusion detection mechanisms to detect intrusions, evaluate information, examine results, and, if necessary, take action.

Cloud Context
Defense in depth uses a layered approach to security:
 Physical security, such as limiting access to a datacenter to only authorized personnel.
 Identity and access security, controlling access to infrastructure and change control.
 Perimeter security, including distributed denial of service (DDoS) protection to filter large-scale attacks before they can cause a denial of service for users.
 Network security, which can limit communication between resources using segmentation and access controls.
 The compute layer, which can secure access to virtual machines either on-premises or in the cloud by closing certain ports.
 Application layer security, which ensures that applications are secure and free of security vulnerabilities.
 Data layer security, which controls access to business and customer data, and encryption to protect data.

Fail Safe:
 Fail safe means that if a cloud system fails it should fail to a state in which the security of the system and its data are not compromised.
 One implementation of this philosophy would be to make a system default to a state in which a user or process is denied access to the system.
 A complementary rule would be to ensure that when the system recovers, it should recover to a secure state and not permit unauthorized access to sensitive information.
 In the situation where system recovery is not done automatically, the failed system should permit access only by the system administrator and not by other users, until security controls are reestablished.

Economy of Mechanism
 Economy of mechanism promotes simple and comprehensible design and implementation of protection mechanisms, so that unintended access paths do not exist or can be readily identified and eliminated
 The principle states that security mechanisms should be as simple and small as possible
 If the design and implementation are simple and small, fewer possibilities exist for errors
 The checking and testing process is less complicated, so that fewer components need to be tested.

Complete Mediation
 In complete mediation, every request by a subject to access an object in a computer system must undergo a valid and effective authorization procedure
 This mediation must not be suspended or become capable of being bypassed, even when the information system is being initialized, undergoing shutdown, being restarted, or is in maintenance mode

Open Design
 There has always been an ongoing discussion about the merits and strengths of security designs that are kept secret versus designs that are open to scrutiny and evaluation by the community at large.
 A good example is an encryption system
 For most purposes, an open-access cloud system design that has been evaluated and tested by a myriad of experts provides a more secure authentication method than one that has not been widely assessed.
Least Common Mechanism
 This principle states that in systems with multiple users, the mechanisms allowing resources shared by more than one user should be minimized as much as possible.
 This principle may also be restrictive because it limits the sharing of resources
 Shared access paths can be sources of unauthorized information exchange and can provide unintentional data transfers (also known as covert channels)
 Example: If a file needs to be accessed by more than one user, then these users should use separate channels to access the resource, as this helps to prevent unforeseen consequences that could cause security problems
 Thus, the least common mechanism promotes the least possible sharing of common security mechanisms
 Only a minimum number of protection mechanisms should be common to multiple users.

Psychological Acceptability
 Psychological acceptability refers to the ease of use and intuitiveness of the user interface that controls and interacts with the cloud access control mechanisms
 The principle states that a security mechanism should not make the resource more complicated to access than if the security mechanisms were not present
 In other words, the principle recognizes the human element in computer security
 If security-related software or computer systems are too complicated to configure, maintain, or operate, the user will not employ the necessary security mechanisms.

Weakest Link
 A chain is only as strong as its weakest link
 In the context of cloud systems, the security of a cloud system is only as good as its weakest component
 Thus, it is important to identify the weakest mechanisms in the security chain and layers of defense, and improve them so that risks to the system are mitigated to an acceptable level.

Leveraging Existing Components
 This principle aims to increase cloud system security by leveraging existing components
 In many instances, the security mechanisms of a cloud implementation might not be configured properly or used to their maximum capability.
 Reviewing the state and settings of the security mechanisms and ensuring that they are operating at their optimum design points will greatly improve the security posture of an information system.

The shared responsibility model

The Zero-trust methodology

IAM
Identity Management (IdM)
 User Identities (Unique)
 Account Management
 Authentication
Access Management (AcM)
 Roles and Privileges
 Authorization
 Access Control

Why is Identity important?
 Concept of identity as a security perimeter
 It is key behind authentication and authorization

Why IAM (tools and functions)?
 Improve operational efficiency
 IAM technology and processes can improve efficiency by automating user on-boarding and other repetitive tasks (e.g. self-service for users requesting password resets)
 Regulatory security compliance management
 Need to comply with various regulatory, privacy, and data protection requirements.

Identity as the primary security perimeter
An identity is how someone or something can be verified and authenticated, and may be associated with:
 User
 Application
 Device
 Other
Four pillars of identity:
 Administration
 Authentication
 Authorization
 Auditing

Common identity attacks
Types of security threats:
 Password-based attacks
 Many password-based attacks employ brute force techniques to gain unauthorized access, often using a dictionary.
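Brute-force and dictionary attacks like those above are conventionally slowed down by storing salted, deliberately slow password hashes rather than plaintext passwords. A minimal sketch using Python's standard PBKDF2 routine; the iteration count is an illustrative choice.

```python
# Hedged sketch: store salted, slow hashes instead of plaintext passwords,
# raising the cost of dictionary and brute-force attacks.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)                 # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, expected):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("password123", salt, stored))                  # False
print(verify_password("correct horse battery staple", salt, stored)) # True
```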
 a variant on phishing. Hackers build databases of  MFA – More than 2 factors used
information about users, which can be used to create  Factors of the same types are not considered as 2FA or
highly credible emails. MFA
 A password-spray attack
 Attacker sprays a commonly used password against Authentication via Passwords
multiple accounts  Type 1 Authentication (Something you know)
 Passwords can be either:
Modern authentication and the role of the identity provider  Static: Same password used at each Logon
 Modern authentication is an umbrella term for authentication  Dynamic: Different password used for each Logon (e.g.
and authorization methods between a client and a server. OTP).
 At the center of modern authentication is the role of the  The changing of passwords can also fall between these two
identity provider (IdP). extremes (e.g monthly, quarterly etc)
 IdP offers authentication, authorization, and auditing
services.
 IdP enables organizations to establish authentication and
authorization policies, monitor user behavior, and more.
 A fundamental capability of an IdP and “modern
authentication” is the support for single sign-on (SSO).
 Microsoft Azure Active Directory is an example of a cloud-
based identity provider.
 Passwords can be stolen from the file-system:
IAM
 Introduction of Hashed Passwords
 IAM architecture encompasses several layers of technology,
 Dictionary Attacks
services, and processes.
 Use of multi-word passwords can be more robust against
 At the core of the deployment architecture is a directory
dictionary attacks as against single word passwords
service (such as LDAP or Active Directory) that acts as a
(which are relatively simpler to break)
repository for the identity, credential, and user attributes of
 Guessing attacks, Social engineering attacks, Sniffing attacks
the organization’s user pool.
 The directory interacts with IAM technology components
such as authentication, user management, provisioning, and
identity services that support the standard IAM practice and
processes within the organization.

Elements of an Authentication System

Authentication via Tokens


Tokens, in the form of small, hand-held devices, are used to provide
passwords. The following are the four basic types of tokens:
 Static password tokens:
 Owners authenticate themselves to the token by typing in a
secret password.
 If the password is correct, the token authenticates the owner
to an information system.
 Synchronous dynamic password tokens, clock-based
 The token generates a new, unique password value at fixed
time intervals that is synchronized with the same
password on the authentication server (this password is
the time of day encrypted with a secret key).
 The unique password is entered into a system or
workstation along with an owner’s PIN.
 The authentication entity in a system or workstation knows
an owner’s secret key and PIN, and the entity verifies that
the entered password is valid and that it was entered
during the valid time window.
 Synchronous dynamic password tokens, counter-based
 The token increments a counter value that is synchronized with a counter in the authentication server.
 The counter value is encrypted with the user’s secret key inside the token, and this value is the unique password that is entered into the system authentication server.
 The authentication entity in the system or workstation knows the user’s secret key and verifies that the entered password is valid by performing the same encryption on its identical counter value.
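The counter-based scheme is essentially what RFC 4226 (HOTP) standardizes. The sketch below is a simplified Python rendering in which an HMAC over the shared counter stands in for the "encryption" described above:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # MAC the 8-byte big-endian counter with the shared secret key.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Token and server each keep their own counter; both sides compute the same value.
assert hotp(b"shared-secret", 42) == hotp(b"shared-secret", 42)
```

A clock-based token works the same way, except the "counter" is derived from the current time window (e.g. `int(time.time() // 30)`).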
 Asynchronous tokens, challenge-response
 A workstation or system generates a random challenge string, and the owner enters the string into the token along with the proper PIN.
 The token performs a calculation on the string using the PIN and generates a response value that is then entered into the workstation or system.
 The authentication mechanism in the workstation or system performs the same calculation as the token using the owner’s PIN and challenge string and compares the result with the value entered by the owner. If the results match, the owner is authenticated.
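A minimal sketch of that challenge-response exchange (the helper names are invented for illustration, and an HMAC stands in for whatever calculation a real token performs):

```python
import hashlib
import hmac
import secrets

def new_challenge() -> str:
    # System side: a fresh random challenge defeats replay of old responses.
    return secrets.token_hex(16)

def token_calculate(pin: str, challenge: str) -> str:
    # Token side: combine the PIN and the challenge into a response value.
    return hmac.new(pin.encode(), challenge.encode(), hashlib.sha256).hexdigest()

def authenticate(stored_pin: str, challenge: str, response: str) -> bool:
    # System side: repeat the same calculation and compare in constant time.
    return hmac.compare_digest(token_calculate(stored_pin, challenge), response)

challenge = new_challenge()
assert authenticate("1234", challenge, token_calculate("1234", challenge))
```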
Challenge-Response

Evolving Attacks and Defense Systems

Authentication via Memory Cards and Smart Cards

Type 2 Authentication (Something you have)
 Memory cards provide nonvolatile storage of information, but they do not have any processing capability
 A memory card stores encrypted passwords and other related identifying information.
 An ATM card is an example of a memory card.
 Smart cards provide even more capability than memory cards by incorporating additional processing power on the card
 These credit-card-size devices comprise a microprocessor and memory.
 They are used to store digital signatures, private keys, passwords, and other personal information.

Authentication via Biometrics

Type 3 Authentication (Something you are)
 In biometrics, identification is a one-to-many search of an individual’s characteristics from a database of stored images
 There are three main performance measures in biometrics:
 False rejection rate (FRR) or Type I Error - the percentage of valid subjects that are falsely rejected.
 False acceptance rate (FAR) or Type II Error - the percentage of invalid subjects that are falsely accepted.
 Crossover error rate (CER) - the percentage at which the FRR equals the FAR. The smaller the CER, the better the device is performing.
 In addition to the accuracy of biometric systems, Enrollment Time, Throughput Rate and Acceptability are also important measures
 Enrollment Time is the time that it takes to initially register with a system by providing samples of the biometric characteristic to be evaluated. An acceptable enrollment time is around two minutes.
 The Throughput Rate is the rate at which the system processes and identifies or authenticates individuals. Acceptable throughput rates are in the range of 10 subjects per minute.
 Acceptability refers to considerations of privacy, invasiveness, and psychological and physical comfort when using the system. For example, a concern with retina scanning systems might be the exchange of body fluids on the eyepiece.

Authentication Factors: Pros and Cons
Summary of strengths and weaknesses of different authentication factors

Implementing IdM
Typical undertakings in putting identity management in place include the following:
 Establishing a database of identities and credentials
 Managing users’ access rights
 Enforcing security policy
 Developing the capability to create and modify accounts
 Setting up monitoring of resource accesses
 Installing a procedure for removing access rights
 Providing training in proper procedures

Generating a certificate at Certification Authority
Tickets
 Each trusted site has a unique master key that it shares with the
KDC
 The master key allows each site to talk to the KDC safely
 In addition, the KDC can cryptographically “package”
temporary keys using the master keys so that one site can
safely forward the right keys to another site.
Extensions to Basic KDC:
 To combat security problems, the protocol incorporates extra data in key distribution messages, notably message authentication codes, time stamps, and the names of senders and recipients
 In 1978, Needham and Schroeder published a simple protocol to efficiently address forgery problems faced by the KDC.
 The Needham-Schroeder (NS) protocol incorporates nonces and a challenge-response to detect forged or replayed messages.

KDC with NS Extensions

Challenge-Response in NS Protocol

The concept of directory services and Active Directory

 A directory is a hierarchical structure that stores information about objects on the network.
 A directory service stores directory data and makes it available to network users, administrators, services, and applications.
 The best-known service of this kind is Active Directory Domain Services (AD DS), a central component in organizations with on-premises IT infrastructure.
 Azure Active Directory is the evolution of identity and access management solutions, providing organizations with an Identity as a Service (IDaaS) solution for all their apps across cloud and on-premises.
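To make the directory's repository role concrete, a small lookup sketch using the third-party ldap3 Python library (the server name, service account, and base DN are hypothetical):

```python
from ldap3 import Server, Connection  # third-party library, assumed installed

# Bind to the directory and look up one user's identity attributes.
server = Server("ldaps://dc01.example.com")           # hypothetical domain controller
conn = Connection(server, user="EXAMPLE\\svc-iam",
                  password="...", auto_bind=True)     # service account credentials

conn.search("dc=example,dc=com",                      # search base
            "(sAMAccountName=jdoe)",                  # filter: a single account
            attributes=["cn", "mail", "memberOf"])    # identity and entitlement data
for entry in conn.entries:
    print(entry.cn, entry.mail)
```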
The concept of Federated Services
A simplified federation scenario:
 The website uses the authentication services of IdP-A.
 The user authenticates with IdP-B.
 IdP-A has a trust relationship configured with IdP-B.
 When the user’s credentials are passed to the website, the website trusts the user and allows access.
Kerberos and Crypto Tokens
 Kerberos provides a mechanism to authenticate and share temporary secret keys between cooperating processes.
 Enables indirect authentication with a Key Distribution Center (KDC).
 The KDC issues tickets for authentication to different services (e.g. a mail server, print server etc.).
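The "packaging" of temporary keys under each site's master key can be sketched in a few lines. This is a toy model only, not the real Kerberos message formats; the cryptography package's Fernet cipher is a stand-in for the symmetric encryption:

```python
from cryptography.fernet import Fernet  # third-party package, assumed installed

# Each trusted site shares a master key with the KDC.
master_keys = {"alice": Fernet.generate_key(), "mailserver": Fernet.generate_key()}

def issue_ticket(client: str, service: str) -> tuple[bytes, bytes]:
    session_key = Fernet.generate_key()  # temporary key for this client/service pair
    # One copy of the session key sealed under the client's master key...
    for_client = Fernet(master_keys[client]).encrypt(session_key)
    # ...and one sealed under the service's master key: the "ticket" the client
    # safely forwards to the service without being able to read or alter it.
    ticket = Fernet(master_keys[service]).encrypt(client.encode() + b"|" + session_key)
    return for_client, ticket
```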
Kerberos Authentication Server

Authenticating to a Kerberized Server

Ticket Granting Ticket
Kerberos KDC with 2-step ticket granting process
IAM Protocols and Standards for Cloud Services

IAM for Cloud Services
 Many enterprises are embracing and adopting cloud services
 Multiple IAM standards and protocols help organizations implement efficient User Access Management practices in the cloud:
 Offer an SSO experience and avoid duplication of identity, attributes and/or credentials → SAML
 Support automatic (de-)provisioning of user accounts → SPML
 Enforce privilege and entitlement-based access control → XACML
 Integrate applications / cloud services without sharing credentials → OAuth 2.0
 E.g. authorize cloud service X to access my data in cloud service Y without disclosing my credentials to X.

SAML
 Security Assertion Markup Language (SAML, pronounced sam-el)
 An XML-based, open-standard data format for exchanging authentication and authorization data between parties
 In particular, used between an identity provider (IdP) and a service provider (SP)
 SAML is a product of the OASIS Security Services Technical Committee.
SAML Principles:
 SAML Roles: the specification defines three roles:
 the principal (typically a user),
 the identity provider (IdP), and
 the service provider (SP)
SAML Use Case:
 The principal requests a service from the service provider
 The service provider requests and obtains an identity assertion from the identity provider
 On the basis of this assertion, the service provider can make an access control decision, i.e. it can decide whether to perform some service for the connected principal
 Before delivering the identity assertion to the SP, the IdP may request some information from the principal – such as a user name and password – in order to authenticate the principal
 SAML does not specify the method of authentication at the identity provider; it may use a username and password, or another form of authentication, including multi-factor authentication.
 One identity provider may provide SAML assertions to many service providers. Similarly, one SP may rely on and trust assertions from many independent IdPs.

Web Browser SSO using SAML
 The primary SAML use case is Web Browser Single Sign-On (SSO), where a user using a user agent (usually a web browser) requests a web resource protected by a SAML service provider

What is SSO?
 Single sign-on (SSO) is a property of access control of multiple related, yet independent, software systems.
 With this property, a user logs in with a single ID and password to gain access to a connected system or systems without being prompted for different usernames or passwords, or in some configurations seamlessly signs on at each system.
 This is typically accomplished using the Lightweight Directory Access Protocol (LDAP) and stored LDAP databases on (directory) servers.
Message flow:
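As a flavour of what the first SSO message looks like on the wire, a sketch of the SP side of the SAML HTTP-Redirect binding (the request fields and endpoints are made up): the AuthnRequest is raw-DEFLATE-compressed, base64-encoded, and passed as a query parameter.

```python
import base64
import urllib.parse
import zlib

# The SP's AuthnRequest XML (heavily trimmed; IDs and URLs are hypothetical).
authn_request = (
    '<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
    'ID="_abc123" Version="2.0" IssueInstant="2024-01-01T00:00:00Z" '
    'AssertionConsumerServiceURL="https://sp.example.com/acs"/>'
)

# HTTP-Redirect binding: raw DEFLATE, then base64, then URL-encode.
deflated = zlib.compress(authn_request.encode())[2:-4]  # strip zlib header/checksum
query = urllib.parse.urlencode({"SAMLRequest": base64.b64encode(deflated).decode()})
print("https://idp.example.com/sso?" + query)  # the user's browser is sent here
```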
SPML
 Service Provisioning Markup Language
 An XML-based framework developed by OASIS
 Used to provision user accounts and profiles with the cloud service
 Enables “just-in-time provisioning” to create accounts for new users in real time (instead of pre-registering user accounts)
 SPML helps achieve the following twin objectives:
 Automate IT tasks for user provisioning
 Enable interoperability between different provisioning systems using standard SPML interfaces

SPML Principles
 SPML Roles:
 Requesting Authority (RA)
 The client in SPML
 Provisioning Service Point (PSP)
 Listens to the request from the RA, processes it, and returns a response to the RA
 Provisioning Service Target (PST)
 The actual resource on which the action is taken
 E.g. an LDAP directory that stores an organization’s user accounts, or a ticketing system used to issue access tickets.
Message Flow:

XACML
 eXtensible Access Control Markup Language
 An OASIS, general-purpose, XML-based standard
 Defines a declarative, fine-grained, attribute-based access control policy language and architecture
 Also defines a processing model describing how to evaluate access requests according to the rules defined in policies
 XACML is primarily an attribute-based access control (ABAC) system, also known as a policy-based access control (PBAC) system
 Attributes associated with a user or resource are inputs into the decision of whether a given user may access a given resource in a particular way
 Role-based access control (RBAC) can also be implemented in XACML as a specialization of ABAC.
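Real XACML policies are XML documents evaluated by a PDP, but the underlying ABAC decision can be sketched in plain Python (the attribute names and the single rule here are invented for illustration):

```python
# Each rule maps attributes of subject, resource, action and environment to a decision.
def evaluate(subject: dict, resource: dict, action: str, env: dict) -> str:
    if (subject.get("department") == resource.get("owning_department")
            and action == "read"
            and env.get("connection") == "corporate-vpn"):
        return "Permit"
    return "Deny"  # default-deny for unmatched requests, as a PDP would

decision = evaluate(
    {"id": "jdoe", "department": "finance"},            # subject attributes
    {"name": "q3-ledger", "owning_department": "finance"},  # resource attributes
    "read",
    {"connection": "corporate-vpn"},                    # environment attributes
)
print(decision)  # Permit
```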
XACML Architecture

Abbr. | Term | Description
PAP | Policy Administration Point | Point which manages access authorization policies
PDP | Policy Decision Point | Point which evaluates access requests against authorization policies before issuing access decisions
PEP | Policy Enforcement Point | Point which intercepts a user's access request to a resource, makes a decision request to the PDP to obtain the access decision (i.e. access to the resource is approved or rejected), and acts on the received decision
PIP | Policy Information Point | The system entity that acts as a source of attribute values (i.e. a resource, subject, environment)
PRP | Policy Retrieval Point | Point where the XACML access authorization policies are stored, typically a database or the filesystem

OAuth (2.0)
 Auth in OAuth could imply Authentication, but means Authorization.
 OAuth (Open Authorization) is an open standard for access delegation
 Commonly used as a way for Internet users to grant websites or applications access to their information on other websites, but without giving them the passwords
 This mechanism is used by companies such as Amazon, Google, Facebook, Microsoft, and Twitter to permit users to share information about their accounts with third-party applications or websites
 OAuth enables the following use cases:
 Delegated access control:
 I, the user, delegate another user or service access to the resource I own. For instance, via OAuth I grant Twitter (the service) the ability to post on my Facebook wall (the resource).
 Handling the password anti-pattern:
 Whenever you want to integrate two services together, in a traditional, legacy model you have to provide service B with your user credentials on service A, so that service B can pretend to be you with service A. This has many risks, of course. Using OAuth eliminates the issues with these patterns and lets the user control what service B can do on behalf of the user with service A.
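A sketch of the OAuth 2.0 authorization-code flow from the client application's point of view (the endpoints, client credentials and scope are hypothetical, and the requests library is assumed to be installed):

```python
import secrets
import urllib.parse
import requests

AUTHZ_ENDPOINT = "https://idp.example.com/authorize"   # hypothetical IdP endpoints
TOKEN_ENDPOINT = "https://idp.example.com/token"
CLIENT_ID, CLIENT_SECRET = "my-client", "my-secret"    # issued at client registration
REDIRECT_URI = "https://app.example.com/callback"

# Step 1: send the user's browser to the authorization server to consent.
state = secrets.token_urlsafe(16)  # anti-CSRF value, checked when the user returns
login_url = AUTHZ_ENDPOINT + "?" + urllib.parse.urlencode({
    "response_type": "code", "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI, "scope": "profile", "state": state,
})

# Step 2: the IdP redirects back with ?code=...; the client exchanges the
# short-lived code for an access token, server-to-server, never seeing a password.
def exchange_code(code: str) -> str:
    resp = requests.post(TOKEN_ENDPOINT, data={
        "grant_type": "authorization_code", "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID, "client_secret": CLIENT_SECRET,
    })
    return resp.json()["access_token"]  # later sent as "Authorization: Bearer <token>"
```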
OAuth Vs SAML Vs OpenID
 OpenID Connect is built on the OAuth 2.0 protocol and uses an additional JSON Web Token (JWT), called an ID token
 Standardizes areas that OAuth 2.0 leaves up to choice, such as scopes and endpoint discovery.
 It is specifically focused on user authentication and is widely used to enable user logins on consumer websites and mobile apps.
 SAML is independent of OAuth, relying on an exchange of messages to authenticate in the XML SAML format, as opposed to JWT
 It is more commonly used to help enterprise users sign in to multiple applications using a single login.
 OpenID is specifically designed as an authentication protocol and OAuth for authorization

Role of CSP
 The Cloud Service Provider (CSP) offers infrastructure security across all shared-responsibility models
 There are a variety of compliance frameworks that can serve as a roadmap for security of the cloud environment
 These standards are designed to assure consistency and security for consumers
 ISO/IEC 27017 and ISO/IEC 27018 are frameworks designed for cloud computing providers for the protection of their clients
 The first focuses primarily on security controls, the second more on privacy concerns
 The Service Organization Control (SOC) is a standard of compliance that has three types of certification, named SOC 1, SOC 2 and SOC 3
 SOC 1 is primarily meant for banks, investment firms and other such companies that house financial data, and SOC 2 is for non-financial companies that house or process data, which could happen to be financial or otherwise
 It is this latter certification (SOC 2) that software and cloud providers often use to verify their technology controls and processes
 Obtaining a SOC 2 certification is a rigorous process, since a third-party CPA firm comes to the vendor's datacenter site and performs an assessment of their availability and security stance.
Infrastructure Security
Infrastructure and Application Security Layers

Defense in Depth

Infrastructure Security
 Perimeter Security to protect your “virtual network” via a combination of:
 DDoS mitigation solutions
 Firewall services (Network Firewalls and Web Application Firewalls)
 VPN services
 Network Security
 Network segmentation (e.g. hub and spoke vnets, Network Security Groups)
 Use of security rules to allow or deny network traffic
 Can be associated with a subnet or a network interface
 Host Security
 End-point protection services (e.g. anti-malware)
 Disk encryption
 Update Management
 Container Security
 Container Registry with Signed Container Images
 Authenticated access / RBAC to the Registry
 Network ACLs for access to control-plane APIs of the container management solution (e.g. Kubernetes).

Virtualization Security Management
 The important thing to remember from a security perspective is that there is a more significant impact when a host OS with user applications and interfaces is running outside of a VM at a level lower than the other VMs (i.e., a Type 2 architecture)
 Because of its architecture, the Type 2 environment increases the potential risk of attacks
 For example, a laptop running VMware with a Linux VM on a Windows XP system inherits the attack surface of both OSs, plus the virtualization code (VMM).

Hypervisor Risks
 The ability of the hypervisor to provide the necessary isolation during an attack greatly determines how well the virtual machines can survive risks
 Ideally, software code operating within a defined VM would not be able to communicate with or affect code running either on the physical host itself or within a different VM;
 However, several issues, such as bugs in the software, or limitations of the virtualization implementation, may put this isolation at risk
 Major vulnerabilities inherent in the hypervisor consist of rogue hypervisor rootkits, external modification to the hypervisor, and VM escape.

VM Security Practices
 Hardening the Host OS and limiting physical access to the
host
 Hardening the VM
 Hardening the Hypervisor
 Implement only one primary function per VM
 Use Unique NICs for Sensitive VMs
 Secure VM Remote Access.

4Cs of Cloud Native Security

Containers are packages of software that contain all of the necessary elements to run in any environment, including all libraries and dependencies required by the application. The OS is virtualized: the user mode of the OS is included with the containerized application.
The Code layer benefits from strong base (Cloud, Cluster, Container) security layers. You cannot safeguard against poor security standards in the base layers by addressing security at the Code level.

Cluster and Container Security

What are Containers?
 A software container is a standardized package of software.
 Everything needed for the software to run is inside the container.
 The software code, runtime, system tools, system libraries, and settings are all inside a single container.

Virtual Machine vs Containers
 Although containers are quite the hip and trending server virtualization technology, virtual machines (VMs) still dominate among deployed technologies.
 Linux, on the other hand, has been making significant inroads in its support of networking constructs. As Linux matures and is found in more places, its networking features have also grown.

Docker architecture
 Docker uses a client-server architecture.
 The Docker daemon
 The Docker daemon (dockerd) listens for Docker API requests
 Manages Docker objects such as images, containers, networks, and volumes.
 Builds, runs, and distributes containers
 The Docker client
 The Docker client talks to the Docker daemon
 The Docker client and daemon can run on the same system
 The Docker client can communicate with more than one daemon.
 The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface
 Docker registries
 A Docker registry stores Docker images.
 Docker Hub is a public registry that anyone can use.
 Docker is configured to look for images on Docker Hub by default
 Docker objects
 IMAGES: An image is a read-only template with instructions for creating a Docker container.
 CONTAINERS: A container is a runnable instance of an image.

Example for running a Docker container
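A minimal sketch using the Docker SDK for Python (assumed installed via `pip install docker`), which drives the same client-server REST API described above rather than the docker CLI:

```python
import docker  # Docker SDK for Python, assumed installed

client = docker.from_env()  # talks to dockerd over the local UNIX socket by default

# Pull the image from the default registry (Docker Hub) and run it in a container;
# roughly equivalent to: docker run --rm alpine echo "hello from a container"
output = client.containers.run("alpine", ["echo", "hello from a container"], remove=True)
print(output.decode())
```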
Kubernetes
 Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services
 Facilitates both declarative configuration and automation
 Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more:
 Service discovery and load balancing
 Storage orchestration
 Automated rollouts and rollbacks
 Automatic bin packing
 Self-healing
 Secret and configuration management.

K8s Cluster Security

Area of Concern | Recommendation
Network access to API Server (control plane) | All access to the Kubernetes control plane is not allowed publicly on the internet and is controlled by network access control lists restricted to the set of IP addresses needed to administer the cluster.
Network access to Nodes (nodes) | Nodes should be configured to only accept connections (via network access control lists) from the control plane on the specified ports, and accept connections for services in Kubernetes of type NodePort and LoadBalancer. If possible, these nodes should not be exposed on the public internet entirely.
Kubernetes access to Cloud Provider API | Each cloud provider needs to grant a different set of permissions to the Kubernetes control plane and nodes. It is best to provide the cluster with cloud provider access that follows the principle of least privilege for the resources it needs to administer. The Kops documentation provides information about IAM policies and roles.
Access to etcd | Access to etcd (the datastore of Kubernetes) should be limited to the control plane only. Depending on your configuration, you should attempt to use etcd over TLS. More information can be found in the etcd documentation.
etcd Encryption | Wherever possible it is a good practice to encrypt all storage at rest, and since etcd holds the state of the entire cluster (including Secrets) its disk should especially be encrypted at rest.

Cluster Security
 Protecting a cluster from accidental or malicious access can be done via:
 Passing all API calls through Authentication and Authorization
 Encrypting all API communication in the cluster with TLS
 Controlling the runtime capabilities of a workload can be done via:
 Defining Resource quota limits to limit the amount of CPU, memory, or persistent disk a namespace can allocate, and also to control how many pods, services, or volumes exist in each namespace
 Controlling the privileges associated with containers using the Kubernetes Pod security policies
 Restricting network access
 Application authors can restrict which pods in other namespaces may access pods and ports within their namespaces.

Container Security

Area of Concern | Recommendation
Container Vulnerability Scanning and OS Dependency Security | As part of an image build step, you should scan your containers for known vulnerabilities.
Image Signing and Enforcement | Sign container images to maintain a system of trust for the content of your containers.
Disallow privileged users | When constructing containers, create users inside of the containers that have the least level of operating system privilege necessary in order to carry out the goal of the container.

Platform Security Features in Microsoft Azure

Application Security
 Most applications are designed and deployed using a micro-services architecture and REST APIs
 REST APIs are designed to be STATELESS
 Requires a secure approach for session management
 OWASP Top 10 vulnerabilities
 One of the best compilations of vulnerabilities that impact web applications
 Must be verified in every application before deployment
 Most applications rely on a variety of security assets (like certificates, API keys, passwords and other secrets)
 A secure key vault on the cloud is useful to store and manage access to these secrets.

Area of Concern | Recommendation
Access over TLS only | If your code needs to communicate by TCP, perform a TLS handshake with the client ahead of time. With the exception of a few cases, encrypt everything in transit. Going one step further, it is a good idea to encrypt network traffic between services. This can be done through a process known as mutual TLS authentication, or mTLS, which performs a two-sided verification of communication between two certificate-holding services.
Limiting port ranges of communication | Wherever possible, only expose the ports on your service that are absolutely essential for communication or metric gathering.
3rd Party Dependency Security | It is a good practice to regularly scan your application's third-party libraries for known security vulnerabilities. Each programming language has a tool for performing this check automatically.
Static Code Analysis | Most languages provide a way for a snippet of code to be analyzed for any potentially unsafe coding practices. Whenever possible you should perform checks using automated tooling that can scan codebases for common security errors.
Dynamic probing attacks | There are a few automated tools that you can run against your service to try some of the well-known service attacks. These include SQL injection, CSRF, and XSS. One of the most popular dynamic analysis tools is the OWASP Zed Attack Proxy tool.
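A client-side sketch of the "Access over TLS only" recommendation using Python's standard ssl module (the host is a placeholder, and the commented-out mTLS certificate file is hypothetical):

```python
import socket
import ssl

context = ssl.create_default_context()   # verifies the server certificate chain
# context.load_cert_chain("client.pem")  # for mTLS, the client presents a cert too

with socket.create_connection(("example.com", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="example.com") as tls:
        print(tls.version())              # e.g. "TLSv1.3"; traffic is now encrypted
```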
Data Security
 Data storage on the cloud is governed by various local laws
 HIPAA, GDPR, SOC
 Cloud data storage and access must:
 Support Physical Isolation
 Support Backup, Recovery, Retention and Disposal rules that can be configured as per organizational policies
 Support Authentication and Authorization of all access
 Support Encryption of confidential data
 Support secure transfer of data when required
 Includes both File-based Storage and Databases
 Database Firewalls, Database Audit Logs

Data Privacy Compliance Frameworks, Cloud Forensics
HIPAA
 Health Insurance Portability and Accountability Act of 1996 (HIPAA)
 Also known as the Kennedy–Kassebaum Act
 Is a United States federal statute enacted by the 104th United States Congress and signed into law by President Bill Clinton on August 21, 1996
 Modernized the flow of healthcare information, stipulates how personally identifiable information maintained by the healthcare and healthcare insurance industries should be protected from fraud and theft, and addressed some limitations on healthcare insurance coverage.

Data Privacy via HIPAA
 HIPAA prohibits healthcare providers and healthcare businesses, called covered entities, from disclosing protected information to anyone other than a patient and the patient's authorized representatives without their consent
 With limited exceptions, it does not restrict patients from receiving information about themselves
 It does not prohibit patients from voluntarily sharing their health information however they choose, nor – if they disclose medical information to family members, friends, or other individuals not a part of a covered entity – legally require them to maintain confidentiality
HIPAA Privacy and Security Rule
 The HIPAA Privacy Rule is composed of national regulations for the use and disclosure of Protected Health Information (PHI) in healthcare treatment, payment and operations by covered entities.
 The effective compliance date of the Privacy Rule was April 14, 2003
 The Final Rule on Security Standards was issued on February 20, 2003
 It took effect on April 21, 2003, with a compliance date of April 21, 2005, for most covered entities
 The Security Rule complements the Privacy Rule
 While the Privacy Rule pertains to all Protected Health Information (PHI), including paper and electronic, the Security Rule deals specifically with Electronic Protected Health Information (EPHI)
 It lays out three types of security safeguards required for compliance:
 Administrative: policies and procedures designed to clearly show how the entity will comply with the act
 Physical: controlling physical access to protect against inappropriate access to protected data
 Technical: controlling access to computer systems and enabling covered entities to protect communications containing PHI transmitted electronically over open networks from being intercepted by anyone other than the intended recipient

GDPR
General Data Protection Regulation (GDPR)
 A regulation in EU law on data protection and privacy in the European Union (EU) and the European Economic Area (EEA)
 The GDPR is an important component of EU privacy law and of human rights law, in particular Article 8(1) of the Charter of Fundamental Rights of the European Union
 It also addresses the transfer of personal data outside the EU and EEA areas
 The GDPR's primary aim is to enhance individuals' control and rights over their personal data and to simplify the regulatory environment for international business
 The GDPR was adopted on 14 April 2016 and became enforceable beginning 25 May 2018

GDPR Organization
The GDPR 2016 has eleven chapters, concerning:
 General provisions, principles, rights of the data subject, duties of data controllers or processors, transfers of personal data to third countries, supervisory authorities, cooperation among member states, remedies, liability or penalties for breach of rights, and miscellaneous final provisions
 Duties of Data Controllers or Processors:
 Must clearly disclose any data collection
 Pseudonymisation is a required process for stored data
 Transforms personal data in such a way that the resulting data cannot be attributed to a specific data subject without the use of additional information
 Records of processing activities have to be maintained
 Controllers and processors of personal data must put in place appropriate technical and organizational measures to implement the data protection principles
 Transfer of Personal Data to Third Countries:
 Chapter V of the GDPR forbids the transfer of the personal data of EU data subjects to countries outside of the EEA — known as third countries — unless appropriate safeguards are imposed, or the third country's data protection regulations are formally considered adequate by the European Commission.
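A sketch of keyed pseudonymisation as described above (illustrative only; key handling is deliberately simplified, and the key name is hypothetical). The key is the "additional information": kept separately, e.g. in a key vault, it is what allows or prevents re-linking data to a person.

```python
import hashlib
import hmac

PSEUDONYM_KEY = b"keep-me-in-a-key-vault"  # hypothetical secret, stored apart from the data

def pseudonymise(identifier: str) -> str:
    # Without the key, the token cannot be attributed to a specific data subject.
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"subject": pseudonymise("jane.doe@example.com"), "purchase": "..."}
print(record)  # the stored record no longer names the data subject directly
```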
PCI-DSS
 Payment Card Industry Data Security Standard (PCI DSS)
 Is an information security standard for organizations that handle credit cards from the major card schemes
 The standard was created to increase controls around cardholder data to reduce credit card fraud
 Validation of compliance is performed annually or quarterly, by a method suited to the volume of transactions handled:
 Self-Assessment Questionnaire (SAQ) — smaller volumes
 External Qualified Security Assessor (QSA) — moderate volumes; involves an Attestation on Compliance (AOC)
 Firm-specific Internal Security Assessor (ISA) — larger volumes; involves issuing a Report on Compliance (ROC)

PCI-DSS Requirements
 Twelve requirements for compliance, organized into six logically related groups:
 Build and Maintain a Secure Network and System
 Protect Cardholder Data
 Maintain a Vulnerability Management Program
 Implement Strong Access Control Measures
 Regularly Monitor and Test Networks
 Maintain an Information Security Policy

PCI-DSS Adherence Steps
Three steps for adhering to the PCI-DSS:
 Assess — identifying all locations of cardholder data, taking an inventory of IT assets and business processes for payment card processing, and analyzing them for vulnerabilities that could expose cardholder data.
 Repair — fixing identified vulnerabilities, securely removing any unnecessary cardholder data storage, and implementing secure business processes.
 Report — documenting assessment and remediation details, and submitting compliance reports to the acquiring bank and card brands (or other requesting entity, in the case of a service provider).

Cloud Forensics

What is Cyber Forensics?
Cyber Forensics is the application of investigation and analysis techniques to gather and preserve evidence from a particular computing device in a way that is suitable for presentation in a court of law.

Cloud Forensics: Introduction
 There is a lot of difference between traditional computer forensics and cloud forensics
 While the cloud is becoming more widely used by companies across the globe, few of these companies have included cloud forensics in their cyber-security investments
 Many companies still mistakenly believe that traditional forensics is enough.
 However, without investment into cloud forensics, businesses could find themselves unable to prosecute attackers, collect evidence on what actually happened, and/or have their case fully presented in court.

Traditional Vs Cloud Forensics
 Cloud forensics is a blend of digital forensics and cloud computing.
 It involves investigating crimes that are committed using the cloud.
 Traditional computer forensics is a process by which media is collected at the crime scene, or where the media was obtained; it includes the practice of preserving the data, the validation of said data, and the interpretation, analysis, documentation, and presentation of the results in the courtroom.
 In most traditional computer forensics, any evidence that has been discovered within the media will be under the control of the relevant law enforcement. This is where the divide between cloud and traditional forensics begins.
 In the cloud, the data can potentially exist anywhere on earth, and potentially outside of your law enforcement jurisdiction. This can result in control of the evidence (and the process of validating it) becoming incredibly challenging.

Cloud Forensics Overview
 Cloud forensics combines the realities of cloud computing with digital forensics, which focuses on collecting media from a cloud environment.
 This requires investigators to work with multiple computing assets, such as virtual and physical servers, networks, storage devices, applications, and much more.
 For most of these situations, the cloud environment will remain live and capable of change.
 Despite this wide array of different assets and jurisdiction challenges, the end result must stay the same: evidence must be presented in a court of law.

Top 5 Cloud Forensics Challenges
 The chief concern for any cloud forensics investigator is the preservation of evidence, especially against tampering by any third parties. This is what allows evidence to be admissible in court.
 In SaaS and PaaS cloud models, customers are dependent on cloud service providers for access to any usage logs, as they do not have access to the physical hardware (let alone control over it).
 In some instances, cloud service providers have been known to hide logs from customers or hold policies that state logs cannot be collected.
 This is a strange business practice, given how concerned most consumers are with control over their data, privacy, and anonymity online, but it is an obstacle faced by consumers nonetheless.
 It is because of this that maintaining a clear chain of custody in a cloud infrastructure is extremely difficult. In traditional forensics, investigators would have complete control of the evidence concerned.
 In cloud forensics, the investigators may not have full control over who the cloud service provider allows to collect evidence.
 If the person(s) allowed aren't properly trained, the chain of custody or evidence may be inadmissible in court.
 This could lead to a company's or individual's entire case being thrown out, even if they were an entirely innocent victim of a damaging cloud-based crime.
 As cloud servers are often located in multiple different countries, the data required by forensic investigators can be as well.
 This immediately presents the investigators with the obstacle of legal jurisdiction.
 Cloud services can also be reluctant to help you when it comes to conducting an investigation.
 After all, what may be an issue for you might not be an issue at all for them, and your investigation could further cost them time and money.
Cloud Forensics Tool Capabilities

4 Capabilities Required for Cloud Forensics Tools:
 Forensic data collection—Tools must be able to identify,
label, record, and acquire data from the cloud.
 Elastic, static, and live forensics—To meet the elastic
nature of clouds, tools must be able to expand and contract
their data storage capabilities as the demand for services
changes.
 Evidence segregation—Clouds are set up for multi-tenancy, meaning many different unrelated businesses and users share the same applications and storage space. So forensics tools must be able to separate each customer’s data.
 Investigations in virtualized environments—Because cloud
operations typically run in a virtual environment, forensics
tools should have the capability to examine virtual systems.
Although most cloud architecture is composed of virtual machines, the actual cloud is much more complex. The failover capability is necessary in case a VM fails, and there are virtualized switches and routers along with multi-tenant and multi-cloud environments.

Cloud Forensics Tools

 In the early days of the cloud, very few tools designed for cloud forensics were available, but many digital, network, and e-discovery tools were used to handle collecting and analyzing data from the cloud.
 Some vendors with integrated tools that can be applied to
cloud forensics include the following:
 Guidance Software EnCase eDiscovery and its incident
response and EnCase Cybersecurity tools
 AccessData Digital Forensics Incident Response services
and AD eDiscovery can collect cloud data from Office 365,
SharePoint, and OneDrive for Business
 Specific forensics tools for the cloud are FROST for
OpenStack IaaS platforms, F-Response’s cloud server utility,
and Magnet AXIOM’s Cloud module.