Enterprise, IoT & Cloud Security Fundamentals

An enterprise is neither responsible nor in a position to update the anti-virus signatures on the external system or make sure the end system is patched; the level of trust should be none, with the highest level of monitoring and protection implemented.

Application Owner:
A third party has access to a system on the internal network and the data it processes.
There must be a level of trust.
The enterprise more than likely signed a business contract to enable this relationship; with a contract in place, there are legal protections provided for the enterprise.

System Owner:
Similar to a business partner; however, the contractor may seem more like an employee.
They reside on-site and perform the job functions of a full-time staff member.
The more access granted, the more security mechanisms must be in place to reduce the risk of elevated privileges.

Data Owner:
Has a significant level of access to the enterprise data.
As an internal employee, the trust level is the most trusted.
With this access level, there is great responsibility, not only for the data owner but also for the enterprise.
If the data is decided to have little value, then the security mechanisms can be reduced.

Automation scripts and applications:
Unique, as no human interaction is involved; many times the permissions are incorrectly configured and allow scripts the ability to launch interactive logons, with shell access equivalent to a standard user.
If authentication is required, the credentials are sometimes embedded in the script.
These factors contribute to the trust level of the script: automation scripts can be trusted, but not like an internal user.

Defining Policies and Standards
The policies that will guide secure access and use of the enterprise data.
The standards that ensure a consistent application of policy.

BYOD Initiative
Bring your own laptop, cell phone, and tablet are a few of the new initiatives.
This model is being used by many enterprises to reduce their IT budgets.
Data access typically occurs through systems owned by the enterprise.

BYOD: Mobile Devices:
Most mobile devices are cellular smartphones or tablets.
Commonly implemented security measures include using a Mobile Device Management (MDM) solution.

BYOD: Personal Computers:
Some enterprises are leveraging virtualization in a "trust no one" model where the only way to access anything is through a virtual desktop environment.
Other (generally smaller) organizations are allowing employees to bring their own PCs to access enterprise assets, with no virtualization, balancing access with risk.
Limit the access to all the data that has been assessed at a risk level of high and above, or to a level the enterprise's risk tolerance will allow.

Security as a Process
Security is a process that requires the integration of security into business processes to ensure enterprise risk is minimized to an acceptable level.

Risk Analysis
Risk analysis is the process of assessing the components of risk (threats, impact, and probability) as they relate to an asset, in our case enterprise data.
A simple risk analysis output may be the decision to spend capital to protect an asset, based on the value of the asset and the scope of impact if the risk is not mitigated.
It is the method to properly implement security architecture for enterprise initiatives.

Threat Assessment
A threat is anything that can act negatively towards the enterprise assets.
It may be a person, a virus, malware, or a natural disaster.
Once a threat is defined, the attributes of threats must be identified and documented.
The documentation of threats should include the type of threat, identified threat groupings, motivations if any, and methods of action.

Impact Assessment
Impact is the outcome of threats acting against the enterprise.
Types of impacts: immediate and residual.
Immediate impacts are rather easy to determine.
Residual impacts are longer term and often known later.
Impact analysis needs to be thorough and complete.

Probability Assessment
Probability is the likelihood of the risk maturing.
Probability data is as difficult, if not more difficult, to find than threat data.
Probability and impact are equally important to decide whether (or not) to handle a threat.
Assessing Risk
There are two methods to analyze and present risk: qualitative and quantitative.
Qualitative risk analysis provides a perspective of risk in levels, with labels such as Critical, High, Medium, and Low.
The enterprise must still define what each level means in a general financial perspective.
A quantitative risk analysis may also use descriptive labels like the qualitative method, but there is more financial and mathematical basis involved in a quantitative analysis.
Quantitative risk analysis is an in-depth assessment of what the monetary loss would be to the enterprise if the identified risk were realized.
Enterprises with a mature risk office will undertake this type of analysis to drive priority budget items or find areas to increase insurance, effectively transferring business risk.
The cost to mitigate should be less than the loss expectancy over a determined period of time. This is a simple return on investment (ROI) calculation.
Annual loss expectancy (ALE): The ALE is the calculation of what the financial loss would be to the enterprise if the threat event were to occur over a single-year period.
Cost of protection (COP): The COP is the capital expense associated with the purchase or implementation of a security mechanism to mitigate or reduce the risk scenario.
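To make the ROI comparison concrete, here is a small sketch. It assumes the common decomposition ALE = SLE x ARO (single loss expectancy times annualized rate of occurrence), which these notes do not define; all figures are illustrative.

```python
# Hedged sketch of the ALE/COP comparison described above.
# Assumes ALE = SLE x ARO, a standard decomposition not spelled out
# in the notes; the numbers are invented for illustration.

def annual_loss_expectancy(sle: float, aro: float) -> float:
    """ALE: expected financial loss per year if the threat occurs."""
    return sle * aro

def worth_mitigating(cop: float, ale: float, years: int) -> bool:
    """Simple ROI test: mitigate when the cost of protection is less
    than the loss expectancy over the evaluation period."""
    return cop < ale * years

ale = annual_loss_expectancy(sle=50_000, aro=0.4)      # $20,000/year
print(worth_mitigating(cop=45_000, ale=ale, years=3))  # True: 45k < 60k
```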
Security Policies and Standards
Policy versus standard:
Policy dictates what must be done, whereas a standard states how it gets done. A policy's intent is to address behaviors and state principles for IT interaction with the enterprise.
Standards focus on configuration and implementation based on what is outlined in policy.
Role of tools: Tools need to be implemented to measure compliance and provide enforcement of policies and standards.
A typical set of security policies includes:
Information security policy
Acceptable use policy
Technology use policy
Remote access policy
Data classification policy
Data handling policy
Data retention policy
Data destruction policy

Enterprise Policies
Information Security Policy:
This policy outlines the organization's approach to safeguarding its information assets. It includes directives on protecting data from unauthorized access, ensuring the integrity of data, and maintaining the availability of information systems.
Example: The information security policy may include requirements for regular password updates, encryption of sensitive data, and guidelines for reporting security incidents.

Acceptable Use Policy:
An acceptable use policy defines the acceptable ways in which employees may use company resources, including computers, networks, and the internet. It sets guidelines for responsible use and outlines consequences for violating those guidelines.
Example: The policy might prohibit employees from accessing social media sites during work hours or downloading unauthorized software onto company computers.

Technology Use Policy:
This policy addresses the technologies the organization provides and how employees should use these technologies securely and responsibly.
Example: The technology use policy may require employees to use company-provided email accounts for business communication and prohibit the use of personal devices for work-related tasks.

Remote Access Policy:
A remote access policy defines the requirements and guidelines for accessing the organization's network and resources from outside the corporate network, such as through VPNs or remote desktop services.
Example: The policy might mandate the use of multi-factor authentication for remote access and specify which types of devices are allowed to connect remotely.

Data Classification Policy:
This policy categorizes data based on its sensitivity and importance to the organization. It typically includes guidelines for handling, storing, and transmitting data according to its classification level.
Example: Data may be classified as "public," "internal use only," "confidential," or "highly confidential," with corresponding restrictions on access and encryption requirements.

Data Handling Policy:
A data handling policy outlines procedures for accessing, processing, storing, and sharing data securely. It includes guidelines for protecting data throughout its lifecycle, from creation to disposal.
Example: The policy may require employees to use encryption when transmitting sensitive data and specify which employees have access to certain types of information.

Data Retention Policy:
This policy establishes guidelines for how long different types of data should be retained and when it should be securely disposed of. It ensures compliance with legal and regulatory requirements while minimizing the risk of retaining unnecessary data.
Example: The policy may dictate that customer transaction records must be retained for seven years before they can be securely deleted.

Data Destruction Policy:
A data destruction policy outlines procedures for securely and permanently disposing of data when it is no longer needed. It typically includes methods for data sanitization to prevent unauthorized recovery.
Example: The policy may require the use of software-based data wiping tools or physical destruction (e.g., shredding) of storage devices before they are disposed of or recycled.

Enterprise Standards
A typical set of security standards includes:
Wireless Network Security Standard
Enterprise Monitoring Standard
Enterprise Encryption Standard
System Hardening Standard
Wireless Network Security Standard:
This standard includes measures to protect against unauthorized access, data interception, and network disruptions.
Example measures may include the use of strong encryption protocols such as WPA2 or WPA3, implementation of secure authentication methods like EAP-TLS, regular monitoring of wireless network traffic for anomalies, and separation of guest and internal networks.

Enterprise Monitoring Standard:
The Enterprise Monitoring Standard defines the procedures and tools used for monitoring the organization's IT infrastructure and systems. It ensures that necessary monitoring is in place to detect and respond to security incidents, performance issues, and compliance violations.
Examples could include the deployment of network intrusion detection systems (NIDS), log monitoring solutions, security information and event management (SIEM) platforms, and regular review of monitoring data for signs of suspicious activity.

Enterprise Encryption Standard:
This standard establishes guidelines for implementing encryption across the organization's data, communications, and storage systems to protect sensitive information from unauthorized access or disclosure.
It may specify the types of data that require encryption (e.g., personally identifiable information, financial data), encryption algorithms and key lengths to be used, and procedures for key management and distribution.

System Hardening Standard:
System hardening involves configuring IT systems and devices to reduce their attack surface and minimize security vulnerabilities. The System Hardening Standard provides guidelines for securely configuring operating systems, applications, and network devices.
Example practices may include disabling unnecessary services and protocols, applying security patches and updates regularly, implementing strong password policies, enabling firewalls, and using host-based intrusion detection/prevention systems (HIDS/HIPS) where applicable.

These standards collectively help establish a robust security posture for an organization by addressing different aspects of network security, monitoring, encryption, and system hardening. They serve as a framework for implementing security controls and best practices to protect against various threats and risks.

Defence In Depth
When developing an enterprise security strategy, a layered approach is the best method to ensure detection and mitigation of attacks at each tier of the network infrastructure.
Defence in depth is a military strategy that seeks to delay rather than prevent the advance of an attacker, buying time and causing additional casualties by yielding space. Rather than defeating an attacker with a single, strong defensive line, defence in depth relies on the tendency of an attack to lose momentum over time or as it covers a larger area.

Next Generation Firewalls
Standard firewalls simply check for the policy allowing the source IP, destination IP, and TCP/UDP port, without further deep packet analysis.
Next Generation Firewalls (NGFW) perform deeper packet analysis to mitigate malicious traffic masquerading as legitimate.
An NGFW can inspect traffic for data, threats, and web traffic.
Single-pass architecture (SP3) integrates multiple threat prevention disciplines (IPS, anti-malware, URL filtering, etc.) into a single stream-based engine with a uniform signature format.
This allows traffic to be fully analyzed in a single pass without the incremental performance degradation seen in other multi-function gateways.
Advantages:
The most significant benefit of the NGFW is awareness, due to deep packet inspection and analysis.
Reduced DMZ complexity: with next generation firewalls, new technologies become a part of the firewall tier, including intrusion prevention, user authorization, application awareness, and advanced malware mitigation.
Disadvantages:
This shift in firewall capabilities may add confusion to the role the appliance plays in the overall network protection.
In comparison to web application and database firewalls, while the next generation firewall provides some coverage across these areas today, the available platforms do not have the advanced capabilities of purposefully designed web application firewalls or database firewalls.
An NGFW is capable of basic detection and mitigation of common web application attacks, but lacks the more in-depth coverage provided by web application firewalls with database counterparts.
Thus, implementing an NGFW in addition to web application and database firewalls provides the most comprehensive coverage for a network.

NGFW: Application Awareness
Traditional firewalls only look at the source and destination IP addresses and the TCP or UDP port to make a decision to block or permit a packet.
An NGFW is able to perform deep packet inspection to also decode and inspect the application data in network communication.
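To illustrate the difference, here is a minimal sketch of the 5-tuple check a standard firewall performs. Rule fields and the sample packet are hypothetical; note that the payload is never examined, which is precisely the gap deep packet inspection closes.

```python
# Minimal sketch of a standard firewall's 5-tuple policy check.
# A real NGFW would additionally decode and inspect the payload.

from dataclasses import dataclass

@dataclass
class Rule:
    src: str        # source IP, or "*" wildcard
    dst: str        # destination IP
    proto: str      # "tcp" or "udp"
    port: int       # destination port
    action: str     # "allow" or "deny"

def match(rules: list[Rule], src: str, dst: str, proto: str, port: int) -> str:
    for r in rules:
        if (r.src in ("*", src) and r.dst in ("*", dst)
                and r.proto == proto and r.port == port):
            return r.action
    return "deny"   # default-deny policy

rules = [Rule("*", "10.0.0.5", "tcp", 443, "allow")]
print(match(rules, "198.51.100.7", "10.0.0.5", "tcp", 443))  # allow
# Traffic matching this rule is allowed even if the payload is
# malicious: the blind spot NGFW deep packet inspection addresses.
```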
NGFW: Intrusion Prevention
Intrusion prevention coverage is normally required for every connection to the enterprise network.
With the average cost of an IPS being over $40,000, this adds up quickly in addition to the support and maintenance costs.
An NGFW simplifies management of IT security and the skillsets required to operationally support the solution.
One less appliance in the DMZ also increases the performance.

NGFW: Malware Mitigation
The newest addition to the features that NGFWs are offering is advanced malware protection, in the form of botnet identification along with malware analysis in the cloud.
This is performed by a solution built into the firewall, where the malware is examined in the cloud, and protection is developed and mitigation implemented by the manufacturer.
Some tools are appliance-based: the decoding and analysis happens on the box. Other vendors provide the service in the cloud.
Several manufacturers in the IDS/IPS and NGFW technology areas have made significant progress in providing APT detection and mitigation, both on the box and in the cloud.

IDS/IPS
Intrusion detection and prevention technology has remained a mainstay at the network perimeter.
Intrusion detection is a method for detecting an attack but taking no action.
IDS still has a significant implementation in the internal network server segments to passively observe the behaviors of internal network users; it has all the detection logic of intrusion prevention but without the ability to actively mitigate a threat.
Intrusion prevention is similar to intrusion detection, but has the capability to disrupt and mitigate malicious traffic by blocking and other methods.
A defense-in-depth strategy is best implemented by including IDS/IPS as an essential network protection mechanism.

DNS Resolution
DNS resolution can make for easy exploitation if there is no control on where the mapping information is obtained.
Hosts are pointed to maliciously controlled Internet servers by manipulating DNS information.
The method also relies on compromised or specifically built DNS servers on the Internet, allowing malware writers to make up their own unique, and sometimes inconspicuous, domain names.
Securing Email
With the increased growth and acceptance of cloud-based services, e-mail is amongst the first to be leveraged.
Some enterprises have already moved their e-mail implementation to the cloud.
Advantages:
Enables lower cost and as-a-service implementation.
Disadvantages:
Enterprises have lower control over email security.
Technically, a debatable solution if a web-based email solution is used.

File Transfer Service

Securing Websites
Internet-accessible websites are the most targeted asset on the Internet due to common web application security issues, such as SQL injection.
There are several approaches to securing websites, but it is truly a layered security approach requiring:
Secure coding
Firewalls
IPS
Secure Coding: Utilizing a secure software development lifecycle (SSDLC) is the best method to ensure that secure coding practices are being followed:
A framework for how the coding process is to be completed, with testing and validation of the code.
The process is iterative for each new instance of code or modified portions of code.
Vulnerabilities identified should be documented and tracked through remediation within a centralized vulnerability or defect management solution.
NGFW: NGFW can be leveraged to protect Internet-facing enterprise websites and applications.
NGFW can also be used for inspecting and mitigating all illegitimate traffic, such as denial of service attacks, before it reaches the web servers.
IPS:
Intrusion prevention may also be implemented at the network perimeter to mitigate known attack patterns for web applications.
IPS can provide excellent denial of service protection and block exploit callbacks.
Web Application Firewalls:
Designed to specifically mitigate attacks against web applications through pattern and behavioral analysis.
Advanced web application firewalls use another component at the database tier of the web applications.
Benefits include:
The ability to determine if a detected threat warrants further investigation, i.e. whether the threat was able to interact with the database or not.
Attacks that do get past the first layer of the web application firewall can be mitigated at the database tier of the network architecture.
Enforcing security controls for database access initiated not only by the web application but also by database administrators.

Network Segmentation
Before any network segmentation can occur, critical data, processes, applications, and systems must be identified.
Network segmentation using a firewall is the simplest network-based security control.
Alongside it, highly recommended security monitoring tools such as Security Information and Event Management (SIEM) and File Integrity Monitoring (FIM) should be implemented to ensure that, in the event of an attack, there is monitoring for early detection and timely incident response.

Securing the Systems
Processes and methods covered:
System classification
File integrity monitoring (FIM)
Application whitelisting
Host-based intrusion prevention system (HIPS)
Host firewalls
System protection using anti-virus
User account management

System Classification
When securing the enterprise network, network segmentation plays a key role:
It helps place systems of high value and criticality in segmented areas of the network.
To identify these systems, it is necessary to understand the important business processes and applications. As with any classification model, there should be tiers based on criticality.
System labels applied will serve as an input to the overall security architecture.

System Management
System patching may be based on:
Criticality of the system,
The severity of the vulnerability, or
Impact of an unpatched software package.
System classification plays a significant role in the patching cycle of systems and must be integrated in the patch and vulnerability management processes.
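One way to make classification-driven patching concrete is a simple scoring sketch. The weights and patch windows below are illustrative assumptions, not taken from these notes or any standard.

```python
# Hedged sketch of patch prioritization combining system classification
# with vulnerability severity (e.g. a CVSS base score). Higher combined
# risk leads to a shorter patch window; all constants are invented.

CLASS_WEIGHT = {"critical": 3.0, "high": 2.0, "medium": 1.5, "low": 1.0}

def patch_window_days(cvss_score: float, system_class: str) -> int:
    """Map (severity x classification weight) to a patch deadline in days."""
    risk = cvss_score * CLASS_WEIGHT[system_class]
    if risk >= 25:
        return 3
    if risk >= 15:
        return 14
    return 30

print(patch_window_days(9.8, "critical"))  # 3-day window
print(patch_window_days(5.0, "low"))       # 30-day window
```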
File Integrity Monitoring
FIM is one of the methods used to detect changes to a known filesystem's files and, in the case of Windows, the registry.
To detect these changes, FIM tools create a hash database of the known good versions of files in each filesystem location.
The tool can then periodically or in real time scan the filesystem, looking for any changes to the installation, including known files and directories.
Manual mode FIM:
Advantages:
Least taxing on the system, because the scans only run when the console initiates the scan, either ad hoc or on a schedule.
IT knows when the system may have higher memory and processor utilization, and ideally it will not affect business operations.
Disadvantages:
A caveat to this solution is that changes can go undetected for longer periods of time, depending on how often scans are run on schedule.
Real-time FIM:
Advantages:
All add, delete, and modification actions are detected in real time, allowing for an almost immediate ability to review and remediate.
Disadvantages:
The constant running of the tool may be taxing to a system that is loaded with several agents for various purposes.
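A minimal sketch of the hash-database approach described above, assuming on-demand (manual-mode) scans; the paths are placeholders, and real products also watch the Windows registry and support real-time detection.

```python
# Sketch of manual-mode FIM: build a baseline of SHA-256 hashes for
# known-good files, then rescan and report added/modified/removed files.

import hashlib
import json
import os

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(root: str) -> dict[str, str]:
    """Hash database of known-good files under a directory tree."""
    return {os.path.join(d, n): sha256_of(os.path.join(d, n))
            for d, _, names in os.walk(root) for n in names}

def scan(root: str, baseline: dict[str, str]) -> list[str]:
    """Report files that were added, modified, or removed."""
    current = build_baseline(root)
    changed = [p for p, h in current.items() if baseline.get(p) != h]
    removed = [p for p in baseline if p not in current]
    return changed + removed

baseline = build_baseline("/etc")          # baseline taken at a known-good time
with open("baseline.json", "w") as f:
    json.dump(baseline, f)                 # persist for later scheduled scans
print(scan("/etc", baseline))              # [] until something changes
```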
Application Whitelisting
A method to control what applications have permission to run on a system.
If malicious software is installed on the system, it will not be able to execute.
The tool can also prevent unapproved application installs:
If the application is not preapproved, the installation can be blocked.
If the installation is successful, the tool can block the application from running.
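A hedged sketch of hash-based whitelisting enforcement: compute the binary's hash and refuse to launch anything that is not pre-approved. The allowlist digest and binary path are placeholders.

```python
# Sketch of application whitelisting: execution is permitted only when
# the binary's SHA-256 hash appears in the pre-approved set.

import hashlib
import subprocess
import sys

ALLOWED_SHA256 = {
    # Placeholder entry; populate with hashes of approved binaries.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def file_hash(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def run_if_whitelisted(path: str) -> None:
    if file_hash(path) not in ALLOWED_SHA256:
        sys.exit(f"blocked: {path} is not on the application whitelist")
    subprocess.run([path], check=True)  # only reached for approved binaries

run_if_whitelisted("/usr/local/bin/approved-tool")  # hypothetical binary
```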
HIPS
A host-based intrusion prevention system (HIPS) is very similar in concept to network intrusion prevention.
HIPS leverages being installed on the system it is protecting, so it has additional awareness of running applications and services.
Host-based intrusion detection uses the same types of detection methods as the network-based counterpart.
The primary method is signature-based detection, as this is the easiest method to implement on a host without taxing the operating system with true behavioral analysis.

Host Firewall
A host firewall can be a great method to filter traffic to and from the system.
The firewall should be considered as another layer of defense from intrusion attempts against applications, services, and the host itself.
The solution is similar to application whitelisting in regards to the requirement of knowing what applications are running and how they must communicate.
Some applications open random ports or have extremely large ranges of ports. Some host firewalls are able to allow dynamic port use, thus alleviating the need to go through the exercise of analyzing the application.

Anti-virus
Anti-virus is considered a necessary security mechanism for the low-hanging fruit: predictable malware.
Anti-virus primarily uses two methods to detect malware:
Signature: This method looks for known patterns of malware.
Heuristics: In this method the behavior of potential malware is analyzed for malicious actions.
Typically, anti-virus solutions will install an agent on the endpoint and run scans continuously, and any new file introduced is scanned immediately.

User Account Management (UAM)
Accounts on a system represent a level of access that may be the door in for malicious activity.
Review of system accounts should be in accordance with the system classification and other security policies.
User Roles and Permissions:
There is a need to properly define system users and roles to perform required tasks, both for server systems and end-user systems.
UAM Account Auditing:
To detect rogue accounts on systems, the enterprise should perform user account auditing across all systems on a regular basis.
Accounts should be disabled or deleted at the time of termination as part of a formal process.
Policy Enforcement:
Enforcement may come in the form of an implemented tool, but it may also come from the monitoring of user activity on systems.

Data Classification Process
Involves two steps: identification and classification of enterprise data.
Classification is done based on:
Importance, and
Impact potential.
There are many data types that exist in order for the business to operationally function.
Data can be located in multiple places both internal and external to the enterprise network, including in employer-owned and employee-owned assets.
Data can be at rest, in use, or in transit.
Classification is the act of assigning a label to identified data types that indicates required protection mechanisms. It is driven by business risk and data value.

Data Loss Prevention
Data Loss Prevention (DLP) is a tool that can enforce protection of data that has been classified.
The primary purpose of DLP is to protect against the unauthorized exfiltration of enterprise data.
DLP solutions can:
Help find data in various locations within the enterprise,
Enforce encryption, in some cases,
Block insecure transmission, and
Block unauthorized copying and storing of data, based upon data classification.

Data in Storage
Data can be stored in network shares, databases, document repositories, online storage, and portable storage devices.
Most DLP solutions have the ability to scan data stores and also provide an agent that can be deployed on end systems to monitor and prevent unauthorized actions for classified enterprise data.
Using DLP, a discovery scan can be initiated to identify data in locations.
It can also be used in an ongoing scheduled scan to continuously monitor the data stores for data that should or should not reside in the data location.

Data in Use
Data in use is data that is actively processed within an application, process, memory, or other location, temporarily for the duration of a function or transaction.
This is enterprise data not stored long term, only long enough to perform a function or transaction.
Data in use can be monitored by an agent installed on the end system to permit only certain uses of the data and deny actions such as storing the data locally or sending the data via e-mail or another communication method.
Implementation on employee-owned devices introduces privacy issues, because any personal transactions such as online banking, medical record lookup, and so on may be detected and details of the transaction stored in the DLP database for review.

Data in Transit
Data in transit is data that is being moved from one system to another, either locally or remotely, such as via file transfer systems, e-mail, and web applications.
Various DLP solutions have accounted for this fact and provide solutions capable of intercepting and decrypting communications to look for classified data.
The focus of DLP for data in transit is specifically data leaving the enterprise through egress connections.
DLP Network: The simplest solution to implement in an enterprise environment, and also the quickest method to determine what data is leaving the network in an insecure manner.
DLP Email and Web: Email and Internet access are the most commonly used enterprise services. These solutions focus more on loss of enterprise confidential data via emails or the web.
DLP Discover:
A tool that can scan network shares, document repositories, databases, and other data at rest.
Requires an account with permissions to be configured, to allow the scans to open the data stores and inspect for policy matches.
DLP Endpoint:
DLP Endpoint is an agent-based technology that must be installed on every end point. It is closest to the end user, where the human interaction is the highest and, in theory, where the greatest risk is introduced to enterprise data.
It requires a significant implementation of agents that have to be installed and managed, with the output operationalized for meaningful and actionable reporting.
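As a sketch of what a discovery scan does, the following walks a hypothetical share mount and flags files matching classified-data patterns. The regexes are deliberately simplistic compared with production DLP content inspection.

```python
# Illustrative DLP discovery scan: walk a data store and flag files
# whose contents match patterns for classified data.

import os
import re

PATTERNS = {
    "ssn":  re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # US SSN shape
    "card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),       # 16-digit card shape
}

def discover(root: str) -> list[tuple[str, str]]:
    hits = []
    for d, _, names in os.walk(root):
        for n in names:
            path = os.path.join(d, n)
            try:
                text = open(path, errors="ignore").read()
            except OSError:
                continue  # unreadable file: skip, a real tool would log it
            for label, rx in PATTERNS.items():
                if rx.search(text):
                    hits.append((path, label))
    return hits

print(discover("/srv/shares"))  # hypothetical network-share mount point
```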
Data Protection Methods
Data protection uses different methods:
Encryption and Hashing
Tokenization
Data Masking
Authorization

Encryption and Hashing
Both encryption and hashing are typically what is thought of when data protection is discussed, whether in storage, in transit, or in use by applications.
They are mostly for data in storage or in transit.
Encryption is the method of mathematically generating a cipher text version of clear text data to render it unrecognizable.
There are two general types of encryption: symmetric and asymmetric.
Hashing is simpler, but only supports data integrity.
Encryption can happen at the location of storage, prior to storage, or during the process of storing.
Online encryption is in effect while data is accessible.
Offline encryption is when data is not directly accessible, such as on backup tapes, turned-off systems, etc.
Data stored in databases can be encrypted via two methods:
The first method utilizes the built-in encryption capabilities of the database itself to protect the stored data.
This is beneficial when attempting to make encryption invisible to the applications and processes accessing the data.
The second method uses encryption at the application and process layer.
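A small sketch contrasting the two mechanisms: hashing yields a one-way integrity digest, while symmetric encryption is reversible with the key. It uses the third-party cryptography package's Fernet construction; the data and key handling are illustrative only, not a key management design.

```python
# Hashing (integrity only) versus symmetric encryption (confidentiality).
# Requires the third-party "cryptography" package (pip install cryptography).

import hashlib
from cryptography.fernet import Fernet

data = b"account=12345;balance=9000"

# Hashing: one-way digest; detects modification but cannot recover the data.
digest = hashlib.sha256(data).hexdigest()

# Symmetric encryption: the same key encrypts and decrypts.
key = Fernet.generate_key()           # in practice, protect and manage this key
cipher = Fernet(key)
token = cipher.encrypt(data)          # unrecognizable ciphertext
assert cipher.decrypt(token) == data  # reversible only with the key

print(digest)
print(token[:16], b"...")
```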
Data Masking: While this solution does provide some protection, it is not at the same level as tokenization, encryption, or hashing.

Authorization
Granting permissions based on who or what is authorized:
An important part of the enterprise data protection and security program.
This facet of data security highlights the defense-in-depth mantra of information security.

IoT Security: Involved Domains
Device Security
Securing the IoT device.
Challenges: limited system resources.
Network Security
Securing the network connecting IoT devices to backend systems.
Challenges: wider range of devices + communication protocols + standards.
Cloud/Back-end Systems Security
Securing the backend applications from attacks.
Firewalls, security gateways, IDS/IPS.
Mutual Authentication
Device(s) → User(s)
Passwords, PINs, multi-factor, digital certificates.
Encryption
Data integrity for data at rest and in transit.
Strong key management processes.

Network Layer:
Man-in-the-middle attack: the attacker makes two parties believe they are talking to each other over a private connection, when in fact the attacker controls the entire conversation.
Fake network message: attackers could create fake signaling to isolate/misoperate the devices from the IoT.
Service Layer:
Service discovery: it finds infrastructure that can provide the required service and information in an effective way.
Service composition: it enables the combination and interaction among the connected things. Discovery exploits the relationships of things to find the desired service, and service composition schedules or recreates more suitable services to obtain the most reliable ones.
Trustworthiness management: it aims to understand the trusted devices and information provided by other services.
Service APIs: they provide the interactions between services required by users.
Interface Layer:
Remote safe configuration, software downloading and updating, security patches, administrator authentication, unified security platform, etc.
Security requirements on communications between layers include integrity and confidentiality for transmission between layers, cross-layer authentication and authorization, sensitive information isolation, etc.

Devices must also protect stored information like personal data, cryptographic keys, or credentials.
Lastly, devices must support software updates to patch vulnerabilities and exploits.

Network Layer Security
This layer of the IoT framework represents the connectivity and messaging between things and cloud services.
Communications in the IoT are usually over a combination of private and public networks, so securing the traffic is obviously important.
The primary difficulty arises when you consider the challenges of cryptography on devices with constrained resources.
An Arduino Uno takes up to 3 min to encrypt a test payload when using RSA 1024-bit keys.
However, an elliptic curve digital signature algorithm with a comparable RSA key length can encrypt the same payload in 0.3 s.
This indicates that device manufacturers cannot use resource constraints as an excuse to avoid security in their products.
Another security consideration for the network layer is that many IoT devices communicate over protocols other than WiFi.
This means the IoT gateway is responsible for maintaining confidentiality, integrity, and availability while translating between different wireless protocols.
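The RSA-versus-ECC gap above can be sanity-checked on ordinary hardware. The rough sketch below uses the third-party cryptography package; timings on a PC will be far smaller than on a microcontroller, but the relative gap is the point. Key sizes and the payload are illustrative.

```python
# Rough timing comparison of RSA vs ECDSA signing.
# Requires the third-party "cryptography" package.

import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa

payload = b"sensor-reading:23.5C"

rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ec_key = ec.generate_private_key(ec.SECP256R1())  # ~RSA-3072 strength

t0 = time.perf_counter()
rsa_key.sign(payload, padding.PKCS1v15(), hashes.SHA256())
t1 = time.perf_counter()
ec_key.sign(payload, ec.ECDSA(hashes.SHA256()))
t2 = time.perf_counter()

print(f"RSA sign:   {t1 - t0:.4f}s")
print(f"ECDSA sign: {t2 - t1:.4f}s")  # typically much faster than RSA
```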
Service Layer Security
This layer of the framework represents the IoT management
system and is responsible for onboarding devices and users,
applying policies and rules, and orchestrating automation
across devices.
Access control measures to manage user and device identity and the actions they are authorized to take are critical at this layer.
To achieve nonrepudiation, it is also important to maintain an audit trail of changes made by each user and device, so that it is impossible to refute actions taken in the system.
Big Data Challenges:
Providing clear data use notification so that customers have visibility and fine-grained control of the data sent to the cloud service.
Keeping customer data stored in the cloud service segregated and/or encrypted with customer-provided keys; when analyzing data in aggregate across customers, the data should be anonymized.
Firmware vulnerabilities:
For the majority of IoT devices, the firmware is essentially the operating system, or the software underneath the OS.
Most IoT firmware does not have as many security protections in place.
Often the vulnerabilities in the firmware cannot be patched.
Credential-based attacks:
IoT devices come with default administrator usernames and passwords.
These are well known, or simple to guess, and often not very secure.
In some cases, these credentials cannot be reset.
Often, IoT device attacks occur simply because an attacker guesses the right credentials.
On-path attacks (or man-in-the-middle attacks):
IoT devices are particularly vulnerable to such attacks because many of them do not encrypt their communications by default.
On-path attackers position themselves between two parties that trust each other and intercept communications between the two.
MITM attacks can also happen by impersonation, where a malicious node sets up two sessions (with device and server), impersonating and relaying messages between them.
Physical hardware-based attacks:
Many IoT devices, like IoT security cameras, stoplights, and fire alarms, are placed in more or less permanent positions.
An attacker having physical access to an IoT device's hardware can steal its data or take over the device.
They could do this by accessing programmatic interfaces left on the circuit board, such as JTAG and RS232 serial connectors.
Some microcontrollers may have disabled these interfaces, but could still allow direct reads from the attached memory chips if the attacker solders on new connection pins.
This approach would affect only one device at a time, but a physical attack could have a larger effect if the attacker gains information that enables them to compromise additional devices on the network.

Device Security
Software and firmware updates:
IoT devices need to be updated for vulnerability patches or software updates.
Credential security:
IoT device admin credentials should be updated if possible.
It is best to avoid reusing credentials across multiple devices and applications; each device should have a unique password.
Device authentication:
IoT devices connect to each other, to servers, and to various other networked devices. Every connected device needs to be authenticated to ensure they do not accept inputs or requests from unauthorized parties.
Encryption:
Prevents on-path attacks.
Encryption must be combined with authentication to prevent MITM attacks. Otherwise, the attacker could set up separate encrypted connections between one IoT device and another, and neither would be aware that their communications are being intercepted.
Turning off unneeded features:
Most IoT devices come with multiple features, some of which may go unused by the owner.
Even when features are not used, they may keep additional ports open on the device.
The more ports an Internet-connected device leaves open, the greater the attack surface; often attackers simply ping different ports on a device, looking for an opening. Turning off unnecessary device features will close these extra ports.
DNS filtering:
DNS filtering is the process of using the Domain Name System to block malicious websites.
Adding DNS filtering as a security measure to a network with IoT devices prevents those devices from reaching out to places on the Internet they should not (i.e. an attacker's domain).
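A minimal illustration of the DNS filtering idea, with placeholder blocklist entries; real deployments filter at the network's resolver rather than per lookup in application code.

```python
# Sketch of DNS filtering: refuse to resolve domains on a blocklist so
# devices cannot reach known-bad infrastructure.

import socket

BLOCKLIST = {"malware-c2.example", "phishing.example"}  # placeholders

def filtered_resolve(hostname: str) -> str:
    if hostname.lower().rstrip(".") in BLOCKLIST:
        raise PermissionError(f"{hostname} is blocked by DNS filtering")
    return socket.gethostbyname(hostname)  # normal resolution otherwise

print(filtered_resolve("example.com"))
# filtered_resolve("malware-c2.example")  -> raises PermissionError
```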
IoT Security Framework
At the heart of the IoT Security Framework are the following key functions:
Authentication
Authorization
Access Control

Authentication
At the heart of the framework is the authentication layer, used to provide and verify the identity information of an IoT entity.
Device identifiers include RFID, a shared secret, X.509 certificates, the MAC address of the endpoint, or some type of immutable hardware-based root of trust.
Establishing identity through X.509 certificates provides a strong authentication system. However, in the IoT domain, many devices may not have enough memory to store a certificate or may not even have the required CPU power to execute the cryptographic operations of validating X.509 certificates.
There exist opportunities for further research in defining smaller-footprint credential types and less compute-intensive cryptographic constructs and authentication protocols (aka lightweight cryptography).

Authorization
The second layer of this framework is authorization, which controls a device's access (to network services, back-end services, data, etc.).
With authentication and authorization components, a trust relationship is established between IoT devices to exchange appropriate information.
Access Control
Role Based Access Control (RBAC):
Most existing authorization frameworks for computer networks and online services are role based.
First, the identity of the user is established, and then his or her access privileges are determined from the user's role within an organization.
That applies to most existing network authorization systems and protocols (RADIUS, LDAP, IPSec, Kerberos, SSH).
Rule Based Access Control:
An administrator may define rules that govern access to a resource.
Rules may be based on conditions, such as time of day and location.
It can work in conjunction with RBAC.
Attribute Based Access Control (ABAC):
Attributes (e.g. age, location, etc.) are used to allow access.
Users or devices need to prove their attributes.
In ABAC, it is not mandatory to verify the identity of the user to establish his or her access privileges; that the user/device possesses the attributes is sufficient.
Discretionary Access Control (DAC):
Owners or administrators of the protected system, data, or resource set the policies defining who or what is authorized to access the resource.
Not a good method, since these methods are not centralized and are hard to scale.
Capabilities-Based Access Control (CBAC):
CBAC is a security model that grants permissions to users or processes based on the possession of specific capabilities or tokens rather than their identity or attributes. In CBAC, access control decisions are determined by whether a subject (such as a user or a process) possesses the necessary capabilities to perform a particular action on a resource.
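A toy sketch contrasting the RBAC and ABAC decisions described above; the roles, attributes, and door policy are invented for illustration only.

```python
# RBAC: identity -> role -> permitted actions.
# ABAC: possession of attributes is sufficient; identity not required.

ROLE_PERMISSIONS = {
    "operator": {"read_telemetry"},
    "admin": {"read_telemetry", "reboot_device"},
}

def rbac_allows(role: str, action: str) -> bool:
    return action in ROLE_PERMISSIONS.get(role, set())

def abac_allows(attributes: dict, action: str) -> bool:
    # Hypothetical policy: lab badges may open the door during work hours.
    if action == "open_door":
        return (attributes.get("badge_zone") == "lab"
                and attributes.get("hour", 0) in range(8, 18))
    return False

print(rbac_allows("operator", "reboot_device"))                     # False
print(abac_allows({"badge_zone": "lab", "hour": 10}, "open_door"))  # True
```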
ACL-based Systems
ACL = Access Control List: a table that can tell the IoT system all access rights each user/application has to a particular IoT end node.
The most common privileges include the ability to access or control an IoT device.
Challenges with ACL-based systems:
In many architectures, IoT devices operate as "servers", with clients connecting to them to fetch collected data.
Server IP and port information is public knowledge, so this in itself provides no security.
Minimum security is typically implemented using <username, password>, an embodiment of IoT ACL-based device systems.
The approach is not scalable as more users join or are revoked.
The complexity of managing the ACL at the device can become a bottleneck.
A more scalable approach for IoT is to use "capabilities" for enabling "capability-based access".
A capability is essentially a cryptographic key that gives access to some ability (e.g. to communicate with the device).
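To make the "capability as cryptographic key" idea concrete, here is a hedged sketch using an HMAC tag over the ability name; the key provisioning and token format are assumptions for illustration.

```python
# Sketch of capability-based access: the device checks an HMAC-signed
# token naming an ability, instead of maintaining a per-user ACL.

import hashlib
import hmac

DEVICE_KEY = b"per-device-secret"   # assumed provisioned at manufacture

def issue_capability(ability: str) -> str:
    tag = hmac.new(DEVICE_KEY, ability.encode(), hashlib.sha256).hexdigest()
    return f"{ability}:{tag}"

def device_accepts(token: str, requested: str) -> bool:
    ability, _, tag = token.partition(":")
    expected = hmac.new(DEVICE_KEY, ability.encode(), hashlib.sha256).hexdigest()
    # Constant-time compare, and the token must name the requested ability.
    return hmac.compare_digest(tag, expected) and ability == requested

cap = issue_capability("read-temperature")
print(device_accepts(cap, "read-temperature"))  # True
print(device_accepts(cap, "reboot"))            # False: capability too narrow
```

Note how revoking a user does not require touching the device's table; the issuer simply stops handing out capabilities, which is the scalability gain described above.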
Implementation Methods
Lightweight Cryptography
Lightweight cryptography is a cryptographic algorithm or protocol tailored for implementation in constrained environments including RFID tags, sensors, contactless smart cards, healthcare devices, and so on.
Traditional cryptography is designed at the application layer without regard to the limitations of IoT devices, making it difficult to directly apply the existing cryptography primitives to IoT.
Researchers investigated a channel model using the "wiretap channel," in which a transceiver attempts to communicate reliably and securely with a legitimate receiver over a noisy channel, while its messages are being eavesdropped by a passive adversary through another noisy channel.
Information-theoretic secure communication was introduced in 1949 by American mathematician Claude Shannon, one of the founders of classical information theory.
In Shannon's wiretap model, he assumed both the main and eavesdropper's channels to be noiseless.
Wyner revisited this problem with relaxed assumptions, mainly:
The noiseless communication assumption of Shannon was relaxed by assuming a possibly noisy main channel and an eavesdropper channel that is a noisy version of the signal received at the legitimate receiver.
Wyner's results showed that positive secure rates of communication are achievable, under certain conditions of noise or interference in the channels.
Secure communication without the need to share a secret key, or what is now called the key-less security approach, suggested a new paradigm of secure communication protocols.
That is, exploiting properties of the wireless medium (noise, interference, or jamming) to satisfy the secrecy constraints.
The key-less security approach can be used in wireless networks to securely exchange the shared secret key between two communicating nodes, which can be used for all subsequent communications.
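The wiretap result above can be stated compactly. For the Gaussian wiretap channel (a standard special case, not worked out in these notes), the secrecy capacity is

$C_s = \max\left(C_M - C_W,\ 0\right)$

where $C_M$ is the capacity of the main channel to the legitimate receiver and $C_W$ is the capacity of the eavesdropper's channel. A positive secure rate is therefore achievable exactly when the eavesdropper's channel is noisier than the main channel, matching Wyner's conclusion described above.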
Transport Encryption
TLS/SSL: Transport encryption is done using secure transport protocols such as TLS and SSL.
Both TLS and SSL are cryptographic protocols that provide communications security over a network.
TLS uses TCP and therefore does not encounter packet reordering and packet loss issues.
Datagram Transport Layer Security (DTLS):
DTLS is developed based on TLS, providing equivalent security services, such as confidentiality, authentication, and integrity protection.
In DTLS, a handshake mechanism is designed to deal with packet loss, reordering, and retransmission.
DTLS provides three types of authentication: no authentication, server authentication, and server and client authentication.
Mutual TLS (mTLS): Mutual Transport Layer Security (mTLS) is a type of mutual authentication, which is when both sides of a network connection authenticate each other.
TLS is a protocol for verifying the server in a client-server connection; mTLS verifies both connected devices, instead of just one.
mTLS is important for IoT security because it ensures only legitimate devices and servers can send commands or request data.
It also encrypts all communications over the network so that attackers cannot intercept them.
mTLS requires issuing TLS certificates to all authenticated devices and servers.
A TLS certificate contains the device's public key and information about who issued the certificate.
Showing a TLS certificate to initiate a network connection can be compared to a person showing their ID card to prove their identity.
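A hedged sketch of the server side of an mTLS handshake using Python's standard ssl module: setting verify_mode to CERT_REQUIRED makes the server demand and verify a client certificate, so both ends are authenticated. Certificate file names and the port are placeholders for credentials issued by your own CA.

```python
# mTLS server sketch: the server presents its own certificate and also
# requires a valid client certificate signed by the trusted CA.

import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
ctx.verify_mode = ssl.CERT_REQUIRED          # reject clients without a cert
ctx.load_verify_locations(cafile="ca.crt")   # CA that signed device certs

with socket.create_server(("0.0.0.0", 8883)) as sock:
    with ctx.wrap_socket(sock, server_side=True) as tls:
        conn, addr = tls.accept()            # handshake authenticates both ends
        print("device cert subject:", conn.getpeercert().get("subject"))
        conn.close()
```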
Social IoT
Social IoT (Internet of Things) refers to the integration of social networking concepts and features into IoT systems. Essentially, it involves leveraging the capabilities of IoT devices to interact and communicate with each other and with users through social media platforms or other social networking channels.
Trust: Trust refers to the belief or confidence that users have in other users, service providers, or the platform itself within the network. Trust can be built through positive experiences, reliable interactions, and consistent delivery of promises or expectations. Trust influences users' willingness to engage, share information, transact, and collaborate within the network. Factors such as reputation, credibility, security measures, and past experiences contribute to the establishment and maintenance of trust within social or service networks.
More interaction leads to more trust.
Advantages: Higher accuracy; dynamic updating.
Disadvantages: Requires high-volume traffic analysis; impacted by changes in interaction patterns.
Influence: Influence pertains to the ability of users or entities within the network to impact the opinions, behaviors, and decisions of others. In social networks, influence can be measured through metrics such as followers, likes, shares, retweets, and comments. Influential users or entities may have a significant reach and persuasive power, allowing them to shape discussions, trends, and perceptions within the network. Identifying influential individuals or sources can be valuable for targeting marketing campaigns, spreading messages, and driving user engagement and adoption.
Recommendation: Recommendations involve suggesting or endorsing specific content, products, services, or actions to users based on their preferences, interests, or behaviors within the network. Recommendations can be personalized or algorithmically generated, leveraging user data, browsing history, social connections, and collaborative filtering techniques. Effective recommendations can enhance user experience, satisfaction, and retention by providing relevant and timely suggestions that align with users' needs and preferences. Recommendations also contribute to user engagement, discovery, and exploration within the network, fostering a sense of community and trust.
Influence is the tool that triggers trust; recommendation is the method for propagation of influence.
Advantage: Higher accuracy.
Disadvantage: Generates high traffic volume; requires more processing.

Interaction Trust Model Classification
Let's delve into each type of interaction-based trust model:
Graph-Based Interaction Trust Model: In this model, trust relationships are represented as a graph where nodes represent entities (users, devices, services) and edges represent interactions or relationships between them.
Advantages:
Scalability: Graph structures are highly scalable, making them suitable for modeling complex relationships in large-scale systems.
Flexibility: Graph-based models can capture diverse types of interactions and relationships, allowing for a nuanced understanding of trust dynamics.
Network Analysis: Graph-based models facilitate network analysis techniques, enabling the identification of influential nodes, communities, and patterns within the trust network.
Disadvantages:
Complexity: Managing and analyzing large graphs can be computationally intensive and complex.
Interpretability: Understanding trust relationships within a graph may be challenging, especially in networks with many nodes and edges.
Vulnerability to Attacks: Graph-based models may be vulnerable to attacks such as Sybil attacks or edge manipulation, which can undermine trust assessments.

Dynamic Interaction Trust Model: In this model, trust is assessed based on the ongoing interactions between entities, considering factors such as frequency, recency, and quality of interactions.
Advantages:
Real-Time Adaptability: Dynamic models can adapt to changes in behavior and relationships over time, providing more accurate and up-to-date trust assessments.
Resilience: By continuously evaluating interactions, dynamic models can detect and respond to changes in trustworthiness more effectively.
Personalization: Dynamic models can personalize trust assessments based on individual preferences and experiences.
Disadvantages:
Computational Overhead: Constantly updating trust assessments based on real-time interactions can impose computational overhead, especially in systems with high transaction volumes.
Algorithm Complexity: Designing effective algorithms for dynamic trust assessment can be complex, requiring careful consideration of factors such as trust decay rates and weighting of interaction attributes.
Data Requirements: Dynamic models rely on a continuous stream of interaction data, which may not always be readily available or reliable.
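As an illustration of the frequency/recency/quality idea, here is a hedged sketch of a dynamic trust update: an exponentially weighted average with time decay toward a neutral score. The constants are illustrative, not taken from any published SIoT model.

```python
# Toy dynamic-trust update: old trust decays toward neutral (0.5) over
# time, and each new interaction's quality pulls the score up or down.

import time

ALPHA = 0.3        # weight of the newest interaction
HALF_LIFE = 86400  # decay toward neutral with a 1-day half-life (seconds)

def decayed(trust: float, last_update: float, now: float) -> float:
    w = 0.5 ** ((now - last_update) / HALF_LIFE)
    return 0.5 + (trust - 0.5) * w          # drift toward neutral over time

def update_trust(trust: float, last_update: float, outcome: float, now=None) -> float:
    """outcome in [0, 1]: quality of the newest interaction."""
    now = time.time() if now is None else now
    base = decayed(trust, last_update, now)
    return (1 - ALPHA) * base + ALPHA * outcome

t = 0.8                                      # established trust score
t = update_trust(t, last_update=time.time() - 3 * 86400, outcome=0.2)
print(round(t, 3))  # ~0.436: old trust decayed, then a bad interaction
```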
Hybrid Interaction Trust Model: A hybrid model combines multiple approaches to trust assessment, leveraging the strengths of different methods to provide more robust and accurate trust evaluations.
Advantages:
Comprehensive Evaluation: Hybrid models can incorporate a diverse range of trust factors, including reputation, behavior, direct interactions, and contextual information, leading to more comprehensive trust assessments.
Robustness: By combining multiple trust assessment methods, hybrid models can mitigate the limitations of individual approaches and provide more resilient trust evaluations.
Flexibility: Hybrid models can be tailored to specific use cases and system requirements, allowing for greater flexibility in trust assessment.
Disadvantages:
Complexity: Integrating multiple trust assessment methods into a coherent framework can increase model complexity and implementation challenges.
Data Integration: Hybrid models may require integrating data from disparate sources, which can be challenging due to differences in data formats, quality, and reliability.
Algorithm Selection: Choosing the appropriate algorithms and weighting schemes for different trust factors in a hybrid model requires careful consideration and may involve trade-offs between competing objectives.

Ratings Trust Model: In a rating model, trust is assessed based on explicit ratings or feedback provided by users about their experiences with other entities in the system. Users typically rate entities on predefined criteria such as reliability, competence, and integrity.
Advantages:
Transparency: Rating models provide transparent feedback to users, enabling them to make informed decisions about whom to trust.
User Empowerment: Users have direct input into the trust assessment process through their ratings, giving them a sense of control and ownership over their trust decisions.
Accountability: Entities are incentivized to maintain high trust ratings to attract positive feedback from users, fostering accountability and trustworthiness.
Disadvantages:
Bias and Manipulation: Rating systems may be susceptible to bias or manipulation, as users can artificially inflate or deflate ratings for strategic purposes.
Limited Context: Ratings may not capture the full context of interactions or relationships between entities, leading to potentially biased or incomplete trust assessments.
Cold Start Problem: New entities may struggle to establish trust ratings initially, as they lack a sufficient history of interactions to generate meaningful ratings.

Opinion Model: In an opinion model, trust is assessed based on the opinions or recommendations of trusted individuals or sources within a community or network. Users rely on the judgments and experiences of others to inform their trust decisions.
Advantages:
Social Validation: Opinions from trusted sources provide social validation and reassurance to users, helping them navigate complex trust decisions.
Efficiency: Opinion models can accelerate trust assessment by leveraging the collective wisdom of the community, rather than relying solely on individual experiences or interactions.
Expertise Recognition: Users can identify and trust opinion leaders or experts within a domain, enhancing the quality and relevance of trust recommendations.
Disadvantages:
Dependency on Sources: Opinion models rely on the availability and credibility of trusted sources, which may not always be reliable or objective.
Echo Chambers: Opinion models may reinforce existing biases or echo chambers within a community, leading to the amplification of certain opinions and marginalization of others.
Limited Diversity: Opinion models may overlook diverse perspectives and experiences, particularly if they disproportionately rely on a small subset of influential sources.

Cross-Integrated Model: A cross-integrated model combines multiple trust assessment methods, such as ratings, opinions, and other factors, to generate more comprehensive and accurate trust evaluations. This model integrates data from diverse sources to provide a holistic view of trustworthiness.
Advantages:
Comprehensive Evaluation: Cross-integrated models consider a wide range of trust factors, including ratings, opinions, behavioral data, and contextual information, leading to more comprehensive and accurate trust assessments.
Robustness: By combining multiple trust assessment methods, cross-integrated models can mitigate the limitations of individual approaches and provide more resilient trust evaluations.
Flexibility: Cross-integrated models can be customized to specific use cases and system requirements, allowing for greater flexibility in trust assessment.
Disadvantages:
Complexity: Integrating multiple trust assessment methods into a coherent framework can increase model complexity and implementation challenges.
Data Integration: Cross-integrated models may require integrating data from disparate sources, which can be challenging due to differences in data formats, quality, and reliability.
Algorithm Selection: Choosing the appropriate algorithms and weighting schemes for different trust factors in a cross-integrated model requires careful consideration and may involve trade-offs between competing objectives.

Attack Scenarios for SIoT Trust Models
Slandering (or Bad-Mouthing) Attack: In a slandering attack, malicious entities deliberately spread false or negative information about other entities in the system to undermine their reputation and trustworthiness. This type of attack aims to discredit targeted entities and manipulate the trust assessment process.
Example: In an online marketplace, a seller may engage in slandering by posting fake negative reviews about competitors to deter customers from purchasing their products.
Impact: Slandering attacks can erode trust in the system by misleading users and damaging the reputation of targeted entities. They can also disrupt fair competition and undermine the integrity of trust mechanisms.

Sybil Attack: A Sybil attack involves a malicious entity creating multiple fake identities (Sybil nodes) to gain disproportionately high influence or control over a network. These fake identities are used to manipulate trust mechanisms, such as reputation systems or voting processes, by artificially inflating the attacker's perceived trustworthiness.
Example: In a peer-to-peer network, a single malicious user creates multiple fake accounts to control a significant portion of the network's resources or influence the selection of specific peers for interactions.
Impact: Sybil attacks can compromise the security and fairness of decentralized systems by allowing attackers to manipulate trust mechanisms and gain undue advantages. They can also undermine the accuracy and reliability of trust assessments by introducing fake or biased information.

On-Off Attack: An On-Off attack, also known as a flip-flop attack, involves a malicious entity repeatedly alternating between cooperative and non-cooperative behaviors to deceive other entities and manipulate their trust. The attacker switches between trustworthy and untrustworthy states strategically to exploit trust mechanisms or gain unfair advantages.
Example: In a collaborative online platform, a user may intermittently contribute valuable insights and then deliberately provide misleading or harmful information to confuse other participants and manipulate their trust.
Impact: On-Off attacks can disrupt trust relationships and undermine the stability and effectiveness of collaborative systems by creating uncertainty and mistrust among participants. They can also exploit vulnerabilities in trust mechanisms through the unpredictability of the attacker's behavior.

Digital Forensic Process
17
Traditional Forensics Vs IoT Forensics
There are several aspects of difference and similarity between traditional and IoT forensics.
In terms of evidence sources, traditional evidence could be computers, mobile devices, servers or gateways. In IoT forensics, the evidence could be home appliances, cars, tag readers, sensor nodes, medical implants in humans or animals, or other IoT devices.
In terms of jurisdiction and ownership, there are no differences; it could be individuals, groups, companies, governments, etc.
In terms of evidence data types, an IoT data type could be any possible format, including a proprietary format for a particular vendor. In traditional forensics, however, data types are mostly electronic documents or standard file formats.
In terms of networks, the network boundaries are not as clear as in traditional networks, and the boundary lines are increasingly blurred.

IoT Forensics
IoT technology is a combination of many technology zones: the IoT zone, the Network zone and the Cloud zone.
These zones can be the source of IoT digital evidence.
Evidence can be collected from a smart IoT device or a sensor, from an internal network such as a firewall or a router, or from outside networks such as the Cloud or an application.
Based on these zones, IoT forensics covers three aspects: Cloud forensics, network forensics and device-level forensics.
Most IoT devices have the ability to (directly or indirectly) connect through applications to share their resources in the Cloud, with all valuable data stored in the Cloud → Cloud Forensics.
IoT devices use different kinds of networks to send and receive data: home networks, industrial networks, LANs, MANs and WANs. If an incident occurs in IoT devices, all logs from the network devices through which the traffic flows could be potential evidence.
Device-level forensics includes all potential digital evidence that can be collected from IoT devices, such as graphics, audio and video. Videos and graphics from a CCTV camera, or audio from an Amazon Echo, are great examples of digital evidence at the device level.

Challenges in IoT Forensics
Data Location:
Most IoT data is spread across different locations, which are out of the user's control. This data could be in the Cloud, in a third party's location, in a mobile phone or in other devices.
Identifying the location of evidence is one of the biggest challenges an investigator faces in collecting it.
In addition, IoT data might be located in different countries and be mixed with other users' information, which means different countries' regulations are involved, raising challenges related to securing the chain of evidence and to proving that the evidence has not been changed or modified.

Lack of Individual Identity:
Even if investigators find evidence in the Cloud proving that a particular IoT device at a crime scene caused the crime, this evidence does not necessarily lead to identification of the criminal.

Lack of Security:
Evidence in IoT devices could be changed or deleted because of a lack of security, which could make this evidence not solid enough to be accepted in a court of law.

Variety of Device Types:
In the identification phase of forensics, the digital investigator needs to identify and acquire the evidence from a digital crime scene.
Usually, the evidence source is a computing system such as a computer and/or a mobile phone.
In IoT, however, the source of evidence could be an object like a smart refrigerator or a smart coffee maker.
The device could be turned off because it has run out of battery, which makes it difficult to find, especially if the IoT device is very small, is in a hidden place or looks like a traditional device.
Carrying the device to the lab and finding space for it is another challenge that investigators face.
Extracting evidence from these devices is yet another challenge, as most manufacturers adopt different platforms, operating systems and hardware.

Lifecycle Changes in Data Formats:
The format of the data generated by IoT devices is not identical to what is saved in the Cloud.
Data is likely processed by analytic and translation functions in different places before being stored in the Cloud.
Hence, in order to be accepted in a court of law, the data should be returned to its original format before analysis is performed.

Cloud Computing
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

Service Model
Public cloud:
The cloud infrastructure is provisioned for open use by the
general public.
It may be owned, managed, and operated by a business,
academic, or government organization, or some combination
of them.
It exists on the premises of the cloud provider.

Hybrid cloud:
The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).

Enabling Solution Component
Hypervisor
A technology that allows sharing of the hardware resources of a single machine by multiple guest Operating Systems (OS).
Results in multiple Virtual Machines (VMs) on the same physical machine.
Enabling Technologies
Software Defined Networking:
Background:
Architecture:
Main Concepts:
Key Benefits:
Software-driven control / Programmability
Simplified Network Equipment available as COTS
Standardized management of Network Equipment → interoperability
Service Chaining

SDN/NFV in the Data Center
NFV Data Center
Used by service providers to host communications and networking services.
Services can be loaded as cloud-based software on commercial off-the-shelf (COTS) server hardware.
Applications are hosted in the data center so they can be accessed via the cloud.
SDN can work in tandem with NFV.
Traffic Steering in an NFV Data Center.

Cloud Information Security Objectives
Seven complementary principles that support information assurance are:
Confidentiality, Integrity, Availability (CIA Triad), and Authentication, Authorization, Auditing, and Accountability (AAAA).
These 7 principles are summarized in the following slides.

Confidentiality, Integrity, Availability (CIA)
CIA - A way to think about security trade-offs.
Confidentiality refers to the need to keep confidential sensitive data such as customer information, passwords, or financial data.
Integrity refers to keeping data or messages correct.
Availability refers to making data available to those who need it.

Confidentiality in cloud systems is related to the areas of intellectual property rights, covert channels, traffic analysis, encryption, and inference:
Intellectual property (IP) includes inventions, designs, and artistic, musical, and literary works.
Covert channels: A covert channel is an unauthorized and unintended communication path that enables the exchange of information. Covert channels can be accomplished, for example, through inappropriate use of storage mechanisms.
Encryption involves scrambling messages so that they cannot be read by an unauthorized entity, even if they are intercepted.
Traffic analysis is a form of confidentiality breach that can be accomplished by analyzing the volume, rate, source, and destination of message traffic, even if it is encrypted.
Inference is usually associated with database security. Inference is the ability of an entity to use and correlate information protected at one level of security to uncover information that is protected at a higher security level.

Integrity requires that the following three principles are met:
Modifications are not made to data by unauthorized personnel or processes.
Unauthorized modifications are not made to data by authorized personnel or processes.
The data is internally and externally consistent, i.e. the internal information is consistent both among all sub-entities and with the real/external world.

AAAA
Authentication is the testing or reconciliation of evidence of a user's identity. It establishes the user's identity and ensures that users are who they claim to be.
Authorization refers to rights and privileges granted to an individual or process that enable access to computer resources and information assets.
Auditing: To maintain operational assurance, organizations use two basic methods: system audits and monitoring. These methods can be employed by the cloud customer, the cloud provider, or both, depending on asset architecture and deployment.
A system audit is a one-time or periodic event to evaluate security.
Monitoring refers to an ongoing activity that examines either the system or the users, such as intrusion detection.
An audit trail or log is a set of records that collectively provide documentary evidence of different cloud operations.
Accountability is the ability to determine the actions and behaviors of a single individual within a cloud system.
Accountability is related to the concept of nonrepudiation, wherein an individual cannot successfully deny the performance of an action.
Audit trails and logs support accountability.

Cloud Security Design Principles
The following 11 security design principles apply:
Least privilege
Separation of duties
Defense in depth
Fail safe
Economy of mechanism
Complete mediation
Open design
Least common mechanism
Psychological acceptability
Weakest link
Leveraging existing components
Least Privilege:
The principle of least privilege maintains that an individual,
process, or other type of entity should be given the minimum
privileges and resources for the minimum period of time
required to complete a task.
This approach reduces the opportunity for unauthorized
access to sensitive information.
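A minimal sketch of the idea (not from the source; all names are hypothetical): privileges are granted just in time and revoked as soon as the task completes, so nothing lingers.

```python
# Sketch: grant a privilege only for the duration of a task, then revoke it.
from contextlib import contextmanager

grants = set()   # (entity, privilege) pairs currently in force

@contextmanager
def least_privilege(entity, privilege):
    grants.add((entity, privilege))            # minimum privilege, granted just in time
    try:
        yield
    finally:
        grants.discard((entity, privilege))    # revoked when the task completes

with least_privilege("backup-job", "read:/var/db"):
    assert ("backup-job", "read:/var/db") in grants      # available during the task
assert ("backup-job", "read:/var/db") not in grants      # nothing lingers afterwards
```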
Separation of Duties:
Separation of duties requires that completion of a specified sensitive activity, or access to sensitive objects, is dependent on the satisfaction of a plurality of conditions. For example:
an authorization that requires signatures of more than one individual, or
the arming of a weapons system that requires two individuals with different keys.
Thus, separation of duties forces collusion among entities in order to compromise the system.

Defense in Depth
Defense in depth is the application of multiple layers of protection wherein a subsequent layer will provide protection if a previous layer is breached.
The Information Assurance Technical Framework Forum (IATFF), an organization sponsored by the National Security Agency (NSA), has produced a document titled the "Information Assurance Technical Framework" (IATF) that provides excellent guidance on the concepts of defense in depth:
Defense in multiple places - Information protection mechanisms placed in a number of locations to protect against internal and external threats.
Layered defenses - A plurality of information protection and detection mechanisms employed so that an adversary or threat must negotiate a series of barriers to gain access to critical information.
Security robustness - An estimate of the robustness of information assurance elements based on the value of the information system component to be protected and the anticipated threats.
Deploy KMI/PKI - Use of robust key management infrastructures (KMI) and public key infrastructures (PKI).
Deploy intrusion detection systems - Application of intrusion detection mechanisms to detect intrusions, evaluate information, examine results, and, if necessary, take action.

Cloud Context
Defense in depth uses a layered approach to security:
Physical security, such as limiting access to a datacenter to only authorized personnel.
Identity and access security, controlling access to infrastructure and change control.
Perimeter security, including distributed denial of service (DDoS) protection to filter large-scale attacks before they can cause a denial of service for users.
Network security, which can limit communication between resources using segmentation and access controls.
Compute-layer security, which can secure access to virtual machines either on-premises or in the cloud by closing certain ports.
Application layer security, which ensures that applications are secure and free of security vulnerabilities.
Data layer security, which controls access to business and customer data, and uses encryption to protect data.

Fail Safe:
Fail safe means that if a cloud system fails, it should fail to a state in which the security of the system and its data are not compromised.
One implementation of this philosophy would be to make a system default to a state in which a user or process is denied access to the system.
A complementary rule would be to ensure that when the system recovers, it should recover to a secure state and not permit unauthorized access to sensitive information.
In the situation where system recovery is not done automatically, the failed system should permit access only by the system administrator, and not by other users, until security controls are reestablished.

Economy of Mechanism
Economy of mechanism promotes simple and comprehensible design and implementation of protection mechanisms, so that unintended access paths do not exist or can be readily identified and eliminated.
The principle states that security mechanisms should be as simple and small as possible.
If the design and implementation are simple and small, fewer possibilities exist for errors, and the checking and testing process is less complicated, so that fewer components need to be tested.

Complete Mediation
In complete mediation, every request by a subject to access an object in a computer system must undergo a valid and effective authorization procedure.
This mediation must not be suspended or become capable of being bypassed, even when the information system is being initialized, undergoing shutdown, being restarted, or is in maintenance mode.
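A minimal sketch of complete mediation (illustrative; the policy store and names are hypothetical): every access flows through a single checkpoint that consults the authorization policy on each request, with no cached or bypassable path.

```python
# Sketch: a tiny "reference monitor" that authorizes every access attempt.
ACL = {("alice", "report.pdf"): {"read"}}   # illustrative policy store

def mediate(subject, action, obj):
    """Checked on every request; never suspended, cached, or skipped."""
    if action not in ACL.get((subject, obj), set()):
        raise PermissionError(f"{subject} may not {action} {obj}")

def read_object(subject, obj):
    mediate(subject, "read", obj)   # the only path to the object runs through mediate()
    return f"contents of {obj}"

print(read_object("alice", "report.pdf"))   # authorized -> succeeds
try:
    read_object("bob", "report.pdf")        # not in the policy store
except PermissionError as err:
    print(err)                              # bob may not read report.pdf
```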
Open Design
There has always been an ongoing discussion about the merits and strengths of security designs that are kept secret versus designs that are open to scrutiny and evaluation by the community at large.
A good example is an encryption system.
For most purposes, an open-access cloud system design that has been evaluated and tested by a myriad of experts provides a more secure authentication method than one that has not been widely assessed.

Least Common Mechanism
This principle states that in systems with multiple users, the mechanisms allowing resources to be shared by more than one user should be minimized as much as possible.
This principle may also be restrictive because it limits the sharing of resources.
Shared access paths can be sources of unauthorized information exchange and can provide unintentional data transfers (also known as covert channels).
Example: If a file needs to be accessed by more than one user, these users should use separate channels to access the resource, as this helps to prevent unforeseen consequences that could cause security problems.
Thus, the least common mechanism promotes the least possible sharing of common security mechanisms.
Only a minimum number of protection mechanisms should be common to multiple users.

Psychological Acceptability
Psychological acceptability refers to the ease of use and intuitiveness of the user interface that controls and interacts with the cloud access control mechanisms.
The principle states that a security mechanism should not make the resource more complicated to access than if the security mechanisms were not present.
In other words, the principle recognizes the human element in computer security.
If security-related software or computer systems are too complicated to configure, maintain, or operate, the user will not employ the necessary security mechanisms.

Weakest Link
A chain is only as strong as its weakest link.
In the context of cloud systems, the security of a cloud system is only as good as its weakest component.
Thus, it is important to identify the weakest mechanisms in the security chain and layers of defense, and improve them so that risks to the system are mitigated to an acceptable level.

Leveraging Existing Components
The principle aims to increase cloud system security by leveraging existing components.
In many instances, the security mechanisms of a cloud implementation might not be configured properly or used to their maximum capability.
Reviewing the state and settings of the security mechanisms and ensuring that they are operating at their optimum design points will greatly improve the security posture of an information system.

The shared responsibility model

The Zero-trust methodology

IAM
Identity Management (IdM)
User Identities (Unique)
Account Management
Authentication
Access Management (AcM)
Roles and Privileges
Authorization
Access Control

Why is Identity important?
Concept of Identity as a security perimeter
Is key behind authentication and authorization

Why IAM (tools and functions)?
Improve Operational Efficiency
IAM technology and processes can improve efficiency by automating user on-boarding and other repetitive tasks (e.g., self-service for users requesting password resets).
Regulatory security compliance management
Need to comply with various regulatory, privacy, and data protection requirements.

Identity as the primary security perimeter
An identity is how someone or something can be verified and authenticated, and may be associated with:
User
Application
Device
Other
Four pillars of identity:
Administration
Authentication
Authorization
Auditing
a variant on phishing. Hackers build databases of information about users, which can be used to create highly credible emails.
A password-spray attack
Attacker sprays a commonly used password against multiple accounts.

Modern authentication and the role of the identity provider
Modern authentication is an umbrella term for authentication and authorization methods between a client and a server.
At the center of modern authentication is the role of the identity provider (IdP).
An IdP offers authentication, authorization, and auditing services.
An IdP enables organizations to establish authentication and authorization policies, monitor user behavior, and more.
A fundamental capability of an IdP and "modern authentication" is the support for single sign-on (SSO).
Microsoft Azure Active Directory is an example of a cloud-based identity provider.

IAM
IAM architecture encompasses several layers of technology, services, and processes.
At the core of the deployment architecture is a directory service (such as LDAP or Active Directory) that acts as a repository for the identity, credential, and user attributes of the organization's user pool.
The directory interacts with IAM technology components such as authentication, user management, provisioning, and identity services that support the standard IAM practice and processes within the organization.

MFA – more than two factors used.
Factors of the same type are not considered as 2FA or MFA.

Authentication via Passwords
Type 1 Authentication (Something you know)
Passwords can be either:
Static: the same password is used at each logon.
Dynamic: a different password is used for each logon (e.g. an OTP).
The changing of passwords can also fall between these two extremes (e.g. monthly, quarterly, etc.).
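As a hedged illustration of a dynamic password, the sketch below derives a time-based one-time password in the spirit of RFC 6238 (TOTP) from a shared secret and the clock; the secret and parameters are illustrative.

```python
# Simplified TOTP-style one-time password: changes every 30-second time step.
import hashlib, hmac, struct, time

def totp(secret, t=None, digits=6, step=30):
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both parties hold the secret and can compute the same short-lived code.
print(totp(b"shared-secret"))
```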
Passwords can be stolen from the file-system:
Introduction of Hashed Passwords
Dictionary Attacks
Use of multi-word passwords can be more robust against dictionary attacks than single-word passwords (which are relatively simpler to break).
Guessing attacks, Social engineering attacks, Sniffing attacks
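Hashed password storage is typically salted and deliberately slow, so that a stolen password file resists dictionary attacks. A minimal sketch using Python's standard pbkdf2_hmac (salt size and iteration count are illustrative):

```python
# Salted, iterated password hashing: only the (salt, digest) pair is stored.
import hashlib, os

def hash_password(password, salt=None):
    salt = salt if salt is not None else os.urandom(16)   # unique salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

salt, stored = hash_password("correct horse battery staple")   # multi-word passphrase
# Verification repeats the derivation with the stored salt and compares digests.
assert hash_password("correct horse battery staple", salt)[1] == stored
```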
Challenge-Response
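A minimal sketch of the challenge-response idea, assuming a pre-shared secret (all names illustrative): the verifier issues a fresh nonce and the claimant returns a keyed MAC over it, proving knowledge of the secret without ever transmitting it.

```python
# Challenge-response over a shared secret: the secret never crosses the wire.
import hashlib, hmac, os

secret = os.urandom(32)       # shared between claimant and verifier in advance

challenge = os.urandom(16)    # verifier: fresh, unpredictable nonce per attempt
response = hmac.new(secret, challenge, hashlib.sha256).digest()   # claimant's proof

expected = hmac.new(secret, challenge, hashlib.sha256).digest()   # verifier recomputes
assert hmac.compare_digest(response, expected)   # constant-time comparison
```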
Tickets
Each trusted site has a unique master key that it shares with the
KDC
The master key allows each site to talk to the KDC safely
In addition, the KDC can cryptographically “package”
temporary keys using the master keys so that one site can
safely forward the right keys to another site.
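A toy sketch of this "packaging" step (illustrative only: it uses the third-party cryptography package and invented names, and omits the identities, timestamps and lifetimes that real Kerberos tickets carry):

```python
# The KDC wraps a temporary session key under each site's long-term master key.
from cryptography.fernet import Fernet

master_a = Fernet.generate_key()       # shared in advance between site A and the KDC
master_b = Fernet.generate_key()       # shared in advance between site B and the KDC

session_key = Fernet.generate_key()    # KDC mints a temporary key for A <-> B

for_a = Fernet(master_a).encrypt(session_key)          # A can open this
ticket_for_b = Fernet(master_b).encrypt(session_key)   # A forwards this to B unopened

assert Fernet(master_a).decrypt(for_a) == session_key
assert Fernet(master_b).decrypt(ticket_for_b) == session_key
```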
Challenge-Response in NS Protocol
Kerberos Authentication Server

Authenticating to a Kerberized Server

Ticket Granting Ticket
Kerberos KDC with 2-step ticket granting process

SAML
An XML-based, open-standard data format for exchanging authentication and authorization data between parties.
In particular, used between an identity provider (IdP) and a service provider (SP).
SAML is a product of the OASIS* Security Services Technical Committee.

SAML Principles:
SAML Roles: the specification defines three roles:
the principal (typically a user),
the identity provider (IdP), and
the service provider (SP).
SAML Use Case:
The principal requests a service from the service provider.
The service provider requests and obtains an identity assertion from the identity provider.
On the basis of this assertion, the service provider can make an access control decision, i.e. it can decide whether to perform some service for the connected principal.
Before delivering the identity assertion to the SP, the IdP may request some information from the principal – such as a user name and password – in order to authenticate the principal.
SAML does not specify the method of authentication at the identity provider; it may use a username and password, or another form of authentication, including multi-factor authentication.
One identity provider may provide SAML assertions to many service providers. Similarly, one SP may rely on and trust assertions from many independent IdPs.

Web Browser SSO using SAML
The primary SAML use case is Web Browser Single Sign-On (SSO), where a user using a user agent (usually a web browser) requests a web resource protected by a SAML service provider.

What is SSO?
Single sign-on (SSO) is a property of access control of multiple related, yet independent, software systems.
With this property, a user logs in with a single ID and password to gain access to a connected system or systems without using different usernames or passwords, or in some configurations seamlessly signs on at each system.
This is typically accomplished using the Lightweight Directory Access Protocol (LDAP) and LDAP databases stored on (directory) servers.

Message flow (in outline): the service provider redirects the user's browser to the identity provider with an authentication request; the identity provider authenticates the principal and returns a signed assertion; the browser delivers the assertion back to the service provider, which validates it and serves the requested resource.
XACML Architecture
Infrastructure Security
Perimeter Security to protect your "virtual network" via a combination of:
DDoS mitigation solutions
Firewall services (Network Firewalls and Web Application Firewalls)
VPN services
Network Security
Network segmentation (e.g. hub-and-spoke vnets, Network Security Groups)
Use of security rules to allow or deny network traffic
Can be associated with a subnet or a network interface
Host Security
End-point protection services (e.g. anti-malware)
Disk encryption
Update Management
Container Security
Container Registry with Signed Container Images

Hypervisor Risks
The ability of the hypervisor to provide the necessary isolation during an attack greatly determines how well the virtual machines can survive risks.
Ideally, software code operating within a defined VM would not be able to communicate with or affect code running either on the physical host itself or within a different VM.
However, several issues, such as bugs in the software or limitations of the virtualization implementation, may put this isolation at risk.
Major vulnerabilities inherent in the hypervisor consist of
rogue hypervisor rootkits, external modification to the
hypervisor, and VM escape.
VM Security Practices
Hardening the Host OS and limiting physical access to the
host
Hardening the VM
Hardening the Hypervisor
Implement only one primary function per VM
Use Unique NICs for Sensitive VMs
Secure VM Remote Access.
Kubernetes
Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services.
Facilitates both declarative configuration and automation.
Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more:
Service discovery and load balancing
Storage orchestration
Automated rollouts and rollbacks
Automatic bin packing
Self-healing
Secret and configuration management

K8s Cluster Security

Area of Concern: Network access to API Server (Control plane)
Recommendation: All access to the Kubernetes control plane is not allowed publicly on the internet and is controlled by network access control lists restricted to the set of IP addresses needed to administer the cluster.

Area of Concern: Network access to Nodes (nodes)
Recommendation: Nodes should be configured to only accept connections (via network access control lists) from the control plane on the specified ports, and accept connections for services in Kubernetes of type NodePort and LoadBalancer. If possible, these nodes should not be exposed on the public internet entirely.

Area of Concern: Kubernetes access to Cloud Provider API
Recommendation: Each cloud provider needs to grant a different set of permissions to the Kubernetes control plane and nodes. It is best to provide the cluster with cloud provider access that follows the principle of least privilege for the resources it needs to administer. The Kops documentation provides information about IAM policies and roles.

Area of Concern: Access to etcd
Recommendation: Access to etcd (the datastore of Kubernetes) should be limited to the control plane only. Depending on your configuration, you should attempt to use etcd over TLS. More information can be found in the etcd documentation.

Area of Concern: etcd Encryption
Recommendation: Wherever possible it's a good practice to encrypt all storage at rest, and since etcd holds the state of the entire cluster (including Secrets) its disk should especially be encrypted at rest.

Cluster Security
Protecting a cluster from accidental or malicious access can be done via:
Passing all API calls through Authentication and Authorization
Encrypting all API communication in the cluster with TLS
Controlling the runtime capabilities of a workload can be done via:
Defining Resource quota limits to limit the amount of CPU, memory, or persistent disk a namespace can allocate, and also to control how many pods, services, or volumes exist in each namespace
Controlling the privileges associated with containers using Kubernetes Pod security policies
Restricting network access: application authors can restrict which pods in other namespaces may access pods and ports within their namespaces.

Container Security

Area of Concern: Container Vulnerability Scanning and OS Dependency Security
Recommendation: As part of an image build step, you should scan your containers for known vulnerabilities.

Area of Concern: Image Signing and Enforcement
Recommendation: Sign container images to maintain a system of trust for the content of your containers.

Area of Concern: Disallow privileged users
Recommendation: When constructing containers, create users inside of the containers that have the least level of operating system privilege necessary in order to carry out the goal of the container.

Platform Security Features in Microsoft Azure

Application Security
Most applications are designed and deployed using a micro-services architecture and REST APIs.
REST APIs are designed to be STATELESS.
This requires a secure approach to session management.

OWASP Top 10 vulnerabilities
One of the best compilations of vulnerabilities that impact web applications.
Must be verified in every application before it is deployed.

Most applications rely on a variety of security assets (like certificates, API keys, passwords and other secrets).
A secure key vault on the cloud is useful to store and manage access to these secrets.

Area of concern: Access over TLS only
Recommendation: If your code needs to communicate by TCP, perform a TLS handshake with the client ahead of time. With the exception of a few cases, encrypt everything in transit. Going one step further, it's a good idea to encrypt network traffic between services. This can be done through a process known as mutual TLS authentication, or mTLS, which performs a two-sided verification of communication between two certificate-holding services.

Area of concern: Limiting port ranges of communication
Recommendation: Wherever possible, only expose the ports on your service that are absolutely essential for communication or metric gathering.

Area of concern: 3rd Party Dependency Security
Recommendation: It is a good practice to regularly scan your application's third party libraries for known security vulnerabilities. Each programming language has a tool for performing this check automatically.

Area of concern: Static Code Analysis
Recommendation: Most languages provide a way for a snippet of code to be analyzed for any potentially unsafe coding practices. Whenever possible you should perform checks using automated tooling that can scan codebases for common security errors.

Area of concern: Dynamic probing attacks
Recommendation: There are a few automated tools that you can run against your service to try some of the well-known service attacks. These include SQL injection, CSRF, and XSS. One of the most popular dynamic analysis tools is the OWASP Zed Attack Proxy tool.
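A minimal sketch of an mTLS client using Python's standard ssl module; the certificate paths and hostname are illustrative, and a real deployment would obtain them from its PKI or service mesh.

```python
# Mutual TLS: the client verifies the server AND presents its own certificate.
import socket, ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_verify_locations("internal-ca.pem")                        # trust anchor
ctx.load_cert_chain(certfile="client.pem", keyfile="client.key")    # our identity

with socket.create_connection(("orders.internal", 8443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="orders.internal") as tls:
        tls.sendall(b"GET /health HTTP/1.0\r\n\r\n")   # everything here is encrypted
```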
Data Security
Data storage on the cloud is governed by various local laws: HIPAA, GDPR, SOC.
Cloud data storage and access must:
Support physical isolation
Support backup, recovery, retention and disposal rules that can be configured as per organizational policies
Support authentication and authorization of all access
Support encryption of confidential data
Support secure transfer of data when required
This includes both file-based storage and databases (database firewalls, database audit logs).

Data Privacy Compliance Frameworks, Cloud Forensics

HIPAA
Health Insurance Portability and Accountability Act of 1996 (HIPAA)
Also known as the Kennedy–Kassebaum Act.
Is a United States federal statute enacted by the 104th United States Congress and signed into law by President Bill Clinton on August 21, 1996.
Modernized the flow of healthcare information; stipulates how personally identifiable information maintained by the healthcare and healthcare insurance industries should be protected from fraud and theft; and addressed some limitations on healthcare insurance coverage.

Data Privacy via HIPAA
HIPAA prohibits healthcare providers and healthcare businesses, called covered entities, from disclosing protected information to anyone other than a patient and the patient's authorized representatives without their consent.
With limited exceptions, it does not restrict patients from receiving information about themselves.
It does not prohibit patients from voluntarily sharing their health information however they choose, nor – if they disclose medical information to family members, friends, or other individuals not a part of a covered entity – legally require them to maintain confidentiality.

HIPAA Privacy and Security Rule
The HIPAA Privacy Rule is composed of national regulations for the use and disclosure of Protected Health Information (PHI) in healthcare treatment, payment and operations by covered entities.
The effective compliance date of the Privacy Rule was April 14, 2003.
The Final Rule on Security Standards was issued on February 20, 2003. It took effect on April 21, 2003, with a compliance date of April 21, 2005, for most covered entities.
The Security Rule complements the Privacy Rule. While the Privacy Rule pertains to all Protected Health Information (PHI), including paper and electronic, the Security Rule deals specifically with Electronic Protected Health Information (EPHI).
It lays out three types of security safeguards required for compliance:
Administrative: policies and procedures designed to clearly show how the entity will comply with the act
Physical: controlling physical access to protect against inappropriate access to protected data
Technical: controlling access to computer systems and enabling covered entities to protect communications containing PHI transmitted electronically over open networks from being intercepted by anyone other than the intended recipient

GDPR
General Data Protection Regulation (GDPR)
A regulation in EU law on data protection and privacy in the European Union (EU) and the European Economic Area (EEA).
The GDPR is an important component of EU privacy law and of human rights law, in particular Article 8(1) of the Charter of Fundamental Rights of the European Union.
It also addresses the transfer of personal data outside the EU and EEA areas.
The GDPR's primary aim is to enhance individuals' control and rights over their personal data and to simplify the regulatory environment for international business.
The GDPR was adopted on 14 April 2016 and became enforceable beginning 25 May 2018.

GDPR Organization
The GDPR 2016 has eleven chapters, concerning: general provisions, principles, rights of the data subject, duties of data controllers or processors, transfers of personal data to third countries, supervisory authorities, cooperation among member states, remedies, liability or penalties for breach of rights, and miscellaneous final provisions.

Duties of Data Controllers or Processors:
Must clearly disclose any data collection.
Pseudonymisation is a required process for stored data. It transforms personal data in such a way that the resulting data cannot be attributed to a specific data subject without the use of additional information.
Records of processing activities have to be maintained.
Controllers and processors of personal data must put in place appropriate technical and organizational measures to implement the data protection principles.
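As a hedged sketch of the idea (illustrative names, not a prescribed GDPR algorithm), pseudonymisation can replace direct identifiers with a keyed token, where the key is the "additional information" held separately by the controller.

```python
# Pseudonymisation: records carry a token that cannot be re-linked without the key.
import hashlib, hmac

PSEUDONYM_KEY = b"held-separately-by-the-controller"   # illustrative key

def pseudonymise(identifier):
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"subject": pseudonymise("jane.doe@example.com"), "diagnosis": "J45"}
print(record)   # e.g. {'subject': '3f1c...', 'diagnosis': 'J45'} - no direct identifier
```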
Transfer of Personal Data to Third Countries:
Chapter V of the GDPR forbids the transfer of the personal data of EU data subjects to countries outside of the EEA — known as third countries — unless appropriate safeguards are imposed, or the third country's data protection regulations are formally considered adequate by the European Commission.

PCI-DSS
Payment Card Industry Data Security Standard (PCI DSS)
Is an information security standard for organizations that handle credit cards from the major card schemes.
The standard was created to increase controls around cardholder data to reduce credit card fraud.
Validation of compliance is performed annually or quarterly, by a method suited to the volume of transactions handled:
Self-Assessment Questionnaire (SAQ) — smaller volumes
external Qualified Security Assessor (QSA) — moderate volumes; involves an Attestation on Compliance (AOC)
firm-specific Internal Security Assessor (ISA) — larger volumes; involves issuing a Report on Compliance (ROC)

PCI-DSS Requirements
Twelve requirements for compliance, organized into six logically related groups:
Build and Maintain a Secure Network and System
Protect Cardholder Data
Maintain a Vulnerability Management Program
Implement Strong Access Control Measures
Regularly Monitor and Test Networks
Maintain an Information Security Policy

Traditional Vs Cloud Forensics
Cloud forensics is a blend of digital forensics and cloud computing.
It involves investigating crimes that are committed using the cloud.
Traditional computer forensics is a process by which media is collected at the crime scene, or where the media was obtained; it includes the practice of preserving the data, the validation of said data, and the interpretation, analysis, documentation, and presentation of the results in the courtroom.
In most traditional computer forensics, any evidence that has been discovered within the media will be under the control of the relevant law enforcement. This is where the divide between cloud and traditional forensics begins.
In the cloud, the data can potentially exist anywhere on earth, and potentially outside of your law enforcement jurisdiction. This can result in control of the evidence (and the process of validating it) becoming incredibly challenging.
After all, what may be an issue for you might not be an
issue at all for them, and your investigation could
further cost them time and money.