UNIT III DEFENCES: SECURITY COUNTERMEASURES
Cryptography in Network Security - Firewalls - Intrusion Detection and Prevention Systems - Network
Management - Databases - Security Requirements of Databases - Reliability and Integrity - Database
Disclosure - Data Mining and Big Data.
I. Cryptography in Network Security:
Cryptography
Cryptography uses codes to protect data and communications so that only the
intended receivers can decode and understand them, thereby restricting
access to the information by outside parties.
"Crypto" means "hidden," and "graphy" means "writing."
The techniques used in cryptography to secure data are based on
mathematical principles and sets of rule-based calculations known as
algorithms, which transform messages in ways that make them challenging to
decode.
These algorithms generate cryptographic keys, create digital signatures,
safeguard data privacy, enable online browsing on the Internet, and
ensure the confidentiality of private transactions like credit and debit card
payments.
History of Cryptography
Cryptography began with ciphers, the earliest of which was the Caesar
cipher. Compared with modern cryptographic algorithms, these early ciphers
were much easier to break, though both rely on plaintext and keys.
Modern cryptosystems and algorithms are considerably more advanced. They
employ numerous iterations of ciphers, and may encrypt the ciphertext of
messages again, to ensure the most secure data transportation and storage.
Currently used cryptographic techniques are designed to be computationally
infeasible to reverse without the key, keeping messages secure for the
foreseeable future. The requirement for data to be safeguarded more
securely than ever before has led to the development of more complex
cryptography methods. Most early cryptographic ciphers and algorithms have
been cracked, making them ineffective for data security.
Although it is possible in principle to break today's algorithms, doing so
for even a single message could take years or decades. Thus, the
competition to develop newer and more powerful cryptographic techniques
continues.
What is The Purpose of Cryptography?
Cryptography aims to keep data and messages private and inaccessible to
possible threats or bad actors. It frequently works invisibly to encrypt and
decrypt the data you send through email, social media, applications, and
website interactions.
There are several uses for symmetric cryptography, including:
o Payment applications and card transactions
o Random number generation
o Verifying that the sender of a message is who they claim to be
(e.g., via message authentication codes)
There are several uses for asymmetric cryptography, including:
o Email messages
o SIM card authentication
o Web security
o Exchange of private keys
Types of Cryptography
There are three main types of cryptography:
Symmetric key Cryptography: With this encryption technique, the
sender and the recipient use the same shared key to encrypt and decrypt
messages.
Although symmetric key systems are quicker and easier to use, they have
the drawback of requiring a secure key exchange between the sender and
the receiver. The Data Encryption Standard (DES) was for many years the
most widely used symmetric key encryption method; it has since been
superseded by the Advanced Encryption Standard (AES).
Hash Functions: In this algorithm, no key is used. The plain text is used
to produce a fixed-length hash value, from which it is practically
impossible to recover the original plain text. Hash functions are widely
used by operating systems to protect stored passwords.
Asymmetric Key Cryptography: This approach uses a set of keys to
encrypt and decrypt data. Public keys are used for encryption, whereas
private keys are used for decryption.
The public key and private key are different from one another. Even if
everyone knows the public key, only the intended recipient can decode
the message, since only they hold the private key.
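To make the symmetric and hash approaches above concrete, here is a minimal Python sketch. It assumes the third-party "cryptography" package is installed for the symmetric (Fernet) example; hashlib is part of the standard library, and the messages are illustrative.

import hashlib
from cryptography.fernet import Fernet

# Symmetric: one shared key both encrypts and decrypts.
key = Fernet.generate_key()          # must be shared secretly with the receiver
cipher = Fernet(key)
token = cipher.encrypt(b"card number 1234")
assert cipher.decrypt(token) == b"card number 1234"

# Hash function: fixed-length, one-way digest; no key involved.
digest = hashlib.sha256(b"my password").hexdigest()
print(digest)  # 64 hex characters, regardless of input length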
Techniques Used for Cryptography
Cryptography is closely related to the disciplines
of cryptology and cryptanalysis. It includes techniques such as microdots,
merging words with images and other ways to hide information in storage or
transit. In today's computer-centric world, cryptography is most often
associated with scrambling plaintext (ordinary text, sometimes referred to
as cleartext) into ciphertext (a process called encryption), then back again
(known as decryption). Individuals who practice this field are known as
cryptographers.
Modern cryptography concerns itself with the following four objectives:
1. Confidentiality. The information cannot be understood by anyone
for whom it was not intended.
2. Integrity. The information cannot be altered in storage or transit
between sender and intended receiver without the alteration being
detected.
3. Non-repudiation. The creator/sender of the information cannot deny
at a later stage their intentions in the creation or transmission of the
information.
4. Authentication. The sender and receiver can confirm each other's
identity and the origin/destination of the information.
Procedures and protocols that meet some or all of the above criteria are known
as cryptosystems. Cryptosystems are often thought to refer only to
mathematical procedures and computer programs; however, they also include
the regulation of human behavior, such as choosing hard-to-guess passwords,
logging off unused systems and not discussing sensitive procedures with
outsiders.
Features of Cryptography
Cryptography has the following features:
o Confidentiality: The only person who can access information is the
one it is intended for, which is the primary feature of cryptography.
o Integrity: Information cannot be altered while it is being stored or
sent from the sender to the intended destination without the
alteration being detected.
o Non-repudiation: The creator/sender of a message cannot deny
his intent to send information at a future point.
o Authentication: The identities of the sender and the recipient have
been confirmed. Furthermore, the information's source and final
destination are confirmed.
o Availability: It also ensures that the required information is
available to authorized users at the appropriate time.
o Key Management: The creation, distribution, storage, and
alteration of cryptographic keys take place in this process.
o Algorithm: Mathematical formulae are used in cryptography to
encrypt and decrypt messages.
o Digital Signatures: A signature that can be applied to messages
to protect the message's authenticity and sender identification.
Cryptographic algorithms
Cryptosystems use a set of procedures known as cryptographic algorithms,
or ciphers, to encrypt and decrypt messages to secure communications
among computer systems, devices and applications.
A cipher suite uses one algorithm for encryption, another algorithm for
message authentication and another for key exchange. This process,
embedded in protocols and written in software that runs on operating systems
(OSes) and networked computer systems, involves the following:
Public and private key generation for data encryption/decryption.
Digital signing and verification for message authentication.
Key exchange.
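As an illustration of key generation, digital signing, and verification, here is a minimal Python sketch using the third-party "cryptography" package (pip install cryptography); the message and key size are illustrative, not a prescribed configuration.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Key generation: a private key and its matching public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Digital signing with the private key.
message = b"wire transfer: $100"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

# Verification with the public key; raises InvalidSignature if the
# message or signature was tampered with in transit.
public_key.verify(signature, message, pss, hashes.SHA256())
print("signature verified")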
II. Firewalls:
What is a firewall?
Firewalls can be viewed as gated borders or gateways that manage the travel of permitted and
prohibited web activity in a private network. The term comes from the concept of physical
walls being barriers to slow the spread of fire until emergency services can extinguish it. By
comparison, network security firewalls are for web traffic management — typically intended
to slow the spread of web threats.
Firewalls create 'choke points' that funnel web traffic, where it is then
reviewed against a set of programmed parameters and acted upon accordingly.
Some firewalls also track the traffic and connections in audit logs to
record what has been allowed or blocked.
Firewalls are typically used to gate the borders of a private network or its host devices. As
such, firewalls are one security tool in the broader category of user access control. These
barriers are typically set up in two locations — on dedicated computers on the network or the
user computers and other endpoints themselves (hosts).
How do firewalls work?
A firewall decides which network traffic is allowed to pass through and which traffic is
deemed dangerous. Essentially, it works by filtering out the good from the bad, or the
trusted from the untrusted. However, before we go into detail, it helps to understand
the structure of web-based networks.
Firewalls are intended to secure private networks and the endpoint devices within
them, known as network hosts. Network hosts are devices that ‘talk’ with other hosts
on the network. They send and receive between internal networks, as well as
outbound and inbound between external networks.
Computers and other endpoint devices use networks to access the internet and each
other. However, the internet is segmented into sub-networks or 'subnets' for security
and privacy. The basic subnet segments are as follows:
1. External public networks typically refer to the public/global internet or
various extranets.
2. Internal private network defines a home network, corporate intranets, and
other ‘closed’ networks.
3. Perimeter networks detail border networks made of bastion hosts —
computer hosts dedicated with hardened security that are ready to endure an
external attack. As a secured buffer between internal and external networks,
these can also be used to house any external-facing services provided by the
internal network (i.e., servers for web, mail, FTP, VoIP, etc.). These are more
secure than external networks but less secure than internal. These are not
always present in simpler networks like home networks but may often be
used in organizational or national intranets.
Screening routers are specialized gateway computers placed on a network to
segment it; they are known to house firewalls at the network level. The two
most common segment models are the screened host firewall and the screened
subnet firewall:
Screened host firewalls use a single screening router between the external
and internal networks. These networks are the two subnets of this model.
Screened subnet firewalls use two screening routers: one known as
an access router, between the external and perimeter networks, and another
known as the choke router, between the perimeter and internal networks.
This creates three subnets.
Both the network perimeter and host machines themselves can house a firewall. To
do this, it is placed between a single computer and its connection to a private
network.
Network firewalls involve the application of one or more firewalls between
external networks and internal private networks. These regulate inbound and
outbound network traffic, separating external public networks—like the global
internet—from internal networks like home Wi-Fi networks, enterprise
intranets, or national intranets. Network firewalls may come in the form of any
of the following appliance types: dedicated hardware, software, and virtual.
Host firewalls or 'software firewalls' involve the use of firewalls on individual
user devices and other private network endpoints as a barrier between
devices within the network. These devices, or hosts, receive customized
regulation of traffic to and from specific computer applications. Host firewalls
may run on local devices as an operating system service or an endpoint
security application. Host firewalls can also dive deeper into web traffic,
filtering based on HTTP and other networking protocols, allowing the
management of what content arrives at your machine, rather than just where it
comes from.
A network firewall requires configuration against a broad scope of connections,
whereas a host firewall can be tailored to fit each machine's needs. However,
host firewalls require more effort to customize, so network-based firewalls are
better suited to a sweeping control solution. Using firewalls in both locations
simultaneously is ideal for a multi-layer security system.
Filtering traffic via a firewall makes use of pre-set or dynamically learned rules for
allowing and denying attempted connections. These rules are how a firewall
regulates the flow of web traffic through your private network and private computer
devices. Regardless of type, all firewalls may filter by some combination of the
following:
Source: Where an attempted connection is being made from.
Destination: Where an attempted connection is intended to go.
Contents: What an attempted connection is trying to send.
Packet protocols: What ‘language’ an attempted connection is speaking to
carry its message. Among the networking protocols that hosts use to ‘talk’
with each other, TCP/IP protocols are primarily used to communicate across
the internet and within intranet/sub-networks.
Application protocols: Common protocols include HTTP, Telnet, FTP, DNS,
and SSH.
Source and destination are communicated by internet protocol (IP) addresses and
ports. IP addresses are unique device names for each host. Ports are a sub-level of
any given source and destination host device, similar to office rooms within a larger
building. Ports are typically assigned specific purposes, so certain protocols and IP
addresses using uncommon ports or disabled ports can be a concern.
By using these identifiers, a firewall can decide if a data packet attempting a
connection is to be discarded—silently or with an error reply to the sender—or
forwarded.
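As a sketch of how such rule-based filtering might look, the following Python fragment checks a packet's source address, destination port, and protocol against a first-match rule list with a default-deny posture. The rule set, addresses, and function names are illustrative, not a real firewall API.

import ipaddress

# Each rule is (source_prefix, dest_port, protocol, action); values are illustrative.
RULES = [
    ("10.0.0.0/8", 22, "tcp", "allow"),   # internal SSH only
    ("0.0.0.0/0", 80, "tcp", "allow"),    # public web traffic
    ("0.0.0.0/0", 23, "tcp", "deny"),     # Telnet blocked everywhere
]
DEFAULT_ACTION = "deny"                    # default to connection denial

def filter_packet(src_ip: str, dest_port: int, protocol: str) -> str:
    """Return the action of the first rule matching this packet."""
    addr = ipaddress.ip_address(src_ip)
    for prefix, port, proto, action in RULES:
        if addr in ipaddress.ip_network(prefix) and port == dest_port and proto == protocol:
            return action
    return DEFAULT_ACTION

print(filter_packet("203.0.113.9", 23, "tcp"))  # prints: deny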
Types of firewall
Here are some of the different firewall types and their functions:
1. Packet layer: A packet-layer firewall analyzes traffic at the transport
protocol layer, where applications communicate with each other using specific
protocols: Transmission Control Protocol (TCP) and User Datagram Protocol
(UDP). The firewall examines the data packets at this layer, looking for
malicious code that could infect your network or device. If a data packet is
identified as a potential threat, the firewall discards it.
2. Circuit level: A firewall at the circuit level is positioned as a layer between the
transport layer and the application layer of the TCP/Internet Protocol (TCP/IP)
stack. Thus, they work at the session layer of the Open Systems Interconnection
(OSI) model. In the TCP model, before information can be passed from one cyber
entity to another, there needs to be a handshake. A circuit level firewall examines
the data that passes during this handshake. The information in the data packets
can alert a firewall to potentially harmful data, and the firewall can then discard it
before it infects another computer or system.
3. Application layer: An application layer firewall makes sure that only valid data
exists at the application level before allowing it to pass through. This is
accomplished through a set of application-specific policies that allow or block
communications being sent to the application or those the application sends out.
4. Proxy server: A proxy server captures and examines all information going into or
coming out of a network. A proxy server acts like a separate computer between
your device and the internet. It has its own IP address that your computer
connects to. As information comes in or goes out of the proxy server, it is filtered,
and harmful data is caught and discarded.
5. Software firewalls: The most common kind of software firewall can be found on
most personal computers. It works by inspecting data packets that flow to and
from your device. The information in the data packets is compared against a list
of threat signatures. If a data packet matches the profile of a known threat, it is
discarded.
Importance of firewalls
So, what is the purpose of a firewall and why are they important? Networks without
protection are vulnerable to any traffic that is trying to access your systems. Harmful
or not, network traffic should always be vetted.
Connecting personal computers to other IT systems or the internet opens up a range
of benefits, including easy collaboration with others, combining resources, and
enhanced creativity. However, this can come at the cost of complete network and
device protection. Hacking, identity theft, malware, and online fraud are common
threats users could face when they expose themselves by linking their computers to
a network or the internet.
Once discovered by a malicious actor, your network and devices can easily be
found, rapidly accessed, and exposed to repeated threats. Around-the-clock internet
connections increase the risk of this (since your network can be accessed at any
time).
Proactive protection is critical when using any sort of network. Users can protect
their network from the worst dangers by using a firewall.
What does firewall security do?
What does a firewall do, and what can a firewall protect against? The concept of a
network security firewall is meant to narrow the attack surface of a network to a
single point of contact. Instead of every host on a network being directly exposed to
the greater internet, all traffic must first contact the firewall. Since this also works in
reverse, the firewall can filter and block non-permitted traffic, in or out. Also, firewalls
are used to create an audit trail of attempted network connections for better security
awareness.
Since traffic filtering can be a rule set established by owners of a private network,
this creates custom use cases for firewalls. Popular use cases involve managing the
following:
Infiltration from malicious actors: Undesired connections from an oddly behaving
source can be blocked. This can prevent eavesdropping and advanced persistent
threats (APTs).
Parental controls: Parents can block their children from viewing explicit web
content.
Workplace web browsing restrictions: Employers can prevent employees from
using company networks to access certain services and content, such as social
media.
Nationally controlled intranet: National governments can block internal residents'
access to web content and services that are potentially dissident to a nation's
leadership or its values.
However, firewalls are less effective at the following:
1. Identifying exploits of legitimate networking processes: Firewalls do not
anticipate human intent, so they cannot determine if a ‘legitimate’ connection is
intended for malicious purposes. For example, IP address fraud (IP spoofing) occurs
because firewalls don't validate the source and destination IPs.
2. Preventing connections that do not pass through the firewall: Network-level
firewalls alone will not stop malicious internal activity. Internal firewalls,
such as host-based ones, need to be present in addition to the perimeter
firewall to partition your network and slow the movement of internal 'fires.'
3. Providing adequate protection against malware: While connections carrying
malicious code can be halted if not allowed, a connection deemed acceptable can
still deliver these threats into your network. If a firewall overlooks a
connection as a result of being misconfigured or exploited, an antivirus
protection suite will still be needed to clean up any malware that enters.
Firewall examples
In practice, the real-world applications of firewalls have attracted both praise and
controversy. While there is a long history of firewall achievements, this security type
must be implemented correctly to avoid exploits. Additionally, firewalls have been
known to be used in ethically questionable ways.
Great Firewall of China, internet censorship
Since around 2000, China has had internal firewall frameworks in place to create its
carefully monitored intranet. By nature, firewalls allow for the creation of a
customized version of the global internet within a nation. They accomplish this by
preventing select services and information from being used or accessed within this
national intranet.
National surveillance and censorship allow for the ongoing suppression of free
speech while maintaining its government's image. Furthermore, China's firewall
allows its government to limit internet services to local companies. This makes
control over things like search engines and email services much easier to regulate in
favor of the government's goals.
China has seen an ongoing internal protest against this censorship. The use of
virtual private networks and proxies to get past the national firewall has allowed
many to voice their dissatisfaction.
Covid-19 U.S. federal agency compromised due to remote work weaknesses
In 2020, a misconfigured firewall was just one of many security weaknesses that
led to the breach of an unnamed United States federal agency.
It is believed that a nation-state actor exploited a series of vulnerabilities in the U.S.
agency's cybersecurity. Among the many cited issues with their security, the firewall
in-use had many outbound ports that were inappropriately open to traffic. Alongside
being maintained poorly, the agency's network likely had new challenges with remote
work. Once in the network, the attacker behaved in ways that show clear intent to
move through any other open pathways to other agencies. This type of effort puts
not only the infiltrated agency at risk of a security breach but many others as well.
U.S. power grid operator’s unpatched firewall exploited
In 2019, a United States power grid operations provider was impacted by a Denial-
of-Service (DoS) vulnerability that hackers exploited. Firewalls on the perimeter
network were stuck in a reboot exploit loop for roughly ten hours.
It was later deemed to be the result of a known but unpatched firmware
vulnerability in the firewalls. A standard operating procedure for checking
updates before implementation had not yet been adopted, causing delays in
updates and an inevitable security issue. Fortunately, the security issue did
not lead to any significant network penetration.
These events underline the importance of regular software updates. Without them,
firewalls are yet another network security system that can be exploited.
How to use firewall protection
Proper setup and maintenance of your firewall are essential to keep your network
and devices protected. Here are some tips to guide your firewall network security
practices:
1. Always update your firewalls as soon as possible: Firmware and software
patches keep your firewall updated against any newly discovered vulnerabilities.
Personal and home firewall users can usually safely update immediately. Larger
organizations may need to check configuration and compatibility across their network
first. However, everyone should have processes in place to update promptly.
2. Use antivirus protection: Firewalls alone are not designed to stop malware and
other infections. These may get past firewall protections, and you'll need a security
solution that's designed to disable and remove them. Kaspersky Total Security can
protect you across your personal devices, and our many business security
solutions can safeguard any network hosts you'll seek to keep clean.
3. Limit accessible ports and hosts with an allow list: Default to connection denial
for inbound traffic. Limit inbound and outbound connections to a strict allow list
of trusted IP addresses. Reduce user access privileges to necessities. It is easier
to stay secure by enabling access when needed than to revoke access and mitigate
damage after an incident.
4. Segment your network: Lateral movement by malicious actors is a clear danger
that can be slowed by limiting internal cross-communication.
5. Have active network redundancies to avoid downtime: Data backups for network
hosts and other essential systems can prevent data loss and lost productivity
during an incident.
III. Intrusion Detection and Prevention Systems:
What Is an Intrusion Detection and
Prevention System?
An intrusion detection and prevention
system (IDPS) monitors a network for possible threats to
alert the administrator, thereby preventing potential
attacks.
How IDPS Functions
Today’s businesses rely on technology for everything, from
hosting applications on servers to communication. As technology
evolves, the attack surface that cybercriminals have access to
also widens. A 2021 Check Point research report found that there had
been 50% more attacks per week on corporate networks in 2021
compared to 2020. As such, organizations of all industry
verticals and sizes are ramping up their security posture, aiming
to protect every layer of their digital infrastructure from cyber
attacks.
A firewall is a go-to solution to prevent unwanted and suspicious
traffic from flowing into a system. It is tempting to think that
firewalls are 100% foolproof and no malicious traffic can seep into
the network. Cybercriminals, however, are constantly evolving
their techniques to bypass all security measures. This is where an
intrusion detection and prevention system comes to the rescue.
While a firewall regulates what gets in, the IDPS regulates what
flows through the system. It often sits right behind firewalls,
working in tandem.
An intrusion detection and prevention system is like the baggage
and security check at airports. A ticket or a boarding pass is
required to enter an airport, and once inside, passengers are not
allowed to board their flights until the necessary security checks
have been made. Similarly, an intrusion detection system (IDS)
only monitors and raises alerts on malicious traffic or policy
violations. It is the predecessor of the intrusion prevention system
(IPS), also known as an intrusion detection and prevention system.
Besides monitoring and alerting, the IPS also works to prevent
possible incidents through automated courses of action.
Basic functions of an IDPS
An intrusion detection and prevention system offers the following
features:
Guards technology infrastructure and sensitive
data: No system can exist in a silo, particularly in the
current era of data-driven businesses. Data is constantly
flowing through the network, so the easiest way to attack
or gain access to a system is to hide within the actual
data. The IDS part of the system is reactive, alerting
security experts of such possible incidents. The IPS part of
the system is proactive, allowing security teams to
mitigate these attacks that may cause financial and
reputational damage.
Reviews existing user and security policies: Every
security-driven organization has its own set of user
policies and access-related policies for its applications and
systems. These policies considerably reduce the attack
surface by providing access to critical resources to only a
few trusted user groups and systems. Continuous
monitoring by intrusion detection and prevention systems
ensures that administrators spot any holes in these policy
frameworks right away. It also allows admins to tweak
policies to test for maximum security and efficiency.
Gathers information about network resources: An
IDS-IPS also gives the security team a bird’s-eye view of
the traffic flowing through its networks. This helps them
keep track of network resources, allowing them to modify
a system in case of traffic overload or under-usage of
servers.
Helps meet compliance regulations: All businesses, no
matter the industry vertical, are being increasingly
regulated to ensure consumer data privacy and security.
Predominantly, the first step toward fulfilling these
mandates is to deploy an intrusion detection and
prevention system.
An IDPS works by scanning processes for harmful patterns,
comparing system files, and monitoring user behavior and system
patterns. IPS uses web application firewalls and traffic filtering
solutions to achieve incident prevention.
Types of IDPS
Organizations can consider implementing four types of intrusion
detection and prevention systems based on the kind of
deployment they’re looking for.
Network-based intrusion prevention system (NIPS):
Network-based intrusion prevention systems monitor
entire networks or network segments for malicious traffic.
This is usually done by analyzing protocol activity. If the
protocol activity matches against a database of known
attacks, the corresponding information isn’t allowed to get
through. NIPS are usually deployed at network boundaries,
behind firewalls, routers, and remote access servers.
Wireless intrusion prevention system
(WIPS): Wireless intrusion prevention systems monitor
wireless networks by analyzing wireless networking
specific protocols. While WIPS are valuable within the
range of an organization’s wireless network, these
systems don’t analyze higher network protocols such as
transmission control protocol (TCP). Wireless intrusion
prevention systems are deployed within the wireless
network and in areas that are susceptible to unauthorized
wireless networking.
Network behavior analysis (NBA) system: While NIPS
analyze deviations in protocol activity, network behavior
analysis systems identify threats by checking for unusual
traffic patterns. Such patterns are generally a result of
policy violations, malware-generated attacks, or
distributed denial of service (DDoS) attacks. NBA systems
are deployed in an organization’s internal networks and at
points where traffic flows between internal and external
networks.
Host-based intrusion prevention system (HIPS):
Host-based intrusion prevention systems differ from the
rest in that they’re deployed in a single host. These hosts
are critical servers with important data or publicly
accessible servers that can become gateways to internal
systems. The HIPS monitors the traffic flowing in and out
of that particular host by monitoring running processes,
network activity, system logs, application activity, and
configuration changes.
The type of IDP system required by an organization depends on
its existing infrastructure and how it plans to scale up in the
future. The techniques used by intrusion detection and prevention
solutions are also an important consideration.
Let’s summarize the types of intrusion detection and prevention
systems.
IDPS Type | Deployed In | Types of Activity Detected
Network-based | Network boundaries; behind firewalls, routers, and remote access servers | Network, transport, and application TCP/IP layer activity
Wireless | Within the wireless network | Wireless protocol activity; unauthorized WLAN use
NBA | Internal networks and points where traffic flows between internal and external networks | Network, transport, and application TCP/IP layer activity with protocol-level anomalies
Host-based | Individual hosts: critical servers or publicly accessible servers | Host application and operating system (OS) activity; network, transport, and application TCP/IP layer activity
Intrusion Detection and Prevention
System Techniques with Examples
IDP systems have two levels of broad functionalities — detection
and prevention. At each level, most solutions offer some basic
approaches.
Detection–level functionalities of IDPS
1. Threshold monitoring
The first step of threshold monitoring consists of setting accepted
levels associated with each user, application, and system
behavior. Examples of metrics that are used during threshold
monitoring include the number of failed login attempts, the
number of downloads from a particular source, or even something
slightly more complicated such as the accepted time of access to
a specific resource.
The monitoring system alerts admins and sometimes triggers
automated responses when a threshold is crossed.
Relying on threshold monitoring alone, rather than full intrusion
detection, comes with its own set of problems. More often than not, the
complex infrastructure underlying an organization's operations
and offerings cannot be reduced to a few metrics. These
threshold values also tend to vary as the company's customer
base and services grow. A very stringent implementation of
threshold monitoring can therefore cause many false
positives. A false positive, in the context of IDP solutions, is when
benign activity is identified as suspicious.
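A minimal Python sketch of threshold monitoring follows; the alert hook and the failed-login limit are hypothetical, illustrative choices rather than recommended values.

from collections import defaultdict

FAILED_LOGIN_THRESHOLD = 5   # accepted level for this metric; illustrative

failed_logins = defaultdict(int)

def alert(user: str) -> None:
    # In a real IDPS this would notify admins or trigger an automated response.
    print(f"ALERT: {user} exceeded {FAILED_LOGIN_THRESHOLD} failed logins")

def record_failed_login(user: str) -> None:
    """Count a failed attempt and alert once the threshold is crossed."""
    failed_logins[user] += 1
    if failed_logins[user] > FAILED_LOGIN_THRESHOLD:
        alert(user)

for _ in range(6):
    record_failed_login("alice")   # sixth attempt triggers the alert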
2. Profiling
Intrusion detection and prevention systems offer two types of
profiling: user profiling and resource profiling.
User profiling involves monitoring if a user with a particular role or
user group only generates traffic that is allowed. For example,
only a DevOps user can have access to the cloud server hosting
applications. A programmer can only access data in a sandbox
server environment. Short-term user profile monitoring allows
administrators to view recent work patterns while long-term
profiling provides an extended view of resource usage. This
comes in handy while creating a baseline for normal behavior and
for creating a user role itself.
Resource profiling measures how each system, host, and
application consumes and generates data. An application with a
suddenly increased workflow might indicate malicious behavior.
Executable profiling tells administrators what kind of programs
are usually installed and run by individual users, applications, and
systems. For example, a host can be running an application that
accesses only certain files. Any other file or a rogue database
request indicates foul play. This kind of profiling makes it easy to
trace malware, ransomware, or Trojan downloaded by mistake.
Sometimes, profiling may make it difficult to interpret overall
network traffic and the normal bumps that come along with it. The sweet
spot for profiling lies between profiles that are too broad, which let
bad actors through, and those too narrow, which hinder productivity.
Prevention–level functionalities of IDPS
1. Stopping the attack
Otherwise known as ‘banishment vigilance’, intrusion prevention
systems prevent incidents before they occur. This is done by
blocking users or traffic originating from a particular IP address. It
also involves terminating or resetting a network connection. For
example, when a particular user is scanning data too frequently,
it makes sense to revoke access until these requests have been
investigated.
2. Security environment changes
This involves changing security configurations to prevent attacks.
An example is the IPS reconfiguring the firewall settings to block a
particular IP address.
3. Attack content modification
Malicious content can be introduced into a system in various
forms. One way of making this content more benign is to remove
the offending segments. A basic example is removing suspicious-
looking attachments in emails. A more intricate example is
repackaging incoming payloads to a common and pre-designed
lot, such as removing unnecessary header information.
Techniques of IDPS
1. Signature-based detection
A signature is a specific pattern in the payload. This specific
pattern can be anything from the sequence of 1s and 0s to the
number of bytes. Most malware and cyberattacks come with their
own identifiable signature. Another example of a signature is
something as simple as the name of the attachment in a
malicious email.
The IDP system maintains a database of known malware
signatures with signature-based detection. Each time new
malware is encountered, this database is updated. The detection
system works by checking the traffic payload against this
database and alerting when there’s a match.
Signature-based detection obviously cannot work if the malware
isn’t previously known. It does not check for the payload’s nature
and cannot give administrators information such as the preceding
request to a malicious response.
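A minimal Python sketch of signature matching follows; the signature database is hypothetical and the byte patterns are illustrative, not real indicators of compromise.

# Hypothetical signature database: byte patterns seen in known malware.
SIGNATURES = {
    b"\x4d\x5a\x90\x00": "win-dropper-a",   # illustrative pattern only
    b"invoice.exe": "phish-attach-b",        # suspicious attachment name
}

def scan_payload(payload: bytes) -> list[str]:
    """Return the names of all known signatures found in the payload."""
    return [name for pattern, name in SIGNATURES.items() if pattern in payload]

hits = scan_payload(b"...attachment=invoice.exe...")
if hits:
    print("ALERT:", hits)   # unknown (zero-day) payloads produce no hits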
2. Anomaly-based detection
Anomaly detection works on threshold monitoring and profiling.
The ‘normal’ behavior of all users, hosts, systems, and
applications is configured. Any deviation from this norm is
considered an anomaly and triggers an alert. For example, if an email
ID generates hundreds of emails within a few hours, the chances
of that email account being hacked are high.
Anomaly detection is better than signature-based detection when
considering new attacks that aren’t in the signature database.
Creating these baseline profiles takes a lot of time (also known as
the ‘training period’). Even then, the rates of false positives may
be high, especially in dynamic environments.
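A minimal Python sketch of anomaly detection against a learned baseline follows, using the standard library's statistics module; the baseline values and the three-sigma cutoff are illustrative assumptions.

import statistics

# Baseline built during the 'training period': emails sent per hour by one account.
baseline = [4, 6, 5, 7, 5, 6, 4, 5]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(observed: float, sigmas: float = 3.0) -> bool:
    """Flag values more than `sigmas` standard deviations from the baseline mean."""
    return abs(observed - mean) > sigmas * stdev

print(is_anomalous(5))     # False: normal hourly volume
print(is_anomalous(400))   # True: hundreds of emails in an hour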
3. Stateful protocol analysis
Anomaly detection uses host- or network-specific profiles to
determine suspicious activity. Stateful protocol analysis goes one
step further and uses the predefined standards of each protocol
state to check for deviations.
For example, a file transfer protocol (FTP) session in the
unauthenticated state allows only login attempts. Once the session is
authenticated, users can view, create, or modify files based on their
permissions. This information is part of the FTP protocol definition.
The intrusion detection system analyzes whether these norms are met.
This kind of stateful protocol analysis makes it easy to keep track of
the authenticator in each session and the subsequent activity
associated with it.
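A toy Python state machine in the spirit of stateful protocol analysis follows; the command sets are a simplified, illustrative subset of FTP, not the full protocol definition.

# Commands permitted in each protocol state, per a simplified FTP model.
ALLOWED = {
    "UNAUTHENTICATED": {"USER", "PASS"},             # only login commands
    "AUTHENTICATED": {"LIST", "RETR", "STOR", "QUIT"},
}

class FtpSessionMonitor:
    def __init__(self):
        self.state = "UNAUTHENTICATED"

    def check(self, command: str) -> bool:
        """Return True if the command is valid in the current protocol state."""
        if command not in ALLOWED[self.state]:
            print(f"ALERT: {command} not allowed while {self.state}")
            return False
        if command == "PASS":   # assume credentials are correct in this sketch
            self.state = "AUTHENTICATED"
        return True

session = FtpSessionMonitor()
session.check("RETR")   # alert: file transfer before login deviates from the standard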
Stateful protocol analysis relies heavily on vendor-driven protocol
definitions. The granular nature means that it is also resource-
intensive, taking up precious bandwidth while tracking
simultaneous sessions. Each of these techniques either ensures
the prevention of incoming attacks or helps administrators spot
security vulnerabilities in their systems. Most IDP solutions offer a
combination of more than one approach.
IV. Network Management:
What is network management?
Network management is the ongoing monitoring, administration, and maintenance of
any networked system of computers. Networks have grown beyond the desktop
computer, now encompassing all manner of end devices (mobile devices, laptops,
printers) and the hardware that facilitates their interaction. From design and
implementation to access control, troubleshooting, equipment replacement, and
managing the end-user experience, network management covers a very broad set of
roles and responsibilities.
How do networks work?
Generally there are seven “layers” that describe the separate
ways communications take place across a network. These
seven layers act as a visual map to understand what’s going on in
a networking system.
Different types of networks
Depending on your needs, including purpose, cost, availability, and
scalability, networks come in many different arrangements. Some of
the most common network configurations include:
Local Area Network (LAN). A LAN is a proprietary computer
network that enables designated users to have exclusive
access to the same system connection at a common location,
typically within an area of less than a mile and most often
within the same building.
Personal Area Network (PAN). A personal area network
(PAN) is a short-range network topology (usually about 30 ft)
designed for an individual's peripheral devices. The
purpose of these types of networks is to transmit data
between devices without necessarily being connected to the
internet.
Wireless Local Area Network (WLAN). Similar to a LAN,
connected devices on these configurations communicate over
wireless (such as Wi-Fi) protocols, rather than physical
connections.
Wide Area Network (WAN). A private network spanning a much
larger area, a WAN (or its software-defined variant, SD-WAN)
allows LANs and other types of networks in different
geographical regions to communicate and transmit data.
Virtual Private Network (VPN). A virtual private network
(VPN) offers users an encrypted connection between two
devices that effectively hides data packets while using the
internet.
What are examples of network management tasks?
In network management, tasks include:
Pushing software updates to devices across the
network: Depending on the capability of the organization’s IT and
the network management system, updates can be pushed to
devices that are integral to the operation of an enterprise network—
such as routers—as well as end-user devices that include printers
and phones.
Performing network maintenance: Network maintenance
involves performing tasks necessary to fix issues as they occur and
upgrade software and hardware vital for the continued operation of
the network.
Network performance monitoring: Network performance
monitoring is done to ensure optimal, continuous performance
of network resources.
Identifying security threats and addressing network
vulnerabilities: Network administrators monitor the network for
signs of potential threats or breaches and use AI tools that alert
them to attacks or possible security risks, which can then be
mitigated or prevented. Types of network security threats include
ransomware and distributed denial of service (DDoS) attacks. Some
examples of network vulnerabilities include hardware that wasn’t
installed properly, insecure passwords and exploitable design flaws
in an operating system.
Enhancing network security: Enhancing network security
includes tasks such as creating firewalls that block suspicious
activity on the network and the enforcement of multifactor
authentication (MFA).
IP address management: Network administrators maintain an
inventory of available and unavailable IP addresses needed
for devices that reside on the network. They assign and unassign IP
addresses as devices are provisioned or de-provisioned from the
network (a minimal sketch follows this list). IP addresses are
sometimes assigned dynamically through a dynamic host configuration
protocol (DHCP) server, which is often found in large enterprise
networks.
Network provisioning: Network administrators provision a
network infrastructure with IT system resources such as bandwidth
and transport channels (cable, broadband, 5G, LTE, satellite and so
on) to enable access between users, end-user devices, IoT devices,
applications and data at wanted performance levels.
Setting network access controls: This is done to regulate how
devices on the edge and applications in cloud environments access
data via the network. For example, an access control may be in
place to prevent sensitive data from being transferred over the
network into a public cloud environment.
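As promised under IP address management above, here is a minimal Python illustration using the standard library's ipaddress module; the subnet and the set of assigned addresses are hypothetical.

import ipaddress

subnet = ipaddress.ip_network("192.168.10.0/28")    # illustrative subnet
assigned = {"192.168.10.1", "192.168.10.2"}         # inventory of in-use addresses

def next_free_address():
    """Return the first available host address, or None if the pool is exhausted."""
    for host in subnet.hosts():
        if str(host) not in assigned:
            assigned.add(str(host))                 # mark as provisioned
            return str(host)
    return None

print(next_free_address())   # 192.168.10.3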
What is a network management protocol?
A network management protocol defines the processes, procedures and
policies for managing, monitoring, and maintaining the network. It is how
network administrators acquire and view information from a network
device regarding availability, network latency, packet/data loss and errors
through a network management system.
A network management system can also collect information from devices
automatically through a network management protocol for automated
tasks such as updating software or performance monitoring. Examples of
network management protocols include:
Simple Network Management Protocol (SNMP): An open
standard protocol that queries each network element and sends
responses to the system for analysis.
Internet Control Message Protocol (ICMP): A TCP/IP network-layer
protocol that provides troubleshooting, control, and error-message
services.
Streaming telemetry: A protocol that transmits key performance
indicators from network devices to the system in real-time.
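To illustrate the polling idea behind these protocols, here is a minimal Python sketch that checks device availability with a TCP reachability probe. A production system would query devices via SNMP (for example, with a dedicated SNMP library) rather than plain sockets; the hosts and ports below are illustrative.

import socket

DEVICES = [("192.0.2.10", 22), ("192.0.2.11", 443)]   # illustrative device list

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """TCP reachability probe; real systems would query SNMP data instead."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in DEVICES:
    status = "up" if is_reachable(host, port) else "down"
    print(f"{host}:{port} is {status}")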
What are the benefits of network management?
The benefits of network management include:
Network visibility: Network operations and engineering teams use
network management systems for centralized monitoring and
performance visibility of their networks and hybrid cloud
environments.
Unplanned downtime detection and prevention: Network
administrators can use AI monitoring tools to detect potential
outages and either prevent the disruption from taking place or set
failover policies that redirect traffic and resources.
Performance optimization: Through the increased visibility and
access to network performance data that network management
systems provide, network operations and engineering teams can
make informed decisions that result in greater network efficiency,
cost-effectiveness, availability and security. Additionally, a
performance-optimized network is also likely to contribute to an
improved user experience due to decreased latency and response
time and improved availability.
V. Databases:
What Is a Database?
Database defined
A database is an organized collection of structured information, or data, typically stored
electronically in a computer system. A database is usually controlled by a database
management system (DBMS). Together, the data and the DBMS, along with the applications
that are associated with them, are referred to as a database system, often shortened to just
database.
Data within the most common types of databases in operation today is typically modeled in
rows and columns in a series of tables to make processing and data querying efficient. The
data can then be easily accessed, managed, modified, updated, controlled, and organized.
Most databases use structured query language (SQL) for writing and querying data.
What is Structured Query Language (SQL)?
SQL is a programming language used by nearly all relational databases to query,
manipulate, and define data, and to provide access control. SQL was first
developed at IBM in the 1970s, with Oracle as a major contributor, which led to
the implementation of the SQL ANSI standard. SQL has since spurred many
extensions from companies such as IBM, Oracle, and Microsoft. Although SQL is
still widely used today, new programming languages are beginning to appear.
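A minimal, runnable illustration of SQL's define/manipulate/query roles, using Python's built-in sqlite3 module; the table and rows are illustrative.

import sqlite3

conn = sqlite3.connect(":memory:")   # throwaway in-memory database
cur = conn.cursor()

# Define data: a table of rows and columns.
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")

# Manipulate data: insert rows.
cur.executemany("INSERT INTO customers (name, city) VALUES (?, ?)",
                [("Asha", "Chennai"), ("Ravi", "Mumbai")])

# Query data: parameterized queries also provide a form of access control
# against SQL injection.
cur.execute("SELECT name FROM customers WHERE city = ?", ("Chennai",))
print(cur.fetchall())                # [('Asha',)]
conn.close()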
Evolution of the database
Databases have evolved dramatically since their inception in the early 1960s. Navigational
databases such as the hierarchical database (which relied on a tree-like model and allowed
only a one-to-many relationship), and the network database (a more flexible model that
allowed multiple relationships), were the original systems used to store and manipulate data.
Although simple, these early systems were inflexible. In the 1980s, relational
databases became popular, followed by object-oriented databases in the 1990s. More
recently, NoSQL databases came about as a response to the growth of the internet and the
need for faster speed and processing of unstructured data. Today, cloud databases and self-
driving databases are breaking new ground when it comes to how data is collected, stored,
managed, and utilized.
What’s the difference between a database
and a spreadsheet?
Databases and spreadsheets (such as Microsoft Excel) are both convenient ways to store
information. The primary differences between the two are:
How the data is stored and manipulated
Who can access the data
How much data can be stored
Spreadsheets were originally designed for one user, and their characteristics reflect that.
They’re great for a single user or small number of users who don’t need to do a lot of
incredibly complicated data manipulation. Databases, on the other hand, are designed to hold
much larger collections of organized information—massive amounts, sometimes. Databases
allow multiple users at the same time to quickly and securely access and query the data using
highly complex logic and language.
Types of databases
There are many different types of databases. The best database for a specific organization
depends on how the organization intends to use the data.
Relational databases
Relational databases became dominant in the 1980s. Items in a relational database are
organized as a set of tables with columns and rows. Relational database technology
provides the most efficient and flexible way to access structured information.
Object-oriented databases
Information in an object-oriented database is represented in the form of objects, as in
object-oriented programming.
Distributed databases
A distributed database consists of two or more files located in different sites. The
database may be stored on multiple computers, located in the same physical location, or
scattered over different networks.
Data warehouses
A central repository for data, a data warehouse is a type of database specifically designed
for fast query and analysis.
NoSQL databases
A NoSQL, or nonrelational database, allows unstructured and semistructured data to be
stored and manipulated (in contrast to a relational database, which defines how all data
inserted into the database must be composed). NoSQL databases grew popular as web
applications became more common and more complex.
Graph databases
A graph database stores data in terms of entities and the relationships between entities.
OLTP databases. An OLTP (online transaction processing) database is a speedy
database designed for large numbers of transactions performed by multiple users.
These are only a few of the several dozen types of databases in use today. Other, less
common databases are tailored to very specific scientific, financial, or other functions. In
addition to the different database types, changes in technology development approaches and
dramatic advances such as the cloud and automation are propelling databases in entirely new
directions. Some of the latest databases include:
Open source databases
An open source database system is one whose source code is open source; such
databases could be SQL or NoSQL databases.
Cloud databases
A cloud database is a collection of data, either structured or unstructured, that resides on
a private, public, or hybrid cloud computing platform. There are two types of cloud
database models: traditional and database as a service (DBaaS). With DBaaS,
administrative tasks and maintenance are performed by a service provider.
Multimodel database
Multimodel databases combine different types of database models into a single,
integrated back end. This means they can accommodate various data types.
Document/JSON database
Designed for storing, retrieving, and managing document-oriented
information, document databases are a modern way to store data in JSON format rather
than rows and columns.
Self-driving databases
The newest and most groundbreaking type of database, self-driving databases (also
known as autonomous databases) are cloud-based and use machine learning to
automate database tuning, security, backups, updates, and other routine management
tasks traditionally performed by database administrators.
What is database software?
Database software is used to create, edit, and maintain database files and records, enabling
easier file and record creation, data entry, data editing, updating, and reporting. The software
also handles data storage, backup and reporting, multi-access control, and security. Strong
database security is especially important today, as data theft becomes more frequent.
Database software is sometimes also referred to as a “database management system”
(DBMS).
Database software makes data management simpler by enabling users to store data in a
structured form and then access it. It typically has a graphical interface to help create and
manage the data and, in some cases, users can construct their own databases by using
database software.
What is a database management system
(DBMS)?
A database typically requires a comprehensive database software program known as a
database management system (DBMS). A DBMS serves as an interface between the database
and its end users or programs, allowing users to retrieve, update, and manage how the
information is organized and optimized. A DBMS also facilitates oversight and control of
databases, enabling a variety of administrative operations such as performance monitoring,
tuning, and backup and recovery.
Some examples of popular database software or DBMSs include MySQL, Microsoft Access,
Microsoft SQL Server, FileMaker Pro, Oracle Database, and dBASE.
What is a MySQL database?
MySQL is an open source relational database management system based on SQL. It was
designed and optimized for web applications and can run on any platform. As new and
different requirements emerged with the internet, MySQL became the platform of choice for
web developers and web-based applications. Because it’s designed to process millions of
queries and thousands of transactions, MySQL is a popular choice for ecommerce businesses
that need to manage multiple money transfers. On-demand flexibility is the primary feature of
MySQL.
MySQL is the DBMS behind some of the top websites and web-based applications in the
world, including Airbnb, Uber, LinkedIn, Facebook, Twitter, and YouTube.
Database challenges
Today’s large enterprise databases often support very complex queries and are
expected to deliver nearly instant responses to those queries. As a result,
database administrators are constantly called upon to employ a wide variety of
methods to help improve performance. Some common challenges that they face
include:
Absorbing significant increases in data volume. The explosion of
data coming in from sensors, connected machines, and dozens of other
sources keeps database administrators scrambling to manage and
organize their companies’ data efficiently.
Ensuring data security. Data breaches are happening everywhere
these days, and hackers are getting more inventive. It’s more important
than ever to ensure that data is secure but also easily accessible to
users.
Keeping up with demand. In today’s fast-moving business
environment, companies need real-time access to their data to support
timely decision-making and to take advantage of new opportunities.
Managing and maintaining the database and
infrastructure. Database administrators must continually watch the
database for problems and perform preventative maintenance, as well
as apply software upgrades and patches. As databases become more
complex and data volumes grow, companies are faced with the
expense of hiring additional talent to monitor and tune their databases.
Removing limits on scalability. A business needs to grow if it’s going
to survive, and its data management must grow along with it. But it’s
very difficult for database administrators to predict how much capacity
the company will need, particularly with on-premises databases.
Ensuring data residency, data sovereignty, or latency
requirements. Some organizations have use cases that are better
suited to run on-premises. In those cases, engineered systems that are
pre-configured and pre-optimized for running the database are ideal.
Customers achieve higher availability, greater performance, and up to
40% lower cost with Oracle Exadata, according to a recent Wikibon
analysis.
Addressing all of these challenges can be time-consuming and can prevent
database administrators from performing more strategic functions.
VI. Data Mining and Big Data:
Big Data:
Big Data refers to vast collections of structured, semi-structured, and
unstructured data, often measured in terabytes or more. It is challenging
to process such a huge amount of data on a single system: the computer's
RAM must hold the interim calculations during processing and analysis, so
processing takes a very long time on one machine, and the system may not
work correctly due to overload.
Here we will understand the concept (how much data is produced) with a
real-world example. We all know about Big Bazaar. As customers, we go to
Big Bazaar at least once a month. These stores monitor every product that
customers purchase from them, and from which store location around the
world. They have a live information feed that stores all the data in huge
central servers. The number of Big Bazaar stores in India alone is around
250; monitoring every single item purchased by every customer, along with
the item description, makes the data grow to around 1 TB in a month.
What does Big Bazaar do with that data?
We know promotions run in Big Bazaar on some items. Do we genuinely
believe Big Bazaar would run those promotions without any analysis to
confirm that they would increase sales and generate a surplus? That is
where Big Data analysis plays a vital role. Using data analysis
techniques, Big Bazaar targets new customers as well as existing
customers to purchase more from its stores.
Big Data is characterized by the 5 Vs: Volume, Variety, Velocity, Veracity, and Value.
Volume: In Big Data, volume refers to the sheer amount of data, which can
be enormous.
Variety: In Big Data, variety refers to the various types of data, such as
web server logs, social media data, and company data.
Velocity: In Big Data, velocity refers to how fast data is growing over
time. In general, data is increasing exponentially at a very fast rate.
Veracity: In Big Data, veracity refers to the uncertainty of data.
Value: In Big Data, value refers to whether the data we are storing and
processing is actually valuable, and how we derive advantage from these
huge data sets.
How to Process Big Data:
Hadoop, an open-source framework based on a distributed parallel
processing model, is the method most widely used for Big Data
processing.
The Apache Hadoop methods are comprised of the
given modules:
Hadoop Common:
It contains the libraries and utilities required by other Hadoop modules.
Hadoop Distributed File System (HDFS):
A distributed file system that stores data on commodity machines,
providing very high aggregate bandwidth across the cluster.
Hadoop YARN:
A resource-management platform responsible for managing resources in
clusters and using them to schedule users' applications.
Hadoop MapReduce:
A programming model for large-scale data processing.
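To make the MapReduce model concrete, here is a minimal sketch in Python (not Hadoop's actual Java API) of the classic word-count job: a map phase emits (word, 1) pairs and a reduce phase groups and sums them.

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in the input split
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(pairs):
    # Shuffle + Reduce: group values by key and sum them
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

documents = ["big data needs big clusters",
             "data mining finds patterns in data"]
pairs = (pair for doc in documents for pair in map_phase(doc))
print(reduce_phase(pairs))  # {'big': 2, 'data': 3, ...}
```

On a real cluster, the map tasks run in parallel on the nodes holding each HDFS block, and the framework handles the shuffle between the two phases.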
Data Mining:
As the name suggests, Data Mining refers to the mining of huge data sets
to identify trends and patterns and to extract useful information.
In data mining, we look for hidden information, but without any idea
about exactly what type of data we are looking for or what we plan to
use it for once we find it. When we discover interesting information, we
start thinking about how to make use of it to boost business.
We will understand the data mining concept with an example:
A data miner starts exploring the call records of a mobile network
operator without any specific target from his manager, beyond a broad
objective to discover at least a few new patterns in a month. As he
begins extracting the data, he discovers a pattern: there are more
international calls on Fridays (for example) compared to all other days.
He shares this finding with management, who come up with a plan to
reduce international call rates on Fridays and start a campaign. Call
duration goes up, customers are happy with the low call rates, more
customers join, and the organization makes more profit because the
utilization percentage has increased.
There are various steps involved in Data Mining:
Data Integration:
In the first step, data is collected and integrated from various sources.
Data Selection:
We may not be able to use all the collected data at once, so in this
step we select only the data we think will be useful for data mining.
Data Cleaning:
In this step, the information we have collected is not clean and may
contain errors, noisy or inconsistent values, and missing entries, so we
apply various strategies to get rid of such problems.
Data Transformation:
Even after cleaning, the data is not ready for mining, so we need to
transform it into structures suitable for mining. The methods used to
achieve this include aggregation, normalization, and smoothing.
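As a small illustration of this step, the following Python sketch shows two of the techniques named above, min-max normalization and smoothing with a moving average; the sample values are invented.

```python
def min_max_normalize(values, new_min=0.0, new_max=1.0):
    # Rescale raw values into the range [new_min, new_max]
    lo, hi = min(values), max(values)
    return [new_min + (v - lo) * (new_max - new_min) / (hi - lo)
            for v in values]

def moving_average(values, window=3):
    # Smoothing: replace each point with the average of its neighbourhood
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

salaries = [30000, 45000, 60000, 120000]
print(min_max_normalize(salaries))           # [0.0, 0.166..., 0.333..., 1.0]
print(moving_average([10, 12, 95, 11, 13]))  # the spike at 95 is damped
```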
Data Mining:
Once the data has been transformed, we are ready to apply data mining
methods to extract useful patterns from the data sets. Clustering and
association rule mining are among the many techniques used for data
mining.
Pattern Evaluation:
Pattern evaluation includes visualizing the generated patterns, removing
random patterns, and transforming them into useful forms.
Decision:
It is the last step in data mining. It helps users apply the acquired
knowledge to make better, data-driven decisions.
Difference Between Data Mining and Big Data:
Data Mining vs Big Data:
Data Mining primarily targets the analysis of data to extract useful information; Big Data primarily targets the data relationships.
Data Mining can be used for large as well as low volumes of data; Big Data by definition involves huge volumes of data.
Data Mining is a method primarily used for data analysis; Big Data is a whole concept rather than a single technique.
Data Mining is primarily based on statistical analysis, generally targeting prediction and the discovery of business factors on a small scale; Big Data is primarily based on data analysis, generally targeting prediction and the discovery of business factors on a large scale.
Data Mining uses structured, relational, and dimensional databases; Big Data uses structured, semi-structured, and unstructured data.
Data Mining expresses the "what" of the data; Big Data addresses the "why" of the data.
Data Mining gives the closest view of the data; Big Data gives a broad view of the data.
Data Mining is primarily used for strategic decision-making purposes; Big Data is primarily used for dashboards and predictive measures.
VII: Security Requirements of Databases:
What Is Database Security?
Database security is a set of practices and technologies used to
protect database management systems from malicious
cyberattacks and unauthorized use. Database security is a
complex task that combines several information security
disciplines—application security, data security, and endpoint
security.
The goal of database security is to protect against misuse, data
corruption, and intrusion, not only of the data in the database, but
of the data management system itself and applications that
access the database. Another aspect of database security is
protecting and hardening the physical or virtual server hosting
the database, and the surrounding computing and network
environment.
Control Methods of Database Security
Database security means keeping sensitive information safe and
preventing the loss of data. Database security is controlled by the
Database Administrator (DBA).
The following are the main control measures used to provide security of
data in databases:
1. Authentication
2. Access control
3. Inference control
4. Flow control
5. Database Security applying Statistical Method
6. Encryption
These are explained below.
1. Authentication :
Authentication is the process of confirming that a user logs in only
according to the rights provided to them to perform database activities.
A particular user can log in only up to their privilege level and cannot
access other sensitive data; the privilege of accessing sensitive data
is restricted by using authentication.
By using authentication tools based on biometrics, such as retina and
fingerprint scans, the database can be protected from
unauthorized/malicious users.
2. Access Control :
The security mechanism of a DBMS must include provisions for restricting
access to the database by unauthorized users. Access control is enforced
by creating user accounts and controlling the login process through the
DBMS, so that access to sensitive data is possible only for those
database users who are allowed to access it and is restricted for
unauthorized persons.
The database system must also keep track of all operations performed by
each user throughout the entire login session.
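As a toy illustration of the idea (not how a real DBMS implements it), the following Python sketch checks operations against a hypothetical in-memory access control list; the user and table names are invented.

```python
# Hypothetical ACL: maps each user to the operations they may
# perform on each table (all names are illustrative only)
ACL = {
    "alice": {"employees": {"SELECT"}, "salaries": {"SELECT", "INSERT"}},
    "bob":   {"employees": {"SELECT"}},
}

def is_allowed(user, table, operation):
    # Permit the operation only if it was granted explicitly
    return operation in ACL.get(user, {}).get(table, set())

print(is_allowed("bob", "salaries", "SELECT"))    # False - never granted
print(is_allowed("alice", "salaries", "INSERT"))  # True
```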
3. Inference Control :
This method is the countermeasure to the statistical database security
problem. It is used to prevent the user from completing any inference
channel and protects sensitive information from indirect disclosure.
Inferences are of two types: identity disclosure and attribute
disclosure.
4. Flow Control :
This prevents information from flowing in ways that allow it to reach
unauthorized users. Pathways through which information flows implicitly,
in ways that violate a company's privacy policy, are called covert
channels.
5. Database Security Applying Statistical Methods :
Statistical database security focuses on protecting the confidential
individual values stored in a database while still allowing it to be
used for statistical purposes, such as retrieving summaries of values by
category. It does not permit retrieval of individual information.
For example, a user may query the database for statistical information,
such as the number of employees in the company, but not for detailed
confidential/personal information about a specific individual employee.
6. Encryption :
This method is mainly used to protect sensitive data such as credit card
numbers and one-time passwords (OTPs). The data is encoded using an
encryption algorithm.
An unauthorized user who tries to access the encoded data will have
difficulty decoding it, while authorized users are given decryption keys
to decode the data.
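As a hedged example of this control, the sketch below encrypts a single sensitive field using the Fernet recipe from Python's third-party cryptography package (installed with pip install cryptography); the card number is fictional.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the decoding key, given only to authorized users
cipher = Fernet(key)

card_number = b"4111 1111 1111 1111"   # fictional example value
token = cipher.encrypt(card_number)    # what gets stored in the database
print(token)                           # unreadable without the key
print(cipher.decrypt(token))           # b'4111 1111 1111 1111'
```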
What are the challenges of database security?
Security concerns for internet-based attacks are some of the most persistent
challenges to database security. Hackers devise new ways to infiltrate databases
and steal data almost daily. You must ensure your database security measures are
strong enough to withstand these attacks and avoid a security breach.
Some cybersecurity threats can be difficult to detect, like phishing scams in which
user credentials are compromised and used without permission. Malware
and ransomware are also common cybersecurity threats.
Another critical challenge for database security is making sure that your employees,
partners, and contractors with database access don’t abuse their credentials. This is
sometimes referred to as an insider threat. These exfiltration vulnerabilities are
difficult to guard against because authorized users with legitimate access can take
sensitive data for their own purposes. Edward Snowden’s compromise of the NSA is
a good example of this challenge.
Organizations like yours must also make sure that users with legitimate access to
database systems and applications are only privy to the protected data that they
need for work. Otherwise, there’s greater potential for them to compromise data
security.
How you can deploy database security
There are three layers of database security: the database level, the access level,
and the perimeter level. Security at the database level occurs within the database
itself, where the data live. Access layer security focuses on controlling who can
access certain data or systems containing it. Security policy at the perimeter level
determines who can and cannot get into databases. Each level requires unique
security solutions.
Security solutions by level:
Database level: masking, tokenization, encryption.
Access level: access control lists, permissions.
Perimeter level: firewalls, virtual private networks.
Database Security Threats
Many software vulnerabilities, misconfigurations, and patterns of
misuse or carelessness can result in breaches. Here are some of the
best-known causes and types of database security threats.
Insider Threats
An insider threat is a security risk from one of the following three
sources, each of which has privileged means of entry to the
database:
A malicious insider with ill-intent
A negligent person within the organization who exposes the
database to attack through careless actions
An outsider who obtains credentials through social engineering or
other methods, or gains access to the database’s credentials
An insider threat is one of the most common causes of database security
breaches, and it often occurs because too many employees have been
granted privileged user access.
Human Error
Weak passwords, password sharing, accidental erasure or corruption of
data, and other careless user behaviors are still the cause of almost
half of all reported data breaches.
Exploitation of Database Software Vulnerabilities
Attackers constantly attempt to isolate and target vulnerabilities
in software, and database management software is a highly
valuable target. New vulnerabilities are discovered daily, and all
open source database management platforms and commercial
database software vendors issue security patches regularly.
However, if you don’t apply these patches quickly, your database might
be exposed to attack.
Even if you do apply patches on time, there is always the risk
of zero-day attacks, in which attackers exploit a vulnerability that the
database vendor has not yet discovered or patched.
SQL/NoSQL Injection Attacks
A database-specific threat is the insertion of arbitrary SQL or NoSQL
attack strings into database queries. Typically, these strings arrive
through web application forms or HTTP requests. Any database system is
vulnerable to these attacks if developers do not adhere to secure coding
practices and the organization does not carry out regular vulnerability
testing.
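The following Python sketch, using the standard-library sqlite3 module and an invented users table, contrasts a query built by string concatenation (injectable) with a parameterized query (safe).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"   # a classic injection string

# UNSAFE: concatenation lets the attacker rewrite the query itself
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())   # returns rows it never should

# SAFE: a parameterized query treats the input as data, not as SQL
print(conn.execute("SELECT * FROM users WHERE name = ?",
                   (user_input,)).fetchall())   # returns []
```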
Buffer Overflow Attacks
A buffer overflow takes place when a process tries to write more data to
a fixed-length block of memory than it is permitted to hold. Attackers
may use the excess data, stored in adjacent memory addresses, as a
starting point from which to launch attacks.
Denial of Service (DoS/DDoS) Attacks
In a denial of service (DoS) attack, the cybercriminal overwhelms
the target service—in this instance the database server—using a
large amount of fake requests. The result is that the server
cannot carry out genuine requests from actual users, and often
crashes or becomes unstable.
In a distributed denial of service attack (DDoS), fake traffic is
generated by a large number of computers, participating in
a botnet controlled by the attacker. This generates very large
traffic volumes, which are difficult to stop without a highly
scalable defensive architecture. Cloud-based DDoS
protection services can scale up dynamically to address very
large DDoS attacks.
Malware
Malware is software written to take advantage of vulnerabilities or
to cause harm to a database. Malware could arrive through any
endpoint device connected to the database’s network. Malware
protection is important on any endpoint, but especially so on
database servers, because of their high value and sensitivity.
An Evolving IT Environment
The evolving IT environment is making databases more
susceptible to threats. Here are trends that can lead to new types
of attacks on databases, or may require new defensive measures:
Growing data volumes—data capture, storage, and processing are growing
exponentially across almost all organizations. Any data security
practices or tools must be highly scalable to address near- and
distant-future requirements.
Distributed infrastructure—network environments are
increasing in complexity, especially as businesses transfer
workloads to hybrid cloud or multi-cloud architectures, making the
deployment, management, and choice of security solutions more
difficult.
Increasingly tight regulatory requirements—the worldwide regulatory
compliance landscape is growing in complexity, so following all mandates
is becoming more challenging.
Cybersecurity skills shortage—there is a global shortage of
skilled cybersecurity professionals, and organizations are finding
it difficult to fill security roles. This can make it more difficult to
defend critical infrastructure, including databases.
How Can You Secure Your Database Server?
A database server is a physical or virtual machine running the
database. Securing a database server, also known as
“hardening”, is a process that includes physical security, network
security, and secure operating system configuration.
Ensure Physical Database Security
Refrain from sharing a server for web applications and database
applications, if your database contains sensitive data. Although it
could be cheaper, and easier, to host your site and database
together on a hosting provider, you are placing the security of
your data in someone else’s hands.
If you do rely on a web hosting service to manage your database,
you should ensure that it is a company with a strong security
track record. It is best to stay clear of free hosting services due to
the possible lack of security.
If you manage your database in an on-premises data center, keep
in mind that your data center is also prone to attacks from
outsiders or insider threats. Ensure you have physical security
measures, including locks, cameras, and security personnel in
your physical facility. Any access to physical servers must be
logged and only granted to authorized individuals.
In addition, do not leave database backups in locations that are
publicly accessible, such as temporary partitions, web folders, or
unsecured cloud storage buckets.
Lock Down Accounts and Privileges
Let’s consider the Oracle database server. After the database is
installed, the Oracle database configuration assistant (DBCA)
automatically expires and locks most of the default database user
accounts.
If you install an Oracle database manually, this doesn’t happen
and default privileged accounts won’t be expired or locked. Their
password stays the same as their username, by default.
An attacker will try to use these credentials first to connect to the
database.
It is critical to ensure that every privileged account on a database
server is configured with a strong, unique password. If accounts
are not needed, they should be expired and locked.
For the remaining accounts, access has to be limited to the
absolute minimum required. Each account should only have
access to the tables and operations (for example, SELECT or
INSERT) required by the user. Avoid creating user accounts with
access to every table in the database.
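As a sketch of applying this least-privilege principle, the snippet below assumes a PostgreSQL server reached through the psycopg2 driver; the role name, table name, and connection details are placeholders, not a prescription.

```python
import psycopg2  # assumes PostgreSQL and the psycopg2 driver

# Connection parameters are illustrative placeholders
conn = psycopg2.connect(dbname="appdb", user="admin", password="***")
cur = conn.cursor()

# Least privilege: the reporting account may only read one table
cur.execute("CREATE ROLE report_user LOGIN PASSWORD 'use-a-strong-password'")
cur.execute("GRANT SELECT ON orders TO report_user")
# Note: no INSERT/UPDATE/DELETE, and no access to any other table

conn.commit()
cur.close()
conn.close()
```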
Regularly Patch Database Servers
Ensure that patches remain current. Effective database patch
management is a crucial security practice because attackers are
actively seeking out new security flaws in databases, and
new viruses and malware appear on a daily basis.
Timely deployment of up-to-date database service packs, critical
security hotfixes, and cumulative updates will improve the stability and
security of your database.
Disable Public Network Access
Applications access the database on behalf of users; in most real-world
scenarios, the end user doesn’t require direct access to the database.
Thus, you should block all public network access to database servers
unless you are a hosting provider. Ideally, an organization should set
up gateway servers (VPN or SSH tunnels) for remote administrators.
Encrypt All Files and Backups
Irrespective of how solid your defenses are, there is always a
possibility that a hacker may infiltrate your system. Yet, attackers
are not the only threat to the security of your database. Your
employees may also pose a risk to your business. There is always
the possibility that a malicious or careless insider will gain access
to a file they don’t have permission to access.
Encrypting your data makes it unreadable to both attackers and
employees. Without an encryption key they cannot access it, which
provides a last line of defense against unwelcome intrusions.
Encrypt all-important application files, data files, and backups so
that unauthorized users cannot read your critical data.
VIII: Reliability and Integrity:
What is Database Reliability?
Database reliability is defined broadly to mean that the database
performs consistently without causing problems. More
specifically, it means that there is accuracy and consistency of
data. For data to be considered reliable, there must be:
Data integrity, which means that all data in the database is
accurate and that there is consistency throughout the data. Data
consistency is defined broadly to include the type and amount
of data.
Data safety, which means that only authorized individuals
access the database. Data security also includes preventing
any type of data corruption and ensuring that data is always
accessible. When it comes to data safety, engineers must
ensure that data is accessible even in the event of unforeseen
circumstances, such as emergencies or natural disasters.
Data recoverability, which means there are effective
procedures in place to recover any lost data. This is a key to
database reliability, ensuring that even if other safety
measures fail, there is a system for recovering data.
The Importance of Database Reliability:
Organizational databases store a broad range of information,
including customer information, sales information, financial
transactions, vendor information, and employee records. This
information is essential for maintaining the health of
organizations and plays a central role in everything from
competitive strategy to daily logistics. In many ways, data works
as the eyes and ears of the organization, and without it,
organizations lack the necessary information to make informed
decisions. It’s the database that makes this information
accessible and usable.
If an organization’s database is not reliable, consistent, or
accurate, it can lead to making bad or misinformed decisions.
Further, as the database is a central part of organizational
infrastructure, if it goes down, it can lead to substantial issues
throughout the organization. This means that database reliability
is and must remain a central concern for businesses.
Yet, in the current environment, data problems are increasingly
complex, making it continually difficult to create, manage, and
manipulate databases. Given the importance of database
reliability and the increased demands that come with database
management, it’s important that organizations have advanced
and innovative approaches to ensuring database reliability. A
couple of such approaches that organizations should consider
implementing are database reliability engineering and the use of
effective database management systems.
The Role of the Database Reliability Engineer (DBRE)
First and foremost, the Database Reliability Engineer, or DBRE, is
an enabler that allows other data and software engineers to work
efficiently without causing problems. The DBRE allows engineers
to work within data shares while also ensuring that all data is
protected, reliable and accessible. In addition to this central role
as an enabler, database reliability engineers:
Utilize automation. A big part of database reliability
engineering is automating tasks. Particularly important is
automating safety operations, including failovers, backups,
and back-pressure mechanisms. It’s this critical automation
that lets engineers work quickly and efficiently without having
to worry about losing or messing up data. These measures
help to protect data and to encourage innovation among
engineers.
Conduct risk analysis. Whenever considering automation,
database management, or utilizing new tools, it’s important to
conduct a thorough risk analysis. It’s a DBRE’s role to consider
potential risks and then to make informed decisions.
Make decisions about scaling. It’s the role of the DBRE to
anticipate capacity needs and to make timely decisions about
scaling. Doing so helps to maintain database reliability,
ensuring that the database is meeting organizational needs.
Educate other engineers. Part of the DBRE role is
knowledge sharing and educating other data software
engineers on everything from the database to the
organization’s domain to best practices.
IX: Data Disclosure:
Data disclosure refers to the unauthorised release or
sharing of sensitive information with unintended parties,
often due to human error or improper handling. This might
involve unintentional actions, such as sending an email
containing sensitive data to the wrong recipient or
mistakenly publishing confidential information on a public
platform.
Example of a data disclosure incident:
In 2019, a healthcare employee inadvertently posted
patient medical records on a public forum while seeking
technical assistance. This incident exposed patient names,
medical conditions, and other confidential data.
Implications of data disclosure:
Data disclosure can lead to privacy violations such as a
violation of the Health Insurance Portability and
Accountability Act (HIPAA), loss of customer trust,
reputational damage, regulatory penalties, and legal
actions. Mishandling sensitive data erodes an
organisation's credibility and can result in significant
financial losses.
Mitigation of data disclosure:
Employee Training: Regularly train staff about
data handling procedures and the importance of
confidentiality.
Data Classification: Label data with levels of
sensitivity to prevent accidental leakage.
Access Controls: Implement strict access controls
to limit who can view and share sensitive data.
Encryption: Encrypt sensitive data at rest and in
transit to prevent unauthorised access.
Data Loss Prevention (DLP) Solutions: Use DLP
tools to monitor and prevent the unauthorised
movement of sensitive data.
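To illustrate the detection idea behind the DLP tools listed above, here is a deliberately simple Python sketch that scans outgoing text for patterns resembling sensitive data; real DLP products use far more sophisticated detection, and these regular expressions are rough approximations.

```python
import re

# Rough patterns for illustration only
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_outgoing(text):
    # Flag any message that appears to contain sensitive data
    return [label for label, pattern in PATTERNS.items()
            if pattern.search(text)]

msg = "Hi, my card is 4111 1111 1111 1111, reach me at bob@example.com"
print(scan_outgoing(msg))   # ['credit_card', 'email']
```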
Data Breach:
A data breach involves the unauthorised access,
acquisition, or retrieval of sensitive data by malicious
actors. Data breaches are often intentional, targeting
vulnerabilities in an organisation's security infrastructure to
steal valuable information.
Example of a data breach:
In 2020, a social media giant suffered a data breach where
hackers exploited a software vulnerability to access user
accounts. This breach compromised millions of user
profiles, including personal details and private messages.
Implications of a data breach:
Data breaches can result in severe financial losses,
customer mistrust, regulatory fines, legal liabilities, and
long-term reputational damage. Stolen data might be sold
on the dark web, leading to identity theft and fraud.
Mitigation of a data breach:
Vulnerability management: Regularly assess
and patch software vulnerabilities to reduce the
attack surface.
Intrusion Detection and Prevention: Implement
advanced intrusion detection systems to identify
and thwart unauthorised access attempts.
Multi-factor Authentication (MFA): Require
MFA for user accounts to add an extra layer of
security.
Incident Response Plan: Develop a
comprehensive plan to address and contain
breaches promptly.
Cyber Insurance: Consider obtaining cyber
insurance to help mitigate the financial risks
associated with breaches.
Common Types of Information Disclosure Vulnerabilities
1. Error Messages Revealing Sensitive Data
Information leaks can occur when error messages provide
too much detail. Attackers may exploit these messages to
gain insight into the application’s structure or to access
sensitive data.
2. Directory Listing
Inadequate configuration can lead to directory listings
being accessible to anyone. This exposes the internal file
structure of the web application, making it easier for
attackers to identify potential targets.
3. Improperly Secured APIs
APIs often handle sensitive data. When APIs lack proper
authentication and authorization controls, attackers can
intercept and manipulate data transmission.
4. Insecure Configuration Files
Configuration files containing credentials or other
sensitive information should be kept secure. However, if
they are not adequately protected, attackers can access
and exploit this information.
5. Verbose Error Handling
Verbose error messages can inadvertently provide
attackers with insights into the application’s inner
workings, potentially aiding them in crafting targeted
attacks.
Mitigating Information Disclosure Vulnerabilities:
Preventing Information Disclosure Vulnerabilities is
critical to maintaining the security and integrity of web
applications. Here are essential steps to mitigate these
vulnerabilities:
1. Error Handling and Messaging
Implement custom error messages that reveal minimal
information and avoid disclosing sensitive details. Log
errors securely to aid in troubleshooting without
compromising security.
2. Directory Listing Prevention
Disable directory listing in web server configurations to
prevent unauthorized access to directory contents.
3. API Security
Implement strong authentication and authorization
controls for APIs. Use secure protocols like OAuth or
API keys to protect data transmission.
4. Secure Configuration Management
Store configuration files containing sensitive data
outside of the web root directory and restrict access
permissions. Encrypt sensitive information within
configuration files.
5. Minimize Verbose Error Messages
Configure your application to display generic error
messages to users while logging detailed error
information securely for administrators.
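Putting points 1 and 5 together, here is a minimal sketch, assuming the Flask web framework, of a catch-all handler that logs full error details for administrators while showing users only a generic message.

```python
from flask import Flask
import logging

app = Flask(__name__)
logging.basicConfig(filename="app-errors.log", level=logging.ERROR)

@app.errorhandler(Exception)
def handle_error(exc):
    # Log the full details privately for troubleshooting...
    app.logger.exception("Unhandled error: %s", exc)
    # ...but reveal nothing about the application's internals to the user
    return "Something went wrong. Please try again later.", 500
```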
UNIT IV :PRIVACY IN CYBERSPACE
Privacy Concepts -Privacy Principles and Policies -Authentication and Privacy - Data Mining
- Privacy on the Web - Email Security - Privacy Impacts of Emerging Technologies
Introduction to Data Privacy
In the past few decades, one topic in information security has been
grabbing everyone’s attention: data privacy. Digitalization has
dematerialized our world; we live in a digital reality. Historically,
people have always been concerned about their privacy, and with the
dematerialization of culture, personal data privacy is evolving into
something entirely new.
Let us start by defining data privacy.
Data privacy, also called information privacy, is a subset of
security that focuses on personal information. Data privacy
governs how data is collected, shared, and used. Data privacy is
concerned with the proper handling of sensitive information such
as financial data and intellectual property data.
Components of Data Privacy
The components of data privacy include:
Management of data risk
To reduce data risk, companies manage the acquisition, filing,
modification, and handling of their data, from the point of creation to
retirement.
Data loss prevention
The data loss prevention (DLP) process includes identifying
confidential data, tracking that data throughout the organization,
and creating and enforcing policies to prevent unauthorized
disclosure of private data.
Password management
Managing passwords involves principles and best practices that
should be adhered to by users when storing passwords.
The 28th of January of every year is International Data Privacy
Day. Its mission is to build awareness and promote proper data
collection, privacy, and protection practices. International Data
Privacy Day is observed in many countries, including the United States,
Canada, Australia, and India. The date commemorates the signing in 1981
of Convention 108, the first binding international treaty on data
protection.
Why is Data Privacy Important?
Now that we understand what data privacy is, let us learn why
data privacy is vital. Privacy concerns arise through the mass
collection of data. Many organizations are keeping our data due to
the computerized operations in place. Many countries consider
privacy an essential personal right, and data protection
regulations exist to preserve it. Additionally, data privacy is crucial
because people need to trust that their information is being handled
responsibly in order to flourish online.
In the absence of privacy or controlled access to personal data,
personal information can be misused in many ways:
Under oppressive governments, people are not free to control when
and to whom their identities are revealed.
With the advent of the General Data Protection Regulation (GDPR),
the importance of data privacy has grown even further.
Data Privacy Principles
The six data protection principles cover the lifecycle of a piece of
personal data from collection, retention, use, and destruction.
Collection purpose and means
Personal data is collected for an intent that is directly related to
the data users’ function or activity. It must also be collected
legally and equitably. When personal data is collected, the
purpose for which the data is used must be disclosed to the data
subjects. Data collection should, of course, be necessary but not
excessive.
Accuracy and retention
Data users must ensure personal data is accurate and should not
be kept longer than necessary.
Use
Private data must be used for the purpose for which the data is
collected or for a directly related purpose. It should not be used
for any other purposes unless voluntary and explicit consent is
obtained from the data subject.
Security
Moreover, data users need to adopt security measures to
safeguard personal data from unauthorized and accidental
access, processing, and loss of use.
Openness
Data users must make personal data policies and practices known
to the public, regarding the types of personal data they hold and
how the data is used.
Data access and corrections
Data subjects have the right to request access to and correction
of their data.
If a data user contravenes these six data protection principles,
then the privacy commissioner may serve an enforcement notice
on it.
Data Privacy Act
The Congress of the Philippines passed Republic Act No. 10173 in
2012. The Data Privacy Act protects individuals from unauthorized
processing of personal information that is private and not publicly
available.
Data Privacy Laws
A data privacy law specifies how data should be collected, stored,
and shared with third parties. Among the most widely discussed privacy
laws is the European Union’s GDPR, the most comprehensive privacy law to
date.
Data privacy laws in the US
In the US, regulators such as the Federal Trade Commission can take
action against organizations that:
Fail to implement and maintain reasonable data security
measures.
Fail to abide by any applicable self-regulatory principles of
the organization’s industry.
Fail to follow a published privacy policy.
Transfer personal information in a manner not disclosed on
the privacy policy.
Make inaccurate privacy and security representations to
consumers and in privacy policies.
Challenges Faced in Data Privacy
Cybercrime
Cybercrime refers to any criminal conduct committed with the aid
of a computer or other electronic equipment connected to the
internet. Individuals or small groups of people with little technical
knowledge and highly organized worldwide criminal groups with
relatively talented developers and specialists can engage in
cybercrime.
Data breaches
A data breach happens when sensitive data falls into the hands of
someone who has no business handling it. So, if a hacker extracts
your credit card credentials, it’s a data breach. But the release of
data can also be unintentional.
Insider threat
It is an act of malicious activity undertaken by users who have
legitimate access to a network, application, or database of an
organization.
Technologies for Data Privacy
Cyber security
Cyber security involves the practice of implementing multiple
layers of security and protection against digital attacks across
computers, devices, systems, and networks.
Encryption
Data of any kind can be kept secret through a process known as
encryption. Scrambling and changing the message to hide it helps
to secure our data. This technique is widely used in data privacy.
Access control
Access control is a security technique that regulates who or what can
view or use resources in a computing environment.
Using data loss prevention (DLP) in conjunction with access
control can help prevent sensitive data from leaving the network.
Two-factor authentication
Two-factor authentication (2FA) adds a second method of identity
verification to secure your accounts. The most common 2FA uses
a unique one-time code with every login attempt. This code is tied
to your account and generated by a token. Consequently, hackers
have a much harder time accessing personal accounts.
Data Privacy vs Data Security
Data privacy means being sensitive to personal information based on collected data; data security refers to the process of protecting data from unauthorized access and corruption.
Data privacy concentrates on how to meet standards when collecting, processing, sharing, archiving, and deleting data; data security prevents the exploitation of stolen data and includes features such as network access control, cryptography, and information systems security.
Examples of data privacy concerns: protected health information, geolocation, and financial transactions. Examples of data security measures: access control, backup and recovery, and tokenization.
II. Authentication and Privacy:
What is Authentication?
Authentication is a term that refers to the process of proving that some fact or some
document is genuine. In computer science, this term is typically associated with
proving a user’s identity. Usually, a user proves their identity by providing their
credentials, that is, an agreed piece of information shared between the user and the
system.
Authentication with Username and Password
Username and password combination is the most popular authentication
mechanism, and it is also known as password authentication.
A well-known example is accessing a user account on a website or a service
provider such as Facebook or Gmail. Before you can access your account, you must
prove you own the correct login credentials. Services typically present a screen that
asks for a username along with a password. Then, they compare the data inserted
by the user with the values previously stored in an internal repository.
If you enter a valid combination of these credentials, the service provider will allow
you to continue and will give you access to your account.
While the username may be public, like, for example, an email address, the password
must be confidential. Because of this confidentiality, passwords must be protected
from theft by cybercriminals. In fact, although usernames and passwords are widely
used on the internet, they are notorious for being a weak security mechanism that
hackers exploit regularly.
The first way to protect them is by enforcing password strength, that is, a level of
complexity so that malicious attackers cannot easily guess them. As a rule of thumb,
a complex combination of lowercase and uppercase letters, numbers, and special
characters results in a strong password. Otherwise, a poor combination of characters
leads to a weak password.
End users notoriously tend to use weak passwords. In an annual report
from SplashData, an internet security firm, they identified the 25 most common
passwords. The list, based on millions of passwords exposed by data breaches,
shows that millions of users rely on passwords like "123456" and "password" to
authenticate.
Passwords are also an issue when not securely stored. For example, in a recent
news report, Facebook was shown to have stored millions of Instagram passwords in
plain text. Passwords should always be stored using best practices, such
as hashing.
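A minimal sketch of that best practice, using only Python's standard library: each password is hashed with a per-user random salt via PBKDF2 and verified with a constant-time comparison. The iteration count is illustrative.

```python
import hashlib, hmac, os

def hash_password(password, salt=None):
    # A fresh random salt per user defeats precomputed rainbow-table attacks
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    _, digest = hash_password(password, salt)
    # Constant-time comparison avoids timing side channels
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("123456", salt, stored))                        # False
```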
Authentication Factors
A specific category of credentials, such as username and password, is
usually called an authentication factor. Even though password
authentication is the most well-known type of authentication, other
authentication factors exist. The three types of authentication factors
are typically classified as follows: something you know (e.g., a
password), something you have (e.g., a smartphone or security token),
and something you are (e.g., a fingerprint).
Passwordless Authentication
As the name says, passwordless authentication is an authentication mechanism that
doesn’t use a password. The primary motivation for this type of authentication is to
mitigate password fatigue, that is the effort required for the user to remember and
keep secure a strong password.
Removing the need to memorize passwords also helps to make phishing attacks
useless.
You can do passwordless authentication with any authentication factor based on
what you have and what you are. For example, you can let the user access a service
or an application by sending a code via email or through facial recognition.
Importance of Authentication
Cyberattacks are a critical threat to organizations today. As
more people work remotely and cloud computing becomes
the norm across industries, the threat landscape has
expanded exponentially in recent years. As a result, 94% of
enterprise organizations have experienced a data
breach—and 79% were breached in the last two years,
according to a recent study by the Identity Defined Security
Alliance (IDSA).
Additionally, research by Cybersecurity Insiders found
that 90% of survey respondents experienced phishing
attacks in 2020, and another 29% experienced credential
stuffing and brute force attacks—resulting in significant
helpdesk costs from password resets.
With global cybercrime costs expected to grow by
15% per year over the next five years, reaching $10.5 trillion
USD annually by 2025, it’s more important than ever for
organizations to protect themselves.
As a result, authentication has become an increasingly
important mitigation strategy to reduce risk and protect
sensitive data. Authentication helps organizations and users
protect their data and systems from bad actors seeking to
gain access and steal (or exploit) private information. These
systems can include computer systems, networks, devices,
websites, databases, and other applications and services.
Organizations that invest in authentication as part of an
identity and access management (IAM) infrastructure
strategy enjoy multiple benefits, including:
Limiting data breaches
Reducing and managing organizational costs
Achieving regulatory compliance
How Does Authentication Work?
Basic authentication involves proving a user is who they say
they are through authentication methods such as a username
and password, biometric information such as facial
recognition or fingerprint scans, and phone or text
confirmations (which are most often used as part of two-
factor authentication methods).
But how does authentication work on the backend?
For identity authentication with a login and password (the
most common form of authentication), the process is fairly
straightforward:
1. The user creates a username and password to log in to
the account they want to access. Those logins are then
saved on the server.
2. When that user goes to log in, they enter their unique
username and password and the server checks those
credentials against the ones saved in its database. If
they match, the user is granted access.
Keep in mind that many applications use cookies to
authenticate users after the initial login so they don’t have to
keep signing in to their account every time. Each period
during which a user can log in without having to re-
authenticate is called a session. In order to keep a session
open, an app will do two things when the user logs in the first
time:
1. Create a token (a string of unique characters) that is
tied to the account.
2. Assign a cookie to the browser with the token attached.
When the user goes to load a secure page, the app will check
the token in the browser cookie and compare it to the one in
its database. If they match, the user maintains access
without having to re-enter their credentials.
Eventually, the app destroys the token on the server, causing
the user’s session to timeout. The advantage of this type of
authentication is that it creates a streamlined user
experience and saves time for the user. However, it also
means that the device or browser the user is logged in on is
vulnerable if it falls into the wrong hands.
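A minimal sketch of this token-based session flow in Python, using an in-memory dictionary as a stand-in for the server-side session store; the 30-minute lifetime is an illustrative assumption.

```python
import secrets, time

SESSIONS = {}                # token -> (username, expiry)
SESSION_LIFETIME = 30 * 60   # seconds

def create_session(username):
    token = secrets.token_urlsafe(32)   # unguessable random token
    SESSIONS[token] = (username, time.time() + SESSION_LIFETIME)
    return token                        # sent to the browser in a cookie

def check_session(token):
    username, expires = SESSIONS.get(token, (None, 0))
    if time.time() > expires:
        SESSIONS.pop(token, None)       # destroy the token: session timeout
        return None
    return username

token = create_session("alice")
print(check_session(token))            # 'alice' - session still open
print(check_session("forged-token"))   # None - no matching token on the server
```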
Types of Authentication
Single-Factor Authentication
Single-factor authentication (SFA) or one-factor
authentication involves matching one credential to gain
access to a system (i.e., a username and a password).
Although this is the most common and well-known form of
authentication, it is considered low-security and the
Cybersecurity and Infrastructure Security Agency (CISA)
recently added it to its list of Bad Practices.
The main weakness is that single-factor authentication
provides just one barrier. Hackers only need to steal the
credentials to gain access to the system. And practices such
as password reuse, admin password sharing, and relying on
default or otherwise weak passwords make it that much
easier for hackers to guess or obtain them.
Two-Factor Authentication
Two-factor authentication (2FA) adds a second layer of
protection to your access points. Instead of just one
authentication factor, 2FA requires two factors of
authentication out of the three categories:
Something you know (i.e., username and password)
Something you have (e.g., a security token or smart
card)
Something you are (e.g., TouchID or other biometric
credentials)
Keep in mind that although a username and password are
two pieces of information, they are both knowledge factors,
so they are considered one factor. In order to qualify as two-
factor authentication, the other authentication method must
come from one of the other two categories.
2FA is more secure because even if a user’s password is
stolen, the hacker will have to provide a second form of
authentication to gain access—which is much less likely to
happen.
Three-Factor Authentication
Three-factor authentication (3FA) requires identity-confirming
credentials from three separate authentication factors (i.e.,
one from something you know, one from something you
have, and one from something you are). Like 2FA, three-
factor authentication is a more secure authentication process
and adds a third layer of access protection to your accounts.
Multi-Factor Authentication
Multi-factor authentication (MFA) refers to any process that
requires two or more factors of authentication. Two-factor
and three-factor authentication are both considered multi-
factor authentication.
Single Sign-On Authentication
Single sign-on (SSO) authentication allows users to log in and
access multiple accounts and applications using just one set
of credentials. We see this most commonly in practice with
companies like Facebook or Google, which allow users to
create and sign in to other applications using their Google or
Facebook credentials. Basically, applications outsource the
authentication process to a trusted third party (such as
Google), which has already confirmed the user’s identity.
SSO can improve security by simplifying username and
password management for users, and it makes logging in
faster and easier. It can also reduce helpdesk time focused
on resetting forgotten passwords. Plus, administrators can
still centrally control requirements like MFA and password
complexity, and it can be easier to retire credentials after a
user leaves the organization.
One-Time Password
A one-time password (OTP) or one-time PIN (sometimes
called a dynamic password) is an auto-generated password
that is valid for one login session or transaction. OTP is often
used for MFA. For instance, a user will start to log in with
their username and password, which then triggers the
application to send an OTP to their registered phone or email.
The user can then input that code to complete the
authentication and sign in to their account.
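A minimal sketch of the server side of an OTP flow in Python; the in-memory store, six-digit format, and two-minute lifetime are illustrative assumptions.

```python
import secrets, time

PENDING_OTPS = {}   # username -> (code, expiry)
OTP_TTL = 120       # a one-time code is only valid briefly

def send_otp(username):
    code = f"{secrets.randbelow(1_000_000):06d}"   # 6 random digits
    PENDING_OTPS[username] = (code, time.time() + OTP_TTL)
    return code   # in practice, delivered via SMS or email

def verify_otp(username, submitted):
    code, expires = PENDING_OTPS.get(username, (None, 0))
    if code and submitted == code and time.time() <= expires:
        del PENDING_OTPS[username]   # single use: invalidate after success
        return True
    return False

code = send_otp("alice")
print(verify_otp("alice", code))   # True
print(verify_otp("alice", code))   # False - already consumed
```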
Passwordless Authentication
Passwordless authentication, as the name suggests,
doesn’t require a password or other knowledge-based
authentication factor. Typically, the user will enter their ID
and will then be prompted to authenticate through a
registered device or token. Passwordless authentication is
often used in conjunction with SSO and MFA to improve the
user experience, reduce IT administration and complexity,
and strengthen security.
Certificate-Based Authentication
Certificate-based authentication (CBA) uses a digital
certificate to identify and authenticate a user, device, or
machine. A digital certificate, also known as a public-key
certificate, is an electronic document that stores the public
key data, including information about the key, its owner, and
the digital signature verifying the identity. CBA is often used
as part of a two-factor or multi-factor authentication process.
Biometrics
Biometric authentication relies on biometrics like fingerprints,
retinal scans, and facial scans to confirm a user’s identity. To
do this, the system must first capture and store the biometric
data. And then when the user goes to log in, they present
their biometric credentials and the system compares them to
the biometric data in their database. If they match, they’re
in.
PRIVACY:
Privacy is the right of individuals to keep certain aspects of their personal
lives and information confidential or protected from intrusion. It
encompasses controlling access to one's personal information,
maintaining confidentiality in communications, and having the ability to
make choices about how data is collected, used, and shared.
In the digital age, privacy concerns have become increasingly important
due to the vast amount of personal information that individuals generate
and share online. This includes data such as browsing history, social
media activity, location information, and personal preferences. Issues like
data breaches, surveillance, identity theft, and unauthorized use of
personal information have raised awareness about the need for strong
privacy protections.
Various laws and regulations have been enacted to safeguard individuals'
privacy rights, such as the European Union's General Data Protection
Regulation (GDPR) and the California Consumer Privacy Act (CCPA) in the
United States. These regulations typically require organizations to be
transparent about their data practices, obtain consent for data collection
and processing, and provide individuals with options to control their
personal information.
Additionally, individuals can take steps to protect their privacy online,
such as using encryption, regularly updating privacy settings, being
cautious about sharing sensitive information, and using privacy-focused
tools and services.
Overall, privacy is essential for maintaining autonomy, dignity, and
personal security in today's interconnected world. It's a fundamental
human right that needs to be respected and protected at both individual
and societal levels.
III. Data Mining:
We are living in an information-rich, data-driven world. While it’s
comforting to know there’s a plethora of readily available knowledge, the
sheer volume creates challenges. The more information available, the
longer it can take to find the useful insights you need.
That’s why today we’re discussing data mining. We’ll be exploring all
aspects of data mining, including what it means, its stages, data mining
techniques, the benefits it offers, data mining tools, and more. Let’s kick
things off with a data mining definition, then tackle data mining concepts
and techniques.
We will now begin by understanding what is data mining.
What is Data Mining?
Typically, when someone talks about “mining,” it involves people wearing
helmets with lamps attached to them, digging underground for natural
resources. And while it could be funny picturing guys in tunnels mining for
batches of zeroes and ones, that doesn't exactly answer “what is data
mining.”
Data mining is the process of analyzing enormous amounts of information
and datasets, extracting (or “mining”) useful intelligence to help
organizations solve problems, predict trends, mitigate risks, and find new
opportunities. Data mining is like actual mining because, in both cases,
the miners are sifting through mountains of material to find valuable
resources and elements.
Data mining also includes establishing relationships and finding patterns,
anomalies, and correlations to tackle issues, creating actionable
information in the process. Data mining is a wide-ranging and varied
process that includes many different components, some of which are even
confused for data mining itself. For instance, statistics is a portion of the
overall data mining process, as explained in this data mining vs.
statistics article.
Additionally, both data mining and machine learning fall under the general
heading of data science, and though they have some similarities, each
process works with data in a different way. If you want to know more
about their relationship, read up on data mining vs. machine learning.
Data mining is sometimes called Knowledge Discovery in Data, or KDD.
Data Mining History
For millennia, people have excavated places to find hidden mysteries.
"Knowledge discovery in databases" refers to the act of sifting through
data to uncover hidden relationships and forecast future trends. In the
1990s, the phrase "data mining" was coined. Data mining emerged from
the convergence of three scientific disciplines: artificial intelligence,
machine learning, and statistics.
Artificial intelligence is the human-like intelligence demonstrated by
software and machines, machine learning is the term used to describe
algorithms that can learn from data to create predictions, and statistics is
the numerical study of data correlations.
Data mining takes advantage of big data's infinite possibilities and
inexpensive processing power. Processing power and speed have grown
significantly in the recent decade, allowing the globe to undertake rapid,
easy, and automated data analysis.
Data Mining Steps
When asking “what is data mining,” let’s break it down into the steps data
scientists and analysts take when tackling a data mining project.
1. Understand Business
What is the company’s current situation, the project’s objectives, and
what defines success?
2. Understand the Data
Figure out what kind of data is needed to solve the issue, and then collect
it from the proper sources.
3. Prepare the Data
Resolve data quality problems like duplicate, missing, or corrupted data,
then prepare the data in a format suitable to resolve the business
problem.
4. Model the Data
Employ algorithms to ascertain data patterns. Data scientists create, test,
and evaluate the model.
5. Evaluate the Data
Decide whether, and how effectively, the results delivered by a particular
model will help meet the business goal or remedy the problem. There is
sometimes an iterative phase of shopping around among several data
mining algorithms to find the best one, especially if the data scientists
don't get it quite right the first time (a short sketch of this
model-and-evaluate loop follows the final step).
6. Deploy the Solution
Give the results of the project to the people in charge of making decisions.
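To make steps 4 and 5 concrete, here is a minimal sketch of the model-and-evaluate loop, assuming the scikit-learn library is installed. The dataset is a synthetic stand-in for prepared business data, and the two candidate algorithms are illustrative choices, not the only options.

```python
# A minimal sketch of the "model and evaluate" steps, assuming
# scikit-learn is installed; X and y stand in for prepared data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

# Stand-in for a prepared dataset (the output of step 3).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# "Shop around" among candidate algorithms and compare their scores.
candidates = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```

Whichever candidate scores best under cross-validation would then be the model handed over for deployment in step 6.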
To extend our learning on what data mining is, we will next look at some
examples, followed by the benefits it offers.
Examples of Data Mining
The following are a few real-world examples of data mining:
Shopping Market Analysis
In the shopping market, there is a huge quantity of data, and the analyst
must manage enormous amounts of data that exhibit various patterns.
Market basket analysis is the modeling approach used for this study.
Market basket analysis is based on the notion that if you purchase one
set of products, you are more likely to purchase another set of items.
This strategy may help a retailer understand a buyer's purchasing habits.
Using differential analysis, data from different businesses and from
consumers in different demographic groups may be compared.
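To illustrate the idea, here is a toy, pure-Python sketch of the support and confidence calculations behind market basket analysis. The transactions and the bread-and-butter rule are invented for the example; this is not a production implementation.

```python
# Toy market basket analysis: support and confidence of the rule
# {bread} -> {butter} over a small, invented set of transactions.
transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "jam"},
    {"milk", "eggs"},
]

def support(itemset):
    """Fraction of transactions that contain every item in itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

sup = support({"bread", "butter"})   # joint support of the rule
conf = sup / support({"bread"})      # confidence: P(butter | bread)
print(f"support = {sup:.2f}, confidence = {conf:.2f}")
# support = 0.50, confidence = 0.67 -> bread buyers often buy butter
```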
Weather Forecasting Analysis
For prediction, weather forecasting systems rely on massive amounts of
historical data. Because massive amounts of data are being processed,
the appropriate data mining approach must be used.
Stock Market Analysis
In the stock market, there is a massive amount of data to be analyzed. As
a result, data mining techniques are utilized to model such data in order
to do the analysis.
Intrusion Detection
Data mining can help enhance intrusion detection by focusing on
anomaly detection. It assists an analyst in distinguishing unusual
network activity from normal network activity.
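As a rough illustration of anomaly detection, the sketch below flags traffic counts that deviate sharply from a learned baseline. The counts, the z-score method, and the threshold are all simplifying assumptions; real intrusion detection systems use far richer features.

```python
# Toy anomaly detection: flag observations whose z-score against a
# baseline exceeds a threshold. Traffic counts here are invented.
import statistics

# Requests per minute observed on a network segment (hypothetical).
baseline = [52, 48, 50, 47, 53, 49, 51, 50]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(value, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    return abs(value - mean) / stdev > threshold

for observed in (51, 220):
    label = "ANOMALY" if is_anomalous(observed) else "normal"
    print(f"{observed} requests/min -> {label}")
```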
Fraud Detection
Traditional techniques of fraud detection are time-consuming and difficult
due to the amount of data. Data mining aids in the discovery of relevant
patterns and the transformation of data into information.
Surveillance
Video surveillance is used practically everywhere in everyday life for
security purposes. Because a huge volume of captured footage must be
processed, data mining is employed in video surveillance.
Financial Banking
With each new transaction in computerized banking, a massive amount of
data is expected to be created. By identifying patterns, causalities, and
correlations in corporate data, data mining may help solve business
challenges in banking and finance.
What Are the Benefits of Data Mining?
Since we live and work in a data-centric world, it’s essential to get as
many advantages as possible. Data mining provides us with the means of
resolving problems and issues in this challenging information age. Data
mining benefits include:
o It helps companies gather reliable information
o It's an efficient, cost-effective solution compared to other data
applications
o It helps businesses make profitable production and operational
adjustments
o Data mining uses both new and legacy systems
o It helps businesses make informed decisions
o It helps detect credit risks and fraud
o It helps data scientists quickly analyze enormous amounts of data
o Data scientists can use the information to detect fraud, build risk
models, and improve product safety
o It helps data scientists quickly initiate automated predictions of
behaviors and trends and discover hidden patterns
Challenges of Implementation in Data Mining
Because data handling technology is always improving, leaders confront
several obstacles in addition to scalability and automation, as described
below:
Distributed Data
Real-world data is saved on several platforms, such as databases,
individual systems, or the Internet, and often cannot be transferred to a
centralized repository. Regional offices may have their own servers to
store their data, but storing the data from all offices centrally may be
impossible. As a result, tools and algorithms for mining distributed data
must be created.
Complex Data
It takes a long time and considerable money to process big amounts of
complicated data. Real-world data comes in structured, unstructured,
semi-structured, and heterogeneous forms, including multimedia such as
photos, audio, video, natural language text, and time series, making it
challenging to extract essential information from the many sources on a
LAN or WAN.
Domain Knowledge
It is simpler to dig out information with domain expertise; without it,
collecting useful information from data can be tough.
Data Visualization
Data visualization is the first point of interaction that presents results
to the client. The information must be conveyed with the relevance
appropriate to its intended use, yet presenting information accurately to
the end user is difficult. To make the information meaningful, effective
output presentation, well-chosen input data, and methods for perceiving
complicated data must be used.
Incomplete Data
Large amounts of data may be imprecise or unreliable owing to problems
with measurement equipment. Customers who refuse to disclose their
personal information leave datasets incomplete, and updates lost to
system failures introduce noise, all of which makes the data mining
procedure difficult.
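As a small illustration of coping with incomplete data during preparation, the sketch below deduplicates records, fills missing values, and drops rows that remain unusable. It assumes the pandas library is available, and the records themselves are invented.

```python
# A minimal data-cleaning sketch, assuming pandas is installed;
# the customer records below are invented for illustration.
import pandas as pd

records = pd.DataFrame({
    "customer": ["ana", "ben", "ben", "cara"],
    "age": [34, None, None, 29],      # missing: customer declined to say
    "spend": [120.0, 80.0, 80.0, None],
})

records = records.drop_duplicates()           # remove duplicate rows
records["age"] = records["age"].fillna(records["age"].median())
records = records.dropna(subset=["spend"])    # drop rows still unusable
print(records)
```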
Security and Privacy
Decision-making that relies on data exchange requires security for
people, organizations, and the government. Private and sensitive
information about individuals is gathered for customer profiles in order
to better understand user activity trends. Illegal access to this
information, and preserving its confidentiality, are significant issues
here.
Higher Costs
The expenses linked with purchasing and maintaining powerful servers,
software, and hardware for handling massive amounts of data can be
prohibitive.
Performance Issues
The performance of a data mining system is determined by the methods
and techniques it uses. Large database volumes and continuous data
flows make efficiency a challenge, motivating the development of
parallel and distributed data mining methods.
User Interface
The knowledge uncovered via data mining technologies is beneficial only
if it is engaging and clear to the user. Appropriate visualization and
interpretation of mining findings can help users comprehend customer
requirements, discover trends, and present and refine data mining
requests depending on the results.
Data Mining Applications
Data mining is a useful and versatile tool for today’s competitive
businesses. Here are some data mining examples, showing a broad range
of applications.
Banks
Data mining helps banks work with credit ratings and anti-fraud systems,
analyzing customer financial data, purchasing transactions, and card
transactions. Data mining also helps banks better understand their
customers’ online habits and preferences, which helps when designing a
new marketing campaign.
Healthcare
Data mining helps doctors create more accurate diagnoses by bringing
together every patient’s medical history, physical examination results,
medications, and treatment patterns. Mining also helps fight fraud and
waste and bring about a more cost-effective health resource management
strategy.
Marketing
If there was ever an application that benefitted from data mining, it’s
marketing! After all, marketing’s heart and soul is all about targeting
customers effectively for maximum results. Of course, the best way to
target your audience is to know as much about them as possible. Data
mining helps bring together data on age, gender, tastes, income level,
location, and spending habits to create more effective personalized loyalty
campaigns. Data mining can even predict which customers are more
likely to unsubscribe from a mailing list or other related service. Armed
with that information, companies can take steps to retain those
customers before they get the chance to leave!
IV. Privacy on the web:
People use websites for several important tasks such as banking,
shopping, entertainment, and paying their taxes. In doing so, they are
required to share personal information with those sites. Users place a
certain level of trust in the sites they share their data with. If that
information fell into the wrong hands, it could be used to exploit users, for
example by profiling them, targeting them with unwanted ads, or even
stealing their identity or money.
Modern browsers already have a wealth of features to protect users'
privacy on the web, but that's not enough. To create a trustworthy and
privacy-respecting experience, developers need to educate their site
users in good practices (and enforce them). Developers should also create
sites that collect as little data from users as possible, use the data
responsibly, and transport and store it securely.
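One possible way to hold less raw personal data is pseudonymization. The sketch below, which assumes nothing beyond Python's standard library, replaces an email address with a salted hash so records can still be linked without the raw address being stored. This is only one technique among many, not a complete privacy solution.

```python
# A sketch of pseudonymizing an identifier before storage: keep a
# salted hash instead of the raw address, so records remain linkable.
import hashlib
import secrets

SALT = secrets.token_bytes(16)  # keep this secret and stable per deployment

def pseudonymize(email: str) -> str:
    """Return a salted SHA-256 digest standing in for the raw address."""
    return hashlib.sha256(SALT + email.lower().encode("utf-8")).hexdigest()

print(pseudonymize("alice@example.com"))  # store this, not the address
```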
Privacy and its relationship with security
It is hard to talk about privacy without also talking about security — they are closely related,
and you can't really create privacy-respecting websites without good security. Therefore, we
shall define both.
Privacy refers to the act of giving users the right to control how their data is collected,
stored, and used, and not using it irresponsibly. For example, you should clearly
communicate to your users what data you are collecting, who it will be shared with, and how
it will be used. Users must be given a chance to consent to your terms of data usage, have
access to all of their data that you are storing, and delete it if they no longer wish you to
have it. You must also comply with your own terms: nothing erodes user trust like having
their data used and shared in ways they never consented to. And this isn't just ethically
wrong; it could be against the law. Many parts of the world now have legislation that
protects consumer privacy rights (for example the EU's GDPR).
Security is the act of keeping private data and systems protected against unauthorized
access. This includes both company (internal) data, and user and partner (external) data. It is
no use having a robust privacy policy that makes your users trust you if your security is weak
and malicious parties can steal their data anyway.
Personal and private information
Personal information is any information that describes a user. Examples include:
o Physical attributes such as height, gender expression, weight, hair color, or age
o Postal address, email address, phone number, or other contact information
o Passport number, bank account, credit card, social security number, or other official identifiers
o Health information such as medical history, allergies, or ongoing conditions
o Usernames and passwords
o Hobbies, interests, or other personal preferences
o Biometric data such as fingerprints or facial recognition data
Private information is any information that users do not want shared publicly and must be
kept private (i.e., information that is accessible only by a certain group of authorized users).
Some private data is private by law (for example medical data), and some is private more by
personal preference.
V: Email Security
What is Email Security?
Email (short for electronic mail) is a digital method by which we
exchange messages between people over the internet or other
computer networks. With its help, we can send and receive
text-based messages, often with attachments such as documents,
images, or videos, from one person or organization to another.
It was one of the first applications developed for the internet and
has since become one of the most widely used forms of digital
communication. It is an essential part of personal and professional
communication, as well as of marketing, advertising, and customer
support.
In this article, we will understand the concept of email security,
how we can protect our email, email security policies, and email
security best practices, along with a built-in Gmail feature,
confidential mode, that we can use to protect email from
unauthorized access.
Email Security:
Basically, email security refers to the steps by which we protect
email messages and the information they contain from
unauthorized access and damage. It involves ensuring the
confidentiality, integrity, and availability of email messages, as well
as safeguarding against phishing attacks, spam, viruses, and
other forms of malware. It can be achieved through a combination
of technical and non-technical measures.
Steps to Secure Email:
We can take the following actions to protect our email.
Choose a secure password that is at least 12 characters
long and contains uppercase and lowercase letters, digits,
and special characters (a simple programmatic check is
sketched after this list).
Activate two-factor authentication, which adds an
additional layer of security to your email account by
requiring a code in addition to your password.
Use encryption, which scrambles your email messages so
that only the intended receiver can decipher them. Email
encryption can be done using programs like PGP or
S/MIME.
Keep your software up to date. Ensure that the most recent
security updates are installed on your operating system
and email client.
Beware of phishing scams: Hackers try to steal your
personal information by pretending to be someone else.
Be careful of emails that request private information or
contain suspicious links, because these are common
vehicles for phishing attacks.
Choose a trustworthy email service provider: Search
for a service provider that protects your data using
encryption and other security measures.
Use a VPN: Using a VPN can help protect our email by
encrypting our internet connection and disguising our IP
address, making it more difficult for hackers to intercept
our emails.
Upgrade Your Application Regularly: People now
frequently access their email accounts through apps,
although these tools are not perfect and can be taken
advantage of by hackers. A cybercriminal might use a
vulnerability, for example, to hack accounts and steal data
or send spam mail. Because of this, it’s important to
update your programs frequently.
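Here is the simple programmatic password check promised in the first step above: a minimal sketch that enforces the 12-character, mixed-character-class rule. The function name and test strings are illustrative.

```python
# A minimal password-strength check: at least 12 characters with
# uppercase, lowercase, digit, and special characters all present.
import re

def is_strong_password(password: str) -> bool:
    """Check the length and all four required character classes."""
    return (
        len(password) >= 12
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[0-9]", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

print(is_strong_password("Tr0ub4dor&3x!"))  # True
print(is_strong_password("password1234"))   # False: no uppercase/special
```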
Email Security Policies
The email policies are a set of regulations and standards for
protecting the privacy, accuracy, and accessibility of email
communication within the organization. An email security policy
should include the following essential components:
Appropriate Use: The policy should outline what
comprises acceptable email usage inside the organization,
including who is permitted to use email, how to use it, and
for what purposes email may be used.
Password and Authentication: The policy should require
strong passwords and two-factor authentication to ensure
that only authorized users can access email accounts.
Encryption: To avoid unwanted access, the policy should
mandate that sensitive material be encrypted before being
sent through email.
Virus Protection: The policy should require that email
messages and their attachments be scanned for viruses
and other malware.
Retention and Deletion: The policy should outline how
long email messages and their attachments ought to be
kept available, as well as when they should be removed.
Training: The policy should demand that all staff members
take a course on email best practices, which includes how
to identify phishing scams and other email-based threats.
Incident Reporting: The policy should outline the
reporting and investigation procedures for occurrences
involving email security breaches or other problems.
Monitoring: The policy should outline the procedures for
monitoring email communications to ensure that the policy
is being followed, including any logging or auditing that will
be carried out.
Compliance: The policy should ensure compliance with all
essential laws and regulations, including the Health
Insurance Portability and Accountability Act (HIPAA) and
the General Data Protection Regulation (GDPR).
Enforcement: The policy should specify the consequences
for violating the email security policy, including disciplinary
action and legal consequences if necessary.
Hence, organizations may help safeguard sensitive information and
lower the risk of data breaches and other security incidents by
creating an email security strategy.
Now, let's look at how to enable confidential mode in our Gmail
account. Gmail has a feature called confidential mode that we may
use to safeguard our email. These are the steps to use this
feature:
Step 1: On your computer, go to Gmail and click Compose.
Step 2: If you have already enabled confidential mode for an
email, click Edit in the bottom right corner of the window to add an
expiration date and a passcode. These settings affect both the
message text and any attachments.
If you select "No SMS passcode," recipients using the Gmail app will
be able to open the message directly, while those who don't use
Gmail will receive an email with a passcode.
On the other hand, if you select "SMS passcode," recipients will
get a passcode by text message; for this, you have to provide the
recipient's phone number.
Why Is Email Security Important?
Email has been the primary communication tool in the workplace for over
two decades. Research shows the average employee receives over 120
emails a day. This provides opportunities for cybercriminals to steal
valuable information using business email compromise (BEC) attacks,
phishing campaigns, and other methods.
An astounding 94% of cyberattacks start with malicious email messages.
According to the FBI’s Internet Crime Complaint Center (IC3), cybercrime
costs the US more than $12.5 billion per year, of which $2.9 billion were
related to business email compromise (BEC) or email account compromise
(EAC). The negative consequences of email-based attacks can include
significant financial loss, data loss, and reputational damage.
How Secure Is Email?
Email is designed to be as open and accessible as possible. It allows
people in an organization to communicate with other employees, with
people in other organizations, and with other third parties. The problem is
that this openness is exploited by attackers. From spam campaigns to
malware, phishing attacks, and business email compromise, attackers take
advantage of email security weaknesses. Since most organizations rely on
email to do business, attackers misuse email to steal sensitive information.
Because email is an open format, anyone who can intercept it can view it,
which increases email security concerns. This becomes a problem when
organizations send sensitive and confidential information via email. Without
special protective measures, attackers can intercept email messages and
easily read their contents. Over the years, organizations have stepped up
their email security measures to make it more difficult for attackers to
access sensitive and confidential information, and use emails for nefarious
purposes.
Common Threats to Email Security
Phishing
Phishing attacks are the most prevalent and common threat to email
security. One of the earliest phishing attacks was the Nigerian Prince
Scam. Today this type of attack is easy to spot, but over time, phishing
attacks have become more sophisticated. Attackers send more
sophisticated emails with more plausible excuses and scams.
Phishing attacks can be either generic or targeted. Also known as spear
phishing attacks, these targeted attacks are well researched and designed
to trick specific individuals or groups who have special privileges or access
to valuable information.
Quishing
Quishing is QR code phishing. When a malicious URL is hidden behind a
QR code, the link becomes an image file, not a clickable element.
Traditional email security systems like secure email gateways (SEGs) and
even the most modern email security solutions scan for suspicious links in
the email body of the message to prevent phishing attacks (relying on
domain reputation and other indicators), but may overlook embedded URLs
within images or file attachments.
Quishing campaigns present a unique challenge to defenders. By
embedding the phishing link within a QR code, the threat is effectively
concealed, rendering security measures ineffective and allowing malicious
emails to slip through and reach the inbox of targeted end users.
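One possible countermeasure is to decode QR codes found in image attachments and feed the recovered URLs into the same reputation checks used for clickable links in the message body. The sketch below assumes the third-party Pillow and pyzbar libraries are installed, and "attachment.png" is a hypothetical image extracted from an email.

```python
# A sketch of a quishing countermeasure: extract any URL hidden in
# a QR-code image so it can be checked like an ordinary link.
# Assumes the third-party Pillow and pyzbar libraries are installed.
from PIL import Image
from pyzbar.pyzbar import decode

for symbol in decode(Image.open("attachment.png")):
    url = symbol.data.decode("utf-8", errors="replace")
    # Hand the recovered URL to the same reputation checks applied
    # to clickable links in the message body.
    print("QR code resolves to:", url)
```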
Malware
Email is an ideal delivery mechanism for malware. Malware can be directly
attached to emails, embedded in documents that are shared as
attachments, or shared through cloud-based storage. Once malware is
installed on a user’s computer, it can steal sensitive information or encrypt
files.
Spam
Unsolicited bulk email, also known as spam, often contains
advertisements for goods and services, but it can also spread malware,
trick recipients into giving away personal information, and result in
financial loss. Spammers often use software
programs called “harvesters” to gather information from websites,
newsgroups, and other online services where users identify themselves by
email address.
Spam wastes resources and productivity, and can cause significant
damage to organizations, making it critical to filter and block spam emails
before they reach corporate email accounts.
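As a toy illustration of spam filtering, the sketch below scores messages against a small keyword list. The words, weights, and threshold are invented; real gateways rely on far more sophisticated statistical and reputation-based filters.

```python
# A toy keyword-based spam scorer; words and weights are invented.
SPAM_WEIGHTS = {"winner": 3, "free": 2, "urgent": 2, "click": 1}
THRESHOLD = 4

def spam_score(message: str) -> int:
    """Sum the weights of known spammy words found in the message."""
    words = message.lower().split()
    return sum(SPAM_WEIGHTS.get(w, 0) for w in words)

msg = "URGENT winner click here for your free prize"
score = spam_score(msg)
print(score, "-> spam" if score >= THRESHOLD else "-> ham")
```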
Data Loss
Email accounts can contain vast amounts of confidential information. They
can also be used to access cloud-based infrastructure and other online
services. An attacker can use these accounts to gain access to sensitive
information, making email account credentials a common target for
attacks.
Additionally, information in email accounts could be inadvertently disclosed
by an employee who includes an unauthorized party in an email chain or
falls victim to a phishing attack.
Authentication Attacks on Email Servers
Sometimes, the email server itself can become the target of attackers.
Attackers typically use brute force attacks or credential stuffing to gain
access to an email server. This grants them access to all email messages
and attachments stored in the server, and allows them to perform
convincing phishing attacks by impersonating email users.
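A common server-side mitigation is to throttle or lock accounts after repeated failed logins. The following sketch shows one simple in-memory version of that idea; the five-attempt budget and 15-minute lockout are illustrative policy choices, and a real server would persist this state and also apply per-IP limits.

```python
# A minimal sketch of login throttling against brute-force and
# credential-stuffing attacks; the policy numbers are illustrative.
import time

MAX_ATTEMPTS = 5
LOCKOUT_SECONDS = 15 * 60
failures = {}  # username -> (failure count, time of first recent failure)

def allow_login_attempt(username: str) -> bool:
    """Refuse further attempts once the failure budget is spent."""
    count, first = failures.get(username, (0, 0.0))
    if count >= MAX_ATTEMPTS and time.time() - first < LOCKOUT_SECONDS:
        return False
    return True

def record_failure(username: str) -> None:
    """Count a failed login, remembering when the streak began."""
    count, first = failures.get(username, (0, time.time()))
    failures[username] = (count + 1, first)
```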
VI: Privacy Impacts of Emerging Technologies:
The privacy impacts of emerging technologies are significant and
multifaceted, encompassing various aspects such as data collection,
surveillance, and ethical considerations. Emerging technologies like
artificial intelligence (AI), facial recognition, biometric data collection, and
the Internet of Things (IoT) raise concerns about data privacy and
security. These technologies have the potential to gather vast amounts of
personal data, leading to privacy breaches and challenges in protecting
individuals' rights to privacy. The rapid development of AI algorithms, for
instance, relies heavily on data, raising questions about the sources of
data, consent for data use, and potential algorithmic biases that can
infringe on privacy rights. Facial recognition technology, while offering
quick identification benefits, also poses privacy risks and civil liberty
concerns, necessitating stringent regulations to protect individuals'
privacy rights. Additionally, the IoT, with its interconnected devices,
collects and relays user data, creating vulnerabilities that can be
exploited, highlighting the need for robust security measures and privacy
regulations.
Some examples of emerging technologies that pose privacy concerns
include:
1. Artificial Intelligence (AI): AI systems that analyze vast amounts
of data raise significant privacy concerns due to the collection and
processing of personal data.
2. Facial Recognition Technology: Facial recognition technology,
while offering quick identification benefits, raises privacy and civil
liberty concerns, necessitating stringent regulations to protect
individuals' privacy rights.
3. Biometric Data Collection: Technologies that collect biometric
data, such as fingerprints, iris scans, and DNA profiles, for
authentication and identification purposes raise privacy concerns
regarding the safeguarding of sensitive personal information.
These technologies, while offering various benefits, also bring about
challenges related to data privacy, security, and ethical considerations
that need to be carefully addressed to protect individuals' privacy rights in
the digital age.
Emerging technologies such as artificial intelligence (AI),
blockchain, and the Internet of Things (IoT) are having a
significant impact on cyber security.
1. Artificial Intelligence: AI is being used to create
more advanced and sophisticated cyber attacks. AI-
powered malware can evade traditional security
measures, making it harder to detect and defend
against. Additionally, AI-based systems can be used
to automate the process of identifying
vulnerabilities and of spotting potential attacks. On
the other hand, AI is also being used to improve
cyber security, for example, through the
development of AI-based intrusion detection
systems and threat intelligence platforms.
2. Blockchain: Blockchain technology is being used to
improve the security of transactions and protect
against data breaches. Blockchain's decentralized
architecture and cryptographic security make it
difficult for attackers to tamper with data, and the
immutability of the records on the blockchain makes
it difficult to conceal any breaches. However,
blockchain is still an emerging technology, and it
may have its vulnerabilities that could be exploited
by attackers.
3. Internet of Things: IoT devices are becoming
increasingly common in homes and businesses,
and they can be a major security concern. Many IoT
devices have poor security, and they can be easily
compromised by attackers. This can lead to data
breaches and the compromise of other systems on
the network. Additionally, the sheer number of IoT
devices and the amount of data they generate can
make it difficult to detect and respond to security
incidents. Organizations should ensure that their
IoT devices are properly secured, including using
strong passwords, keeping software up-to-date,
and monitoring for unusual activity.
Overall, emerging technologies are bringing new
opportunities and challenges for cyber security. As these
technologies continue to evolve, it's important for
organizations to stay informed and adapt their security
strategies to stay ahead of the latest threats.