
2017 IEEE 10th International Conference on Cloud Computing

A Security Benchmark for OpenStack

Marco Anisetti, Claudio A. Ardagna, Filippo Gaudenzi
Dipartimento di Informatica
Università degli Studi di Milano
Crema, Italy, 26013
Email: [email protected]

Ernesto Damiani
ETISALAT BT Innovation Center
Khalifa University
Abu Dhabi, UAE
Email: [email protected]

Abstract—The cloud computing paradigm entails a radical change in IT provisioning, which must be understood and correctly applied, especially when security requirements are considered. Security requirements no longer cover just the application itself, but involve the whole cloud supply chain, from the hosting infrastructure to the final applications. This scenario requires, on one side, new security mechanisms protecting the cloud against misbehaviors/malicious attacks and, on the other side, a continuous and adaptive assurance process evaluating the observed cloud security behavior against the expected one. In this paper, we focus on the evaluation of the security assurance of OpenStack, a major open source cloud infrastructure. We first define a security benchmark for OpenStack, inspired by the Center for Internet Security (CIS) benchmark for cloud infrastructures. We then present a platform, called Moon Cloud, for cloud security assurance evaluation, showing an application of our benchmark and platform to the in-production OpenStack deployment of the University of Milan.

Keywords—Assurance, Benchmark, Cloud, OpenStack, Security

I. INTRODUCTION

The cloud computing model provides unprecedented opportunities that are changing the IT landscape. The cloud provides an environment where huge benefits in terms of competitiveness, performance, as well as economical and organizational enhancements can be achieved. On the other side, security and trust concerns are among the most significant barriers preventing the migration of users and service providers to the cloud [1], [2]. Users and service providers are in fact concerned about the new threats and risks they need to face when moving to the cloud [3], and increasingly refuse to take full responsibility over security and privacy breaches of their services.

Traditional security mechanisms and controls offered by cloud service providers conflict with the lack of evidence about their operation and effectiveness. In the last few years, cloud service providers as well as researchers have spent a lot of effort in designing and developing security assurance solutions and guidelines to fill these security gaps [4], [5]. This effort led to the definition of different audit, certification, and compliance standards and techniques increasing cloud transparency and trustworthiness [6], [7], [8], [2]. Current assurance techniques provide continuous evaluation based on trustworthy and verifiable evidence collected at all layers of the cloud stack.

In this paper, we consider a practical scenario that focuses on the assurance evaluation of OpenStack, a major open source cloud infrastructure. To this aim, we i) define a security benchmark for OpenStack that is an instantiation and refinement of the CIS benchmark in [9] on the basis of the OpenStack security guidelines in [10], and ii) describe its evaluation by means of an assurance platform called Moon Cloud (https://moon-cloud.eu). The latter platform supports continuous evaluation of the proposed security benchmark, and is applied to a real OpenStack deployment at the University of Milan. The contribution of this paper is twofold. We first propose a security benchmark for OpenStack focusing on confidentiality, integrity, and access control attributes, and then present an assurance and compliance evaluation platform supporting continuous monitoring of OpenStack deployments against the proposed benchmark.

The remainder of this paper is organized as follows. Section II discusses the OpenStack security benchmark. Section III defines our assurance evaluation process. Section IV describes Moon Cloud, the platform for continuous assurance evaluation. Section V presents the application of our benchmark to the in-production OpenStack deployment of the University of Milan. Section VI discusses the related work and Section VII presents our concluding remarks.

II. SECURITY BENCHMARK FOR OPENSTACK

A security benchmark is a set of (standard) recommendations against which the security strength of different systems can be compared. The recommendations are coupled with auditing activities specifying how to collect data for evaluating the recommendations. The result of a security benchmark evaluation is a score that represents the security strength of a specific product/service/deployment; the higher the score, the more secure the product/service/deployment. The Center for Internet Security (CIS) provided a series of security benchmarks for different solutions [9], ranging from applications such as Microsoft Word or MySQL, to operating systems such as Windows Server or Ubuntu, and recently addressing cloud products such as AWS or Docker.
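The scoring idea described above can be sketched in a few lines; `benchmark_score` and its pass/fail inputs are hypothetical simplifications (real CIS benchmarks use their own scored profiles and weighting), shown only to make the "higher score, more secure" notion concrete:

```python
def benchmark_score(results):
    """Toy scoring rule: the fraction of recommendations that were
    positively evaluated. Real benchmarks may weight recommendations."""
    return sum(1 for passed in results.values() if passed) / len(results)

# Hypothetical evaluation outcome for four recommendations.
score = benchmark_score({"R1": True, "R2": True, "R3": False, "R4": True})
print(score)  # 0.75
```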

2159-6190/17 $31.00 © 2017 IEEE 294


DOI 10.1109/CLOUD.2017.45
The work in this paper starts from the observation that OpenStack, though universally recognized as the most important open source cloud infrastructure (IaaS), does not come with a specific security benchmark, but is just coupled with some high-level best practices proposed by OSSG (OpenStack Security Group). We therefore define a security benchmark for OpenStack as an instantiation and refinement of the generic CIS benchmark for IaaS systems in [11] on the basis of the OpenStack security guidelines in [10]. To this aim, we consider the OpenStack core services: i) Keystone is the identity service providing authentication and authorization for all users and services, ii) Nova is the compute service managing the life cycle of compute instances, iii) Glance is the image service providing a virtual machine disk image repository, iv) Neutron is the networking service providing network connectivity as a service, v) Cinder is the block storage service offering persistent block storage, and vi) Swift is the object storage service to store and retrieve arbitrary unstructured data objects.

Our security benchmark for OpenStack first maps the three profiles (Virtual, Cloud, End User) identified by the CIS benchmark [11] on OpenStack core services to address its peculiarities, including the concepts of shared responsibility and cloud layers, as follows.

• Virtual: this profile pertains to all physical nodes where OpenStack services are installed, specifically addressing hardware configurations, Linux OSs, virtualization systems, and system service configurations.
• Cloud: this profile pertains to OpenStack services and add-ons. It involves OpenStack core services, the admin operations, and configuration sets.
• User: this profile pertains to OpenStack user usage. It addresses how users can secure their OpenStack projects, including VMs, virtual storage, and network configurations.

We then instantiated the generic CIS benchmark recommendations and added some new recommendations on the OpenStack core services on the basis of the three defined profiles. Following the CIS approach, our recommendations mostly cover confidentiality, integrity, and access control attributes, while they can be extended to assess any security attribute. Table I presents an overview of our OpenStack Security Benchmark (OSB), presenting the security recommendations (field Recommendation), a comparison between recommendations supported by CIS and OSB (field Benchmark), and the corresponding profiles over which the properties insist (field Profile). Finally, starting from the best practices proposed by OSSG and following a strict architectural analysis of OpenStack, we defined for each recommendation the corresponding security control(s).

[R1] Patch Levels. Continuously check the version of installed software including OpenStack services (Virtual, Cloud, User)

[R2] Create and Enforce Account and Password Management Policies.
• Enable PAM, LDAP, or similar authentication systems for every host and allow certificate-based authentication only. Use certificate rotation and minimize root accesses. Deactivate users after long inactivity (Virtual)
• Use password policies for OpenStack users, deactivate users after long inactivity (Cloud)

[R3] Use a Central Directory for Authentication and Authorization.
• Use an OS authentication system that is bound to LDAP/Kerberos/Active Directory/FreeIPA (Virtual)
• Enable LDAP for all domains in Keystone (Cloud)
• Deploy VMs authenticated through a centralized authentication system (User)

[R4] Configure Firewalls to Restrict Access.
• Enable iptables, minimize allowed IPs and ports to necessary services only, do not manually tamper with iptables once configured (Virtual)
• Do not deploy any VM associated with security groups that allow public access on specific ports (User)

[R5] Use TLS/SSL where Possible.
• All services and communications (MySQL, RabbitMQ, LDAP, OpenStack services) should be accessible over encrypted channels only (Virtual, Cloud)
• Every application offered by VMs should offer services over TLS/SSL channels (User)

[R6] Do Not Use Default Self-Signed Certificates. All certificates should be signed by a certification authority (Virtual, Cloud, User)

[R7] Configure Centralized Remote Logging.
• Store all logs from the system and OpenStack in two different remote logging systems (Virtual, Cloud)
• Users should set up their own remote logging system for their applications (User)

[R8] Maintain Time Synchronization Services. All nodes/VMs should have the time synchronization system enabled and should use the same network-time server list (Virtual, User)

[R9] Review and Minimize Hypervisor Attack Surface. Identify and run a security benchmark against the used hypervisors, disable memory de-duplication (Virtual)

[R10] Review and Minimize Virtual Machine Manager Attack Surface.
• Execute vulnerability scans of the virtual machine monitor (i.e., QEMU/KVM) (Virtual)
• Execute vulnerability scans of OpenStack services and APIs (Cloud)

[R11] Use Templates to Deploy Virtual Machines. Execute vulnerability scans of public images on Glance and check if signature verification is enabled (Cloud)

Table I
OpenStack Security Benchmark (OSB) addressing hosts, OpenStack core services (Keystone, Nova, Glance, Neutron, Cinder, Horizon) and user configurations

#    Recommendation                                                    Benchmark   Virtual  Cloud  User
R1   Maintain Current Patch Levels                                     CIS, OSB    ✓        ✓      ✓
R2   Create and Enforce Account and Password Management Policies       CIS, OSB    ✓        ✓      ×
R3   Use a Central Directory for Authentication and Authorization      CIS, OSB    ✓        ✓      ✓
R4   Configure Firewalls to Restrict Access                            CIS, OSB    ×        ✓      ✓
R5   Use TLS/SSL where Possible                                        CIS, OSB    ✓        ✓      ✓
R6   Do Not Use Default Self-Signed Certificates                       CIS, OSB    ✓        ✓      ✓
R7   Configure Centralized Remote Logging                              CIS, OSB    ✓        ✓      ✓
R8   Maintain Time Synchronization Services                            CIS, OSB    ✓        ×      ✓
R9   Review and Minimize Hypervisor Attack Surface                     CIS, OSB    ✓        ×      ×
R10  Review and Minimize Virtual Machine Manager Attack Surface        CIS, OSB    ✓        ✓      ×
R11  Use Templates to Deploy Virtual Machines                          CIS, OSB    ×        ✓      ×
R12  Disconnect Unauthorized Devices from Virtual Machines             CIS, OSB    ✓        ×      ×
R13  Disable MAC Address Changes and Promiscuous Mode on Guests        CIS, OSB    ✓        ×      ×
R14  Ensure Network Isolation through VLANs                            CIS, OSB    ✓        ✓      ×
R15  Port Groups Should not be Configured to Reserved VLANs            CIS, OSB    ×        ✓      ×
R16  Secure Access to Cloud Application Programming Interfaces         CIS, OSB    ✓        ✓      ×
R17  Encrypt Data at Rest                                              CIS, OSB    ✓        ✓      ×
R18  Establish Appropriate Resource Isolation                          CIS, OSB    ✓        ✓      ✓
R19  Evaluate Denial of Service Scenarios and Mitigations              CIS, OSB    ✓        ✓      ×
R20  Do Not Use or Set Guest Customization Passwords                   CIS, OSB    ✓        ✓      ×
R21  Evaluate and Minimize Cloud Architecture Dependencies             CIS, OSB    ✓        ✓      ✓
−    Align Infrastructure Security Controls with Tenant Requirements   CIS         −        −      −
−    Segment and Restrict User and Server Workload Communication       CIS         −        −      −
−    Restrict User-to-User Workload Communication                      CIS         −        −      −
−    Deploy Anti-Malware Solution to End User Workloads                CIS         −        −      −
−    Audit Privileged Access to End User Workloads                     CIS         −        −      −
R22  Audit Sensible and Configuration Files                            OSB         ✓        ✓      ×
R23  Storage Reliability                                               OSB         ✓        ✓      ×
R24  Data Remanence Avoidance                                          OSB         ×        ✓      ×

[R12] Disconnect Unauthorized Devices from Virtual Machines. Disable all unauthorized/unused device ports such as NIC, USB or serial ports, disable compute unified device architecture (CUDA) and direct memory access (DMA) (Virtual)

[R13] Disable MAC Address Changes and Promiscuous Mode on Guests. The hypervisor or network virtualizers should deny MAC address changes on the vNIC (Virtual)

[R14] Ensure Network Isolation through VLANs.
• Only VLAN or VXLAN should be available and enabled in the whole deployment (Virtual)
• Only VLAN or VXLAN should be enabled in Neutron (Cloud)

[R15] Port Groups Should not be Configured to Reserved VLANs. The Neutron ML2 plugin should be set to allow only VLAN ids that do not overlap the reserved ones used by physical devices in the network infrastructure (Cloud)

[R16] Secure Access to Cloud Application Programming Interfaces. Enable and configure SELinux for secure access to configuration files, run vulnerability scans for OpenStack APIs, isolate API endpoints, especially those with public access, deploy API endpoints on separate hosts for increased isolation (if possible), enable multi-factor authentication (if available) and only provide APIs over SSL/TLS with mutual authentication (Virtual, Cloud)

[R17] Encrypt Data at Rest.
• Nodes should have encrypted file systems (Virtual)
• Enable LUKS block storage creation in Cinder and use an appropriate fixed_key as passphrase. Enable the encryption feature in the Swift configuration file, using an encryption_root_secret that is a base-64 encoding of a 32-byte value generated by a cryptographically secure random number generator (Cloud)

[R18] Establish Appropriate Resource Isolation.
• Disable memory de-duplication, avoid co-resident attacks, do not allow overlapping of VLAN ids and of virtual disk assignments to hosts, disable live migration or limit migration to a dedicated network with encryption enabled, disable the delayed delete feature for Glance, disable the compute soft delete for Nova, allow publishing of public images by admin users only (Virtual, Cloud)
• Make sure that only allowed users are members of your project, use encrypted storage for sensitive data (User)

[R19] Evaluate Denial of Service Scenarios and Mitigations.
• Mitigation systems should be placed in front of critical assets. Rate-limiting from the application server should be configured, an IDS should be installed and configured to detect DDoS attacks, and blacklisting systems for SSH connections should be enabled (e.g., fail2ban) (Virtual)
• Run performance tests and do not go under resource quotas (Cloud)

[R20] Do Not Use or Set Guest Customization Passwords.
• Every node should allow access through a centralized system only; extra users should not exist, beyond the necessary ones (Virtual)
• Admin should not be a member of any project, and the policy should not allow the admin to access project resources she is not a member of. Admin should not be allowed to set/change user passwords (Cloud)

[R21] Evaluate and Minimize Cloud Architecture Dependencies.
• Hypervisors should be always up and available, hardware resources such as RAM and CPUs should be always available, and all services such as RabbitMQ and MySQL should be running. In addition, high availability should be set for services. (Virtual)
• Guarantee high availability of all OpenStack services. In Glance, do not allow creation of unsupported image types. Provide multi-region deployment (Cloud)
• Use scheduler filtering to deploy VMs that provide high availability on at least two different availability zones. (User)

[R22] Audit Sensible and Configuration Files. All regular Linux file system and system calls, and OpenStack service configurations, should be audited (e.g., auditctl) (Virtual, Cloud)

[R23] Storage Reliability.
• All OSs should run at least on RAID type 1 to guarantee data replication (Virtual)
• Cinder and Swift should use a dedicated storage distributed over at least three replicas (Cloud)

[R24] Data Remanence Avoidance. All resources, such as virtual networks, block devices, and images, should always be bound to an entity in the corresponding databases to avoid data remanence [12] (Virtual)

III. RECOMMENDATION VERIFICATION FOR THE CLOUD

Recommendation verification is the process of verifying the compliance of the target system to a given recommendation. A recommendation is a complex concept whose verification may require a number of different specific evaluations on services of the target system mentioned by the recommendation. In this paper, an evaluation process is implemented by collecting evidence about the target system, for instance, by means of testing or monitoring activities on a specific service. Collected evidence permits to verify whether the recommendations in Section II have been satisfied or not (e.g., testing evidence that encryption is enabled on a communication channel).

Definition 3.1 (Recommendation verification): Let R be the recommendation to be verified; the recommendation verification is a function R̂ defined over the tuple ⟨EP, ER⟩ taking value in {true, false} where:
• EP is the set of evaluation processes {Eval} to be carried out. The output of an evaluation process is a boolean value expressing whether the evaluation has been executed with success or not, plus the collected evidence supporting the results.
• ER is an evaluation rule expressed as a propositional logic formula in terms of the evaluation processes Eval. It combines the results of the different evaluations Eval∈EP and returns true if the property is positively evaluated, false otherwise.

We note that, although we can implement more complex first-order logic formulas for ER, propositional logic provides enough expressive power to cover all the recommendations in Section II.

Example 3.1 (Recommendation verification): Let us consider the evaluation process R̂=⟨EP1, ER1⟩ of the recommendation R=Encrypt Data at Rest with respect to the OpenStack Cinder and Swift services. Let us consider a set EP1 of evaluation activities {Eval1=check Cinder encryption enabled, Eval2=check strength Cinder key, Eval3=check Swift encryption enabled, Eval4=check strength Swift key} and the evaluation rule ER1=Eval1 ∧ Eval2 ∧ Eval3 ∧ Eval4. The evaluation process is positive if the evaluation rule ER1 returns true, that is, Eval1, Eval2, Eval3 and Eval4 return true.

An evaluation process Eval∈EP refers to the evaluation of a specific service.

Definition 3.2 (Eval): An evaluation process Eval is a tuple of the form ⟨t, C⟩, where:
• t is the ToE (Target of Evaluation). It is defined as the services constituting the perimeter of the evaluation.
• C is the Control, that is, the process executing the evaluation on t and returning the result of the evaluation together with the set of collected evidence.

We note that the ToE can be either a public service or a specific mechanism providing a security feature to be evaluated.

Example 3.2 (Eval): Let us consider Eval2=⟨t2, C2⟩ in Example 3.1 about check strength Cinder key. Target of evaluation t2 corresponds to OpenStack Cinder. Control C2 describes the evidence collection as a set of activities to be carried out on target t2 for checking the strength of the Cinder key.

A Control C defines details on how to collect evidence on a specific target t to evaluate a recommendation. It is defined as follows.

Definition 3.3 (C): C is defined over a 3-tuple of the form ⟨φ, λ, π⟩, where:
• φ is the flow of evidence collection execution. It is composed of a sequence of atomic evaluation operations.
• λ is the set of Control's Parameters necessary to connect the evaluation flow φ to the target service t (e.g., the target URI)
• π is a set of Environmental Settings describing how to deploy the control (e.g., within the ToE perimeter or not) and possible dependencies on its execution.

The execution of a given Control returns true or false plus a set of evidence on its execution, which is collected at recommendation verification level.

Example 3.3 (C): Let us consider Control C2 in Example 3.2 about check strength Cinder key. φ, λ, π are represented as follows:
• φ=[connect, read-config, encryption-check]. connect is the operation that connects the control to the host using ssh, read-config is the operation of accessing the config file of Cinder, and finally encryption-check is the operation of checking if the configured passphrase is complex enough.
• λ=[ssh-params, config-file (optional)]. ssh-params represent all the necessary information to connect to the node via ssh, config-file is an optional parameter that specifies where the config file is.
• π=[openstack-config-library]. The control requires the openstack-config-library to properly parse the required config file.

Figure 1. Skeleton of Control C2: check strength Cinder key (code/execution flow φ: connect, read-config, encryption-check; parameters λ: host, port, password, ssh_key for connect, optional config-file for read-config; environment π: dependencies and requirements, i.e., the paramiko and oslo libraries). The whole control is available at http://goo.gl/qLoNL3.

IV. CLOUD SECURITY ASSESSMENT: MOON CLOUD

Moon Cloud implements the recommendation verification process in Section III to assess cloud system security. It is able to execute a number of evaluation processes Eval in parallel, each one referring to a set of recommendations R to be evaluated. It models the controls C in Section III using skeletons. The code/execution flow (φ) is modeled as a chain containing all operations the control needs to exercise for evidence collection.¹ It is implemented as a python script. We note that the first operation of the chain is the configuration of the target service using parameters λ. Parameters (λ) and Environment (π) are represented using a meta-data format. Each operation within the control flow φ is connected to the specific parameters needed for the evaluation, while environment π represents the set of pre-requisites and dependencies to execute or deploy the specific control. Figure 2 shows the code/execution flow φ as a python script for C2 in Example 3.3.

¹ The flow φ can be serialized as a sequence of operations.

import paramiko
import re
import StringIO
from driver import Driver
from oslo_config import cfg

class SSHClient(object):
    def ssh_connect [...]
    [...]

class CinderNovaEncryptedFixedKey(Driver, SSHClient):
    def connect(self, inputs):
        ssh_ti = self.testinstances.get("connect", None)
        assert not ssh_ti is None
        hostname = ssh_ti.get("hostname")
        [...]
        self.ssh_connection = self.ssh_connect(
            hostname=hostname, username=username,
            [...])
        return True

    def read_config(self):
        self.cinder_config_file = "/etc/cinder/cinder.conf"
        config_ti = self.testinstances.get("read_config", None)
        if config_ti is not None:
            self.cinder_config_file = config_ti.get(
                "config_file", self.cinder_config_file)
        assert self.cinder_config_file is not None
        _stdin, _stdout, _stderr = self.ssh_connection.exec_cmd\
            ("cat %s" % self.cinder_config_file)
        lines = _stdout.readlines()
        return lines

    def encryption_check(self, cinder_config):
        mcp = MyConfigParser(self.cinder_config_file,
            cinder_config)
        mcp.parse()
        section = mcp.sections.get("key_manager",
            mcp.sections.get("keymgr", None))
        assert section is not None
        fixed_key = section.get("fixed_key")[0]
        assert fixed_key is not None
        if check_strength(fixed_key):
            return True
        else:
            return False

    def appendAtomics(self):
        self.appendAtomic(self.connect, lambda: None)
        self.appendAtomic(self.read_config, lambda: None)
        self.appendAtomic(self.encryption_check, lambda: None)

Figure 2. Moon Cloud python script for Control C2 in Example 3.3.

Moon Cloud implements a master-slave architecture composed of two main modules connected with a queue-based messaging protocol (AMQP): i) the Evaluation Module (master) manages controls and configurations, and triggers the evaluation rules when necessary; ii) the Execution Module (slave) is responsible for evidence collection.

The Evaluation Module is deployed once for each tenant executing the benchmark and includes the following components: i) the Dashboard, the user interface for managing the evaluation process, ii) the Evaluation Manager, which provides APIs to interact with the dashboard and manage communications, and to control, dispatch, and schedule recommendations

evaluation, iii) Management and Result Database man-

public network
Evaluation Module
ages persistent information about controls’ execution and
evidence collection, iv) Control Repository where all the Execution Deployment1 public VM VM VM

queue communication
controls are stored. Execution Deployment2 public OpenStack API
The Execution Module must be deployed so that it can
reach the ToE and includes the following components: i) ...

internal network
Host1 Host2 Host3 Host32

Execution Manager deploys and configures controls for the


Execution Deployment3
requested evaluations, ii) Execution Cluster is the cluster internal

where multiple executions are triggered; it contains Execu-


tion Manager. The Execution Model is designed to be elastic
Figure 3. Moon Cloud architecture
adapting the number of instances in the Execution Cluster
to the number of controls that are executed. For instance,
multiple instances of the Execution Module with different
A. Recommendation Evaluation
deployment locations are required for recommendations R4
or R20, because they require an evaluation on both the public For sake of conciseness, we focus on three recommen-
interface (i.e., public deployment) and on internal private dations extracted from the benchmark in Section II. We
interfaces (i.e., internal deployment). selected them as good representatives to show the applica-
Moon Cloud evaluates the recommendations of our bench- bility of our approach considering different profiles. In the
mark in Section II according to the following execution following for each property we provide i) ToE t including the
flow. The Moon Cloud user accesses to the dashboard to target profile, ii) Control flow φ and the relative Parameters
start the benchmark execution and to visualize the results. λ associated with the flow’s operations, iii) Environment
It sets the scheduling time windows for the continuous π requirements if available, iv) a discussion on the the
evaluation of the benchmark recommendations. Evaluation results of the evaluations and a remediation when needed.
Manager then parses all the recommendations to be verified We note that, all the scripts implementing the Controls,
and sends the execution requests to all involved deployments Parameters and Environmental settings are available at https:
of the Execution Module (e.g., selecting between public or //goo.gl/ji5o3R
internal deployments). The Execution Manager reads the [R8] Maintain Time Synchronization Services.
request messages sent by the Evaluation Manager from the Profile: Virtual.
queue and retrieves the requested controls from the Control ToE: All nodes that compose the OpenStack deployment.
Repository. Once a control is downloaded, the Execution Manager deploys and executes it. As soon as the evaluation returns the collected evidence, the Execution Manager stores it into the Results Database. Finally, when all evidence about the evaluation processes is available in the Results Database, the evaluation rule formula ER is evaluated for each recommendation and the result is made available on the dashboard.

V. PRELIMINARY EVALUATION

This section shows how our benchmark can be evaluated against a large, in-production installation of OpenStack using the Moon Cloud platform. The scope is to show its applicability and utility on a real OpenStack deployment using a qualified subset of requirements. The target of the evaluation is the OpenStack Mitaka deployment at the University of Milan, currently used for research projects and teaching activities. From now on, we will refer to this OpenStack deployment as Lagrange.
For evaluating Lagrange, we deploy Moon Cloud as depicted in Figure 3: one Evaluation module and one Execution module for accessing the OpenStack API and OpenStack VMs (public deployment), and another Execution module deployed in the internal network where all Lagrange nodes are reachable (internal deployment).
Control: The control needs to access every node and check whether time synchronisation is enabled and whether it is connected to the same server list as required. The control supports both chrony and ntp.
The execution flow φ consists of three sequential operations with the relative Parameters λ as follows.
1) connect_to_server [username, password, private_key, private_key_passphrase, hostname, port]: the control accesses the node through SSH;
2) check_timesync_enabled [ntp, chrony]: the control checks, using the init system, whether chrony or ntp is enabled;
3) check_timesync_config [ntp_config_file (optional), chrony_config_file (optional), servers_list]: the control checks that the server list in the chrony or ntp config file is the same as passed in the parameters.
The Environmental settings π are the following:
• Control must be executed with access to the internal network.
• The paramiko python library to let the control access through SSH.
Results: During our evaluation we found that Lagrange is not compliant with this recommendation; in particular, the control returns a negative result for Host 5, which is using a different pool of time servers than expected. As
299
Authorized licensed use limited to: Mukesh Patel School of Technology & Engineering. Downloaded on January 31,2025 at 06:10:12 UTC from IEEE Xplore. Restrictions apply.
remediation, we replace the Host 5 time server configuration file with the expected one.
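A control of this kind could be sketched in Python as follows. This is a minimal sketch under our own assumptions, not the paper's actual implementation: the function names, the systemctl query, and the default chrony config path are illustrative, and the server-list comparison is factored out as a pure function.

```python
import re

def servers_match(config_text, required_servers):
    """check_timesync_config step: collect 'server'/'pool' entries from an
    ntp/chrony configuration and compare them, as a set, with the
    required server list."""
    found = set()
    for line in config_text.splitlines():
        m = re.match(r"\s*(?:server|pool)\s+(\S+)", line)
        if m:
            found.add(m.group(1))
    return found == set(required_servers)

def check_node_timesync(hostname, username, key_path, required_servers,
                        config_file="/etc/chrony/chrony.conf"):
    """connect_to_server + check_timesync_enabled + check_timesync_config,
    run over SSH with paramiko (assumed installed on the Execution module)."""
    import paramiko
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(hostname, username=username, key_filename=key_path)
    try:
        # ask the init system whether chronyd or ntpd is active
        _, out, _ = client.exec_command("systemctl is-active chronyd ntpd")
        enabled = "active" in out.read().decode().split()
        # fetch the configuration file and compare the server lists
        _, out, _ = client.exec_command("cat " + config_file)
        return enabled and servers_match(out.read().decode(), required_servers)
    finally:
        client.close()
```

Running such a sketch against a node with a different server pool (as for Host 5) would return a negative result.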
[R20] Do Not Use or Set Guest Customization Passwords.
Profile: Cloud.
ToE: OpenStack Keystone. Keystone is the identity service and manages projects, users, and groups.
Control: The admin cannot be a member of any project except her own projects, and cannot change user passwords. Hence, the control is twofold: i) the admin user is a member only of a restricted list of projects, as specified in a given list; ii) the OpenStack policy does not allow changing user passwords, which should be changed only through the centralized identity system.
The execution flow φ of control i) consists of two sequential operations with the relative Parameters λ as follows.
1) openstack_connection [os_username, os_password, os_project_id, os_auth_url, os_user_domain_name]: using the admin credentials, the control connects to the OpenStack API;
2) checkProject [project_list]: the control parses all projects and checks that admin is a member only of the passed projects.
The Environmental settings π are the following:
• Control must be executed with access to the public OpenStack API.
• The OpenStack client SDK to be able to communicate with its API.
The execution flow φ of control ii) consists of three sequential operations with the relative Parameters λ as follows.
1) connect_to_server [username, password, private_key, private_key_passphrase, hostname, port]: the control accesses the Keystone nodes through SSH;
2) retrieve_policy_file [path]: the control reads and parses the policy file;
3) inspect_policy_file [key, expected_value]: the control checks that the identity:change_password action is disabled.
The Environmental settings π are the following:
• Control must be executed with access to the internal network.
• The paramiko python library to let the control access through SSH.
Results: Lagrange is perfectly compliant with this recommendation: change_password is disabled and the admin user is a member only of the projects admin and admin-test.
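The two checks above could be sketched as pure functions as follows; the function names are ours, and we assume the oslo.policy convention that the rule "!" denies an action to everyone.

```python
import json

def change_password_disabled(policy_text):
    """inspect_policy_file step of control ii): the action must map to the
    rule '!', which in OpenStack policy files matches no one (always denied)."""
    policy = json.loads(policy_text)
    return policy.get("identity:change_password") == "!"

def admin_restricted_to(allowed_projects, admin_projects):
    """checkProject step of control i): admin may appear only in the
    allowed project list."""
    return set(admin_projects) <= set(allowed_projects)
```

On a compliant deployment such as Lagrange, change_password_disabled would return True for the Keystone policy file, and admin_restricted_to would return True for the allowed list [admin, admin-test].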
[R21] Evaluate Cloud Architecture Dependencies.
Profile: User.
ToE: Nova computing and user VMs.
Control: The user can mitigate the dependency on a single point of failure of a cloud by deploying her VMs in different availability zones; hence, the control checks that a set of VMs is deployed in at least two different availability zones.
The execution flow φ of the first Control C1 consists of three sequential operations with the relative Parameters λ as follows.
1) openstack-connection [user credentials]: using the user credentials, the control accesses the OpenStack API;
2) retrieve-zone []: the control retrieves all availability zones in OpenStack;
3) check-deployment [vm-list]: the control checks that at least one VM from vm-list is deployed in a different availability zone.
The Environmental settings π are the following:
• Control must be executed with access to the public OpenStack API.
• The OpenStack client SDK to be able to communicate with its API.
Results: Lagrange is not compliant with this recommendation, since it offers only one availability zone and all VMs of any user are therefore deployed on the same one. As remediation, we suggest the admin to identify, if possible, fault-independent zones and aggregate hosts under those zones. If that is not possible, we suggest to re-design or extend the node cluster to provide different availability zones.
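The steps of C1 could be sketched as below. The connection call and the server's availability_zone field come from the openstacksdk library; the function names and the pure zone-comparison helper are our own illustration, not the paper's code.

```python
def spans_multiple_zones(vm_zone_map, vm_list):
    """check-deployment step: true if the VMs of interest are spread
    over at least two availability zones."""
    zones = {vm_zone_map[vm] for vm in vm_list if vm in vm_zone_map}
    return len(zones) >= 2

def check_r21(cloud_name, vm_list):
    """Full control using the OpenStack SDK (openstacksdk assumed
    installed and configured in clouds.yaml)."""
    import openstack
    conn = openstack.connect(cloud=cloud_name)   # openstack-connection step
    # retrieve the zone of each server visible to the user's credentials
    zone_of = {s.name: s.availability_zone for s in conn.compute.servers()}
    return spans_multiple_zones(zone_of, vm_list)
```

On Lagrange, which exposes a single availability zone, spans_multiple_zones can only ever see one zone and the control fails, as reported above.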
VI. RELATED WORK

OpenStack is largely adopted as a case study by many research works [13], [14], [15], [16], [17], [18], [19], [20], [21], since it is widely used and open source. Anisetti et al. [13] present a first approach to automate tests for non-functional properties in OpenStack. Ristov et al. [15] present how a security vulnerability scan should be designed for OpenStack, highlighting the necessity of covering both core services and hosts. Luo et al. [18] face the problem of policy enforcement in OpenStack, providing a solution that replaces the default policy system; the default policy, in fact, is difficult to manage due to high distribution and fragmentation, and to problems related to operation coverage. Han et al. [14] investigate how to mitigate co-resident VM attacks by using a dedicated scheduling policy, and they apply their solution to OpenStack. Sze et al. [17] address OpenStack weaknesses and vulnerabilities that may not be resolved with its standard configuration, providing a solution to harden compute node security for what concerns communication queues and policies. The importance of automating the evaluation process to provide continuous auditing is also discussed by Lins et al. [22], where the authors exhaustively examine processes and methods to audit cloud services. More in line with the approach of our Moon Cloud assurance platform, Madi et al. [16] present a framework for continuous auditing of Neutron; the auditing results are then compared and validated against a given policy as a CSP (Constraint Satisfaction Problem). Majumdar et al. [19] present a solution to audit the Keystone service and enforce policy over resources. OpenStack offers an internal and service-dedicated system of accountability; the authors build upon it a security compliance auditor based on formal methods with excellent performance. Both [16] and [19] refer to policies from CSA [23] and NIST 27k. Malik et al. [20] evaluate the OpenStack Neutron service, analysing its behaviour under different types and severity levels of network errors. Halabi et al. [21] support the concept of continuous security assessment of cloud providers via the Goal-Question-Metric (GQM) paradigm.

VII. CONCLUSIONS

Continuous evaluation of cloud security assurance is among the fundamental requirements for a wide adoption of the cloud computing model in security-critical domains. In this paper, we first proposed a security benchmark for OpenStack composed of a set of security recommendations. We then presented an assurance platform called Moon Cloud for continuous cloud security verification. An application of the proposed benchmark has been finally provided in the context of the OpenStack deployment of the University of Milan.

ACKNOWLEDGMENTS

This work was partly supported by the program "piano sostegno alla ricerca 2015-17" funded by Università degli Studi di Milano.

REFERENCES

[1] C. Ardagna, R. Asal, E. Damiani, and Q. Vu, "From security to assurance in the cloud: A survey," ACM CSUR, vol. 48, no. 1, pp. 2:1–2:50, August 2015.
[2] R. K. L. Ko, P. Jagadpramana, M. Mowbray, S. Pearson, M. Kirchberg, Q. Liang, and B. S. Lee, "Trustcloud: A framework for accountability and trust in cloud computing," in Proc. of IEEE SERVICES 2011, Washington, DC, USA, 2011.
[3] A. Naskos, A. Gounaris, H. Mouratidis, and P. Katsaros, "Online analysis of security risks in elastic cloud applications using probabilistic model checking," IEEE Cloud Computing Magazine (to appear), 2016.
[4] X. Chen, C. Chen, Y. Tao, and J. Hu, "A cloud security assessment system based on classifying and grading," IEEE Cloud Computing, vol. 2, no. 2, pp. 58–67, 2015.
[5] D. Gonzales, J. Kaplan, E. Saltzman, Z. Winkelman, and D. Woods, "Cloud-trust - a security assessment model for infrastructure as a service (IaaS) clouds," IEEE TCC, vol. PP, no. 99, pp. 1–1, 2015.
[6] M. Anisetti, C. Ardagna, E. Damiani, and F. Gaudenzi, "A semi-automatic and trustworthy scheme for continuous cloud service certification," IEEE TSC, 2017.
[7] S. Pearson, "Toward accountability in the cloud," IEEE Internet Computing, vol. 15, no. 4, pp. 64–69, 2011.
[8] P. Stephanow and N. Fallenbeck, "Towards continuous certification of Infrastructure-as-a-Service using low-level metrics," in Proc. of ATC 2015, Beijing, China, August 2015.
[9] "CIS security benchmark repository," https://benchmarks.cisecurity.org/downloads/.
[10] OpenStack Foundation, OpenStack Security Guide, April 2015, http://docs.openstack.org/security-guide/security-guide.pdf.
[11] G. Gerchow, M. A. Haines, and P. Goyal, "CIS quick start cloud infrastructure benchmark v1.0.0," October 2012, https://benchmarks.cisecurity.org/downloads/show-single/?file=cloud.100/.
[12] B. Albelooshi, K. Salah, T. Martin, and E. Damiani, "Experimental proof: Data remanence in cloud vms," in Proc. of IEEE CLOUD 2015, June 2015.
[13] M. Anisetti, C. A. Ardagna, E. Damiani, F. Gaudenzi, and R. Veca, "Toward security and performance certification of open stack," in Proc. of IEEE CLOUD 2015, June 2015.
[14] Y. Han, J. Chan, T. Alpcan, and C. Leckie, "Using virtual machine allocation policies to defend against co-resident attacks in cloud computing," IEEE TDSC, vol. 14, no. 1, pp. 95–108, Jan 2017.
[15] S. Ristov, M. Gusev, and A. Donevski, "Security vulnerability assessment of openstack cloud," in Proc. of CICSyN 2014, May 2014.
[16] T. Madi, S. Majumdar, Y. Wang, Y. Jarraya, M. Pourzandi, and L. Wang, "Auditing security compliance of the virtualized infrastructure in the cloud: Application to openstack," in Proc. of ACM CODASPY 2016, New York, NY, USA, 2016.
[17] W. K. Sze, A. Srivastava, and R. Sekar, "Hardening openstack cloud platforms against compute node compromises," in Proc. of ACM ASIA CCS 2016, New York, NY, USA, June 2016.
[18] Y. Luo, W. Luo, T. Puyang, Q. Shen, A. Ruan, and Z. Wu, "Openstack security modules: A least-invasive access control framework for the cloud," in Proc. of IEEE CLOUD 2016, June 2016.
[19] S. Majumdar, T. Madi, Y. Wang, Y. Jarraya, M. Pourzandi, L. Wang, and M. Debbabi, "Security compliance auditing of identity and access management in the cloud: Application to openstack," in Proc. of IEEE CloudCom 2015, November 2015.
[20] A. Malik, J. Ahmed, J. Qadir, and M. U. Ilyas, "A measurement study of open source sdn layers in openstack under network perturbation," Computer Communications, vol. 102, pp. 139–149, 2017.
[21] T. Halabi and M. Bellaiche, "Towards quantification and evaluation of security of cloud service providers," Journal of Information Security and Applications, vol. 33, pp. 55–65, 2017.
[22] S. Lins, S. Schneider, and A. Sunyaev, "Trust is good, control is better: Creating secure clouds by continuous auditing," IEEE TCC, vol. PP, no. 99, pp. 1–1, 2016.
[23] Cloud Security Alliance (CSA), "Cloud control matrix (CCM) v3.0.1," June 2016, https://cloudsecurityalliance.org/group/cloud-controls-matrix/.