HyperStoreInstallGuide V 7.5
Installation Guide
Version 7.5
Confidentiality Notice
The information contained in this document is confidential to, and is the intellectual property of, Cloudian,
Inc. Neither this document nor any information contained herein may be (1) used in any manner other than
to support the use of Cloudian software in accordance with a valid license obtained from Cloudian, Inc, or (2)
reproduced, disclosed or otherwise provided to others under any circumstances, without the prior written per-
mission of Cloudian, Inc. Without limiting the foregoing, use of any information contained in this document in
connection with the development of a product or service that may be competitive with Cloudian software is
strictly prohibited. Any permitted reproduction of this document or any portion hereof must be accompanied
by this legend.
Contents
Chapter 1. HyperStore Installation Introduction 7
Chapter 2. Preparing Your Environment 9
2.1. DNS Set-Up 10
5.4.3. Option for Putting the Metadata DB on Dedicated Drives Rather Than the OS Drives 37
5.6.1. Command Line Options When Using [Link] for HyperStore Cluster Installation 42
5.7. system_setup.sh 45
5.8. Installer Advanced Configuration Options 45
Chapter 1. HyperStore Installation Introduction
This documentation describes how to do a fresh installation of Cloudian HyperStore 7.5.
For instructions on upgrading to 7.5 from an older HyperStore version, see "Upgrading Your HyperStore Software Version" in the Cloudian HyperStore Administrator's Guide.
If you do not yet have the HyperStore 7.5 package, you can obtain it from the Cloudian FTP site [Link]. You will need a login ID and password (available from Cloudian Support). Once logged into the FTP site, change into the Cloudian_HyperStore directory and then into the cloudian-7.5 sub-directory. From there you can download the HyperStore software package, which is named [Link].
Note The HyperStore ISO file (with file name extension .iso) is intended for setting up a HyperStore
Appliance machine. Do not use this on other host hardware.
To install and run HyperStore software you need a HyperStore license file -- either an evaluation license or a production license. If you do not have a license file you can obtain one from your Cloudian sales representative or by registering for a free trial on the Cloudian website.
Chapter 2. Preparing Your Environment
Before installing HyperStore, Cloudian recommends that you prepare these aspects of your environment:
2.1. DNS Set-Up
For your HyperStore system to be accessible to external clients, you must configure your DNS name servers with entries for the HyperStore service endpoints. Cloudian recommends that you complete your DNS configuration prior to installing the HyperStore system. This section describes the required DNS entries.
Note If you are doing just a small evaluation and do not require that external clients be able to access
any of the HyperStore services, you have the option of using the lightweight domain resolution utility
dnsmasq which comes bundled with HyperStore -- rather than configuring your DNS environment to
support HyperStore service endpoints. If you're going to use dnsmasq you can skip ahead to "Host
Hardware and OS Requirements (For Software-Only Installations)" (page 15). During installation of
HyperStore software you can use the configure-dnsmasq option if you want to use dnsmasq for domain
resolution. Details are in the software installation procedure.
By default the HyperStore system uses a standard format for each service endpoint, building on two values that are specific to your environment: your organization's domain and your service region name(s).
During HyperStore installation you will supply your domain and your service region names, and the interactive installer will show you the default service endpoints derived from the domain and region names. During installation you can accept the default endpoints or specify custom endpoints instead. The table that follows below is based on the default endpoint formats.
Note
* Including the string "s3" in your domain or in your region name(s) is not recommended. By default HyperStore generates S3 service endpoints by prepending an "s3-" prefix to your <regionname>.<domain> combination. If you include "s3" within either your domain or your region name, this will result in two instances of "s3" in the system-generated S3 service endpoints, and this may cause S3 service requests to fail for some S3 clients.
* If you specify custom endpoints during installation, do not use IP addresses in your endpoints.
* HyperStore by default derives the S3 service endpoint(s) as s3-<regionname>.<domain>. However HyperStore also supports the format s3.<regionname>.<domain> (with a dot after the "s3" rather than a dash) if you specify custom endpoints with this format during installation.
The table below shows the default format of each service endpoint. The examples show the service endpoints
that the system would automatically generate if the domain is [Link] and the region name is boston.
IAM Service endpoint (one per entire system)
    Format: iam.<domain>
    Example: [Link]
    This is the service endpoint for accessing HyperStore's implementation of the Identity and Access Management API.

STS Service endpoint (one per entire system)
    Format: sts.<domain>
    Example: [Link]
    This is the service endpoint for accessing HyperStore's implementation of the Security Token Service API.

    Note Resolve the STS endpoint to the same address as the IAM endpoint, or use a CNAME to map the STS endpoint to the IAM endpoint.

SQS Service endpoint (one per entire system)
    Format: s3-sqs.<domain>
    Example: [Link]
    This is the service endpoint for accessing HyperStore's implementation of the Simple Queue Service (SQS) API.

    Note The SQS Service is disabled by default. For information about enabling this service, see the SQS section of the Cloudian HyperStore AWS APIs Support Reference.
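The default endpoint derivation can be sketched in shell. This is illustrative only -- the domain "enterprise.com" is a hypothetical stand-in for the example domain elided above, and the installer performs the real derivation for you:

```shell
# Sketch of how HyperStore derives default service endpoints from a
# domain and region name (hypothetical values)
domain="enterprise.com"
region="boston"
echo "s3-${region}.${domain}"   # default S3 endpoint format
echo "iam.${domain}"            # IAM endpoint (one per system)
echo "sts.${domain}"            # STS endpoint (one per system)
echo "s3-sqs.${domain}"         # SQS endpoint (one per system)
```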
IMPORTANT! Cloudian Best Practices suggest that a highly available load balancer be used in production environments where consistent performance behavior is desirable. For environments where a load balancer is unavailable, other options are possible. Please consult with your Cloudian Sales Engineer for alternatives.
For a production environment, in your DNS configuration each HyperStore service endpoint should resolve to
the virtual IP address(es) of two or more load balancers that are configured for high availability. For more detail
see "Load Balancing" (page 13).
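For illustration, in a BIND-style forward zone the required records might look like the following sketch. The names and addresses are hypothetical (region boston, two load balancer VIPs); adapt them to your own domain, region name, and load balancer addresses:

```
; Hypothetical zone fragment for HyperStore service endpoints,
; resolving each endpoint to load balancer virtual IPs
s3-boston   IN  A      10.20.0.100
s3-boston   IN  A      10.20.0.101
iam         IN  A      10.20.0.100
iam         IN  A      10.20.0.101
sts         IN  CNAME  iam
s3-sqs      IN  A      10.20.0.100
```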
If you want to use a custom S3 endpoint that does not include a region string, the installer allows you to do so. Note however that if your S3 endpoints lack region strings the system will not be able to support the region name validation aspect of AWS Signature Version 4 authentication for S3 requests (but requests can still succeed without the validation).
If you want to use multiple S3 endpoints per service region -- for example, having different S3 endpoints resolve to different data centers within one service region -- the installer allows you to do this. For this approach, the recommended syntax is s3-<regionname>.<dcname>.<domain> -- for example s3-boston.d-[Link] and [Link].
Note If you want to change HyperStore service endpoints after the system has already been installed,
you can do so as described in the "Changing S3, Admin, CMC, or IAM Service Endpoints" section of
the Cloudian HyperStore Administrator's Guide. If you change any endpoints, be sure to update your
DNS configuration.
2.2. Load Balancing

HyperStore uses a peer-to-peer architecture in which each node in the cluster can service requests to the S3, Admin, CMC, IAM, STS, and SQS service endpoints. In a production environment you should use load balancers to distribute S3, Admin, CMC, IAM, STS, and SQS service endpoint requests evenly across all the nodes in your cluster. In your DNS configuration the S3, Admin, CMC, IAM, STS, and SQS service endpoints should resolve to the virtual IP address(es) of your load balancers; and the load balancers should in turn distribute request traffic across all your nodes. Cloudian recommends that you set up your load balancers prior to installing the HyperStore system.
For high availability it is preferable to use two or more load balancers configured for failover between them (as opposed to having just one load balancer, which would constitute a single point of failure). The load balancers could be commercial products, or you can use open source technologies such as HAProxy (load balancer software for TCP/HTTP applications) and Keepalived (for failover between two or more load balancer nodes). If you use software-defined solutions such as these open source products, for best performance you should install them on dedicated load balancing nodes -- not on any of your HyperStore nodes.
For a single-region HyperStore system, for each service configure the load balancers to distribute request traffic across all the nodes in the system.

For a multi-region HyperStore system:

* Configure each region's S3 service endpoint to resolve to load balancers in that region, which distribute traffic across all the nodes within that region.
* Configure the Admin, IAM, STS, SQS, and CMC service endpoints to resolve to load balancers in the default service region, which distribute traffic to all the nodes in the default service region. (You will specify a default service region during the HyperStore installation process. For example, you might have service regions boston and chicago, and during installation you can specify that boston is the default service region.)
For detailed guidance on load balancing set-up, request a copy of the HyperStore Load Balancing Best
Practice Guide from your Cloudian Sales Engineering representative.
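As a rough illustration of the open source approach, an HAProxy configuration for the S3 endpoint might contain a fragment like the one below. This is a sketch only -- the node names, addresses, and port are hypothetical, and the HyperStore Load Balancing Best Practice Guide is the authoritative reference:

```
# Hypothetical HAProxy fragment: TCP pass-through of S3 traffic
# to three HyperStore nodes
frontend s3_frontend
    bind *:443
    mode tcp
    default_backend s3_nodes

backend s3_nodes
    mode tcp
    balance roundrobin
    server hs-node1 10.10.1.11:443 check
    server hs-node2 10.10.1.12:443 check
    server hs-node3 10.10.1.13:443 check
```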
Note The HyperStore S3 Service supports PROXY Protocol for incoming connections from a load balancer. This is disabled by default, but after HyperStore installation is complete you can enable it by configuration if you wish. For more information see s3_proxy_protocol_enabled in [Link].
Note For information about how to perform health checks of HyperStore's HTTP(S) services such as
the S3 Service and the CMC, see the "Checking HTTP(S) Responsiveness" section in the Cloudian
HyperStore Administrator's Guide.
Chapter 3. Preparing Your Nodes
This section covers these topics to help you select and prepare your HyperStore host machines:
Note
* For guidance regarding how many nodes you should use to meet your initial workload requirements, consult with your Cloudian sales representative.
* Running HyperStore on VMware ESXi and vSphere is supported, so long as the VMs have comparable specs to those described below. However, avoid KVM or Xen as there are known problems with running HyperStore in those virtualization environments. For more guidance on deploying HyperStore on VMware, ask your Cloudian representative for the "Best Practices Guide: Virtualized Cloudian HyperStore on VMware vSphere and ESXi".
Note If you plan to use erasure coding for object data storage, 2 CPUs per node is recommended. Also, be aware that the higher your erasure coding m value (such as with k+m = 9+3 or 8+4), the higher the need for metadata storage capacity. Consult with your Cloudian representative to ensure that you have adequate metadata storage capacity to support your use case.
If you have not already done so, install CentOS 7.4 Minimal or newer (or RHEL 7.4 or newer) in accordance
with your hardware manufacturer's recommendations.
The following additional requirements apply to host systems on which you intend to install HyperStore:
* "Partitioning of Disks Used for the OS and Metadata Storage" (page 16)
* "Host Firewall Services Must Be Disabled" (page 17)
* "Python 2.7.x is Required" (page 17)
* "Do Not Mount /tmp Directory with 'noexec'" (page 18)
* "root User umask Must Be 0022" (page 18)
* By default CentOS/RHEL allocates a large portion of disk space to a /home partition. This will leave inadequate space for HyperStore metadata storage.
* By default CentOS/RHEL proposes using LVM. Cloudian recommends using standard partitions instead.
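As a quick illustration, the umask requirement in the list above can be verified with a small shell check (a sketch; run as root on each host):

```shell
# HyperStore requires the root user's umask to be 0022
if [ "$(umask)" = "0022" ]; then
  echo "umask OK"
else
  echo "umask must be set to 0022 (current: $(umask))"
fi
```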
Cloudian recommends that you manually create a partition scheme like this:
3.1. Host Hardware and OS Requirements (For Software-Only Installations)
* firewalld
* iptables
* SELinux

To disable firewalld:
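Disabling firewalld uses standard systemd administration; a sketch (run as root on each host):

```shell
# Stop firewalld for the current session and prevent it from
# starting on future boots
systemctl stop firewalld
systemctl disable firewalld
```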
RHEL/CentOS 7 uses firewalld by default rather than the iptables service (firewalld uses iptables commands
but the iptables service itself is not installed on RHEL/CentOS by default). So you do not need to take action in
regard to iptables unless you installed and enabled the iptables service on your hosts. If that's the case, then
disable the iptables service.
To disable SELinux, edit the configuration file /etc/selinux/config so that SELINUX=disabled. Save your change
and then restart the host.
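The SELinux edit can be scripted. The sketch below demonstrates the substitution on a sample copy of the file so it is safe to run anywhere; on a real host the target would be /etc/selinux/config, followed by a restart of the host:

```shell
# Demonstrate flipping SELINUX=... to "disabled" in a config file.
# A sample file is used here; on a HyperStore host the target would
# be /etc/selinux/config.
CONFIG=/tmp/selinux_config_demo
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CONFIG"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$CONFIG"
grep '^SELINUX=' "$CONFIG"   # now reports SELINUX=disabled
```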
HyperStore includes a built-in firewall service (a HyperStore-custom version of the firewalld service) that is configured to protect HyperStore internal services while keeping HyperStore public services open. In fresh installations of HyperStore 7.2 or later, the HyperStore firewall is enabled by default upon the completion of HyperStore installation. In HyperStore systems originally installed as a version older than 7.2 and then later upgraded to 7.2 or newer, the HyperStore firewall is available but is disabled by default. After installation of or upgrade to HyperStore 7.2 or later, you can enable or disable the HyperStore firewall by using the installer's Advanced Configuration Options. For instructions see the "HyperStore Firewall" section in the Cloudian HyperStore Administrator's Guide.
Note For information about HyperStore port usage see "HyperStore Listening Ports" (page 31).
To check the Python version on a host:

# python --version
Python 2.7.5
3.2. Preparing Your Nodes For HyperStore Installation
Note These instructions assume that you have already configured basic networking on each of your
nodes. In particular, each node must already be configured with a hostname and IP address, and the
nodes must be able to reach each other in the network.
1. Log into one of your nodes as root. This will be the node through which you will orchestrate the HyperStore installation for your whole cluster. Also, as part of the HyperStore installation, Puppet configuration management software will be installed and configured in the cluster, and this HyperStore node will become the Configuration Master node for purposes of ongoing cluster configuration management. Note that the Configuration Master node must be one of your HyperStore nodes. It cannot be a separate node outside of your HyperStore cluster.
2. On the node that you've logged into, download or copy the HyperStore product package (the [Link] file) into a working directory. Also copy your Cloudian license file (*.lic file) into that same directory. Pay attention to the license file name since you will need it in the next step.
the next step.
Note The license file must be your cluster-wide license that you have obtained from Cloudian,
not a license for an individual HyperStore Appliance machine (not a cloudian_appliance.lic file).
3. In the working directory run the commands below to unpack the HyperStore package:
# chmod +x [Link]
# ./[Link] <license-file-name>
This creates an installation staging directory named /opt/cloudian-staging/7.5, and extracts the HyperStore package contents into the installation staging directory.
Note The installation staging directory must persist for the life of your HyperStore system. Do
not delete the installation staging directory after completing the install.
6. From the setup tool's main menu, enter "4" for Setup [Link] File and follow the prompts to create a
system survey file with an entry for each of your HyperStore nodes (including the Configuration Master
node). For each node you will enter a region name, hostname, public IPv4 address, data center name,
and rack name. For each node you can also optionally enter an internal interface name.
* For each node's hostname, specify the node's short hostname (as would be returned if you ran the hostname -s command on the node) -- not an FQDN.
Note Do not use the same short hostname for more than one node in your entire HyperStore system. Each node must have a unique short hostname within your entire HyperStore system, even in the case of nodes in different data centers or service regions that have different domains.
* For the region, data center, and rack names the only allowed character types are ASCII alphanumeric characters and dashes. For the region name, letters must be lower case. Do not include the string "s3" in the region name.
  o Make sure the region name matches the region string that you use in your S3 endpoints in your "DNS Set-Up" (page 10).
  o Within a data center, use the same "rack name" for all of the nodes, even if some nodes are on different physical racks than others.
* For each node, you can optionally specify the name of the interface that the node uses for internal cluster communications.
  o For each node for which you do not specify an internal interface name here in the survey file, HyperStore will use a default internal interface name that you will supply later in the HyperStore installation process.
  o For each node for which you do specify an internal interface name here in the survey file, HyperStore will use that internal interface name for that node. The node-specific internal interface name in the survey file overrides the default internal interface name that you will supply later in the HyperStore installation process.
After you've added an entry for each node, return to the setup tool's main menu.
Note Based on your input at the prompts, the setup tool creates a survey file named [Link] in your installation staging directory. This file must remain in your staging directory -- do not delete or move it. For more information about the contents of the survey file, see the Installation Reference topic "Cluster Survey File ([Link])" (page 40).
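For illustration, a survey file for a small two-node, single-region cluster might look like the sketch below. The values are hypothetical, and the exact field layout is documented in the "Cluster Survey File" reference topic -- verify against that before relying on this sketch:

```
boston,hs-node1,10.10.1.11,DC1,RAC1
boston,hs-node2,10.10.1.12,DC1,RAC1,eth1
```

Each line lists region, short hostname, public IPv4 address, data center name, and rack name, with an optional internal interface name at the end (as on the second line).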
7. If you want to change the root password for your nodes, do so now by entering "5" for Change Root
Password and following the prompts. It's recommended to use the same root password for each node.
Otherwise the pre-installation cluster validation tool described later in the procedure will not be fully
functional.
Note If your host machines are "hardened" HyperStore Appliances -- which have the HyperStore Shell already enabled and the root password disabled -- then the "Change Root Password" option will not appear in the setup tool's main menu.
8. Back at the setup tool's main menu enter "6" for Install & Configure Prerequisites. When prompted
about whether you want to perform this action for all nodes in your survey file enter "yes". The tool will
connect to each of your nodes in turn and install the prerequisite packages. You will be prompted to
provide the root password for the cluster nodes (unless an SSH key is present, in which case that will
be used rather than a password). When the prerequisite installation completes for all nodes, return to
the setup tool's main menu.
Note If firewalld is running on your hosts the setup tool prompts you for permission to disable it. And if SELinux is enabled on your hosts, the tool automatically disables it without prompting for permission (more specifically, it changes SELinux to "permissive" mode for the current running session and changes the configuration so it will be disabled on future boots of the hosts). For information on why these services must be disabled on HyperStore host machines see "Operating System Requirements" (page 16).
After the prerequisite installation completes for all nodes (as indicated by console messages from the setup
tool), return to the setup tool's main menu and proceed to "Configuring Network Interfaces, Time Zone, and
Data Disks" (page 21).
1. From the system setup tool's main menu, complete the setup of the Configuration Master node itself:
a. From the system setup tool's main menu, enter "1" for Configure Networking. This displays the
Networking configuration menu.
Here you can review the current network interface configuration for the Configuration Master
node, and if you wish, perform additional configuration such as configuring an internal/back-end
interface. When you are done with any desired network interface configuration changes for this
node, return to the setup tool's main menu.
Note When setting/changing a node's hostname, if you enter a hostname that includes
upper case letters the setup tool automatically converts the hostname to entirely lower
case letters.
b. From the setup tool's main menu, enter "2" for Change Timezone and set the time zone for this
node.
c. From the setup tool's main menu, enter "3" for Setup Disks. This displays the Setup Disks menu.
From the list of disks on the node select the disks to format as HyperStore data disks, for storage of S3 object data. By default the tool automatically selects all disks that are not already mounted and do not contain a /root, /boot or [swap] mount indication. Selected disks display in green font in the disk list. The tool will format these disks with ext4 file systems and assign them mount points /cloudian1, /cloudian2, /cloudian3, and so on. You can toggle (select/deselect) a disk by entering at the prompt the disk's number from the displayed list (such as "3"). Once you're satisfied with the selected list in green font, enter "c" for Configure Selected Disks and follow the prompts to have the tool configure the selected disks.
IMPORTANT! Cloudian recommends using the HyperStore system setup tool to format and mount your data disks. If you have already formatted and mounted your data disks using third party tools, then instead of using the disk configuration instructions in this section follow the guidelines and instructions in "File System Requirements" (page 36).
After you've prepared all your nodes and returned to the setup tool's main menu, proceed to "Running the Pre-
Install Checks Script" (page 23).
1. At the setup tool's main menu enter "r" for Run Pre-Installation Checks. This displays the Pre-Installation Checklist menu.
2. From the Pre-Installation Checklist menu enter "r" for Run Pre-Install Checks. After prompting you for a cluster password the script checks to verify that your cluster meets all requirements for hardware, prerequisite packages, and network connectivity.
Note The script only supports your providing one root password, so if some of your nodes do
not use that password the script will not be able to check them and you may encounter errors
during HyperStore installation if requirements are not met.
At the end of its run the script outputs to the console a list of items that the script has evaluated and the results of the evaluation. You should review any "Warning" items, but they don't necessarily require action (for example, if the hardware specs are less than recommended but still adequate for the installation to proceed). You must resolve any "Error" items before performing the HyperStore software installation, or the installation will fail.
When you’re done reviewing the results, press any key to continue and then exit the setup script. If you
make any system changes to resolve errors found by the pre-install check, run the pre-install
check again afterward to verify that your environment meets HyperStore requirements.
After your cluster has successfully passed the pre-install checks, proceed to "Installing a New HyperStore
System" (page 25).
Chapter 4. Installing a New HyperStore System
This section describes how to do a fresh installation of HyperStore 7.5 software, after "Preparing Your Environment" (page 9) and "Preparing Your Nodes For HyperStore Installation" (page 18). From your Configuration Master node you can install HyperStore software across your whole cluster.
1. On your Configuration Master node, in your installation staging directory, launch the HyperStore
installation script as follows:
[7.5]# ./[Link] -s [Link]
Note If you have not configured your DNS environment for HyperStore (see "DNS Set-Up" (page 10)) and you want to instead use the included dnsmasq utility to resolve HyperStore service endpoints, launch the install script with the configure-dnsmasq option as shown below. This is not appropriate for production systems.
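A sketch of that launch command (the installer script and survey file names are elided in this guide as [Link]; substitute the actual file names from your staging directory):

```shell
./[Link] -s [Link] configure-dnsmasq
```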
For more script launch options, see the Installation Reference topic "[Link]" (page
42).
Note The installer menu includes an item "0" for Run Pre-Installation Checks. This is the same
pre-installation check that you already ran from within the system_setup.sh tool as described in
"Running the Pre-Install Checks Script" (page 23) -- so you can ignore this option in the
installer menu. If you did not run the pre-install check already, then do so from the installer
menu before proceeding any further.
2. From the installer main menu, enter "1" for Install Cloudian HyperStore. Follow the prompts to perform the HyperStore installation across all the nodes in your cluster survey file (which you created earlier during the node preparation task).

During the HyperStore installation you will be prompted to provide cluster configuration information including the following:
* The name of the internal interface that your nodes will use by default for internal cluster communications. For example, eth1.
Note The system will use this default internal interface name for all nodes for which you did not specify an internal interface name in your cluster survey file (which you created during the "Installing HyperStore Prerequisites" (page 18) procedure). If in the survey file you specified internal interface names for some or all of your nodes, the system will use those internal interface names for those nodes, rather than the default internal interface name.
* The starting "replication strategy" that you want to use to protect system metadata (such as usage reporting data and user account information). The replication strategy you enter must be formatted as "<datacenter_name>:<replication_#>". For example, "DC1:3" means that in the data center named DC1, three instances of each system metadata object will be stored (with each instance on a different host). If you are installing HyperStore into multiple data centers you must format this as a comma-separated list specifying the replicas per data center -- for example "DC1:2,DC2:1". The default is 3 replicas per service region, and subsequently the system automatically adjusts the system metadata replication level based on the storage policies that you create. For more on this topic see "Storage of System Metadata" in the Cloudian HyperStore Administrator's Guide.
* Your organization's domain. For example, [Link]. From this input that you provide, the installation script will automatically derive HyperStore service endpoint values. You can accept the derived endpoint values that the script presents to you, or optionally you can enter customized endpoint values at the prompts. For the S3 service endpoint the default is to have one endpoint per service region, but you also have the option of entering multiple comma-separated endpoints within a service region -- if for example you want different data centers within the region to use different S3 service endpoints. If you want to have different S3 endpoints for different data centers within the same service region, the recommended S3 endpoint syntax is s3-<region>.<dcname>.<domain>. See "DNS Set-Up" (page 10) for more details about HyperStore service endpoints.
IMPORTANT!
* Do not use IP addresses in your service endpoints.
* Including "s3" in the <domain> value is not recommended. By default HyperStore generates S3 service endpoints by prepending an "s3-" prefix to your <regionname>.<domain> combination. If you include "s3" within either your domain or your region name, this will result in two instances of "s3" in the system-generated S3 service endpoints, and this may cause S3 service requests to fail for some S3 clients.
* HyperStore by default derives the S3 service endpoint(s) as s3-<regionname>.<domain>. However HyperStore also supports the format s3.<regionname>.<domain> (with a dot after the "s3" rather than a dash) if you specify a custom S3 endpoint with this format.
* Your S3 static website endpoint cannot be the same as your S3 service endpoint. They must be different, or else the static website endpoint will not work properly.
* The NTP servers that HyperStore nodes should connect to for time synchronization. By default the public servers from the [Link] project are used. If you do not allow outbound connectivity from HyperStore hosts (and consequently public NTP servers cannot be reached) you must specify NTP server(s) within your environment that HyperStore hosts should connect to instead. The installation will fail if HyperStore hosts cannot connect to an NTP server.
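As an aside, the "<datacenter_name>:<replication_#>" format described above can be sanity-checked with a small shell sketch before you type it at the prompt (illustrative only; the installer performs its own validation):

```shell
# Check that a metadata replication strategy string such as
# "DC1:2,DC2:1" matches the <dc_name>:<replica_count>[,...] format
strategy="DC1:2,DC2:1"
if echo "$strategy" | grep -Eq '^[A-Za-z0-9-]+:[1-9][0-9]*(,[A-Za-z0-9-]+:[1-9][0-9]*)*$'; then
  echo "format OK"
else
  echo "malformed strategy string"
fi
```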
At the conclusion of the installation an "Install Cloudian HyperStore" sub-menu displays, with an indication of the installation status. If the installation completed successfully, the "Load Schema and Start Services" menu item should show an "OK" status.

After seeing that the "Load Schema and Start Services" status is OK, return to the installer's main menu.
Note The "Install Cloudian HyperStore" sub-menu supports re-executing specific installation operations on specific nodes or on all nodes. This may be helpful if the installer interface indicates that an operation has failed. If one of the operations in the menu indicates an error status, retry that operation by specifying the menu option letter at the prompt (such as "e" for "Load Schema and Start Services").
3. After installation has completed successfully, from the installer's main menu enter "2" for Cluster Management and then enter "d" for Run Validation Tests. This executes some basic automated tests to confirm that your HyperStore system is working properly. The tests include S3 operations such as creating an S3 user group, creating an S3 user, creating a storage bucket for that user, and uploading and downloading an S3 object.
For first steps to set up and try out your new HyperStore system, see "Getting Started with a New HyperStore
System" in the Cloudian HyperStore Administrator's Guide.
Note For troubleshooting information, see the Installation Reference topic "Installation Troubleshooting" (page 30).
Chapter 5. HyperStore Installation Reference
This section of the installation documentation provides reference information that you may find useful in some
installation scenarios and circumstances.
5.1. Installation Troubleshooting
On the Configuration Master node (on which you're running the install script):

* <installation-staging-directory>/[Link]
* /var/log/puppetserver/[Link]

On each Configuration Agent node (each node on which you're installing HyperStore):

* /tmp/puppet_agent.log
Scanning these logs for error or warning messages should help you identify the stage at which the installation
encountered a problem, and the nature of the problem. This information can further your own troubleshooting
efforts, and also can help Cloudian Support pinpoint the problem in the event that you need assistance from
Support.
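Scanning can be as simple as grepping the logs for error and warning strings. The sketch below runs against a generated sample file so it is safe to try anywhere; on a real Configuration Master or Agent node you would point it at the log paths listed above:

```shell
# Count error/warning lines in an install log (sample file used here;
# substitute a real log path on a HyperStore node)
LOG=/tmp/sample_install.log
printf 'INFO starting run\nERROR puppet cert mismatch\nWARN slow disk\n' > "$LOG"
grep -icE 'error|warn' "$LOG"   # prints 2 (one ERROR line, one WARN line)
```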
Note When you use system_setup.sh to prepare your nodes for HyperStore installation, that tool writes its logging output to system_setup.[Link] in the same directory in which the system_setup.sh tool is located (typically your installation staging directory).
To run the installer in debug mode, launch it with the -d option:

# ./[Link] -d
For example, if you encounter an error while running the installer in regular (non-debug) mode, you can exit
the installer menu and then launch the installer again in debug mode. You can then either re-execute the
installation starting from the beginning, or re-execute the installation starting from the step that had previously
failed. If you had partially run the installation, then when you subsequently select Install Cloudian HyperStore
at the main menu a sub-menu will display to let you choose from among several installation tasks to run again.
When run in debug mode, the installer will write highly granular messages to both the console and the install-
ation log ([Link]).
Solution
As suggested by the warning messages, the domain part is missing for the host named "cloudian-singlenode".
To resolve this edit the /etc/hosts file or /etc/[Link] file.
1. Edit the /etc/hosts file and make sure the following entry exists:
<ip-address> [Link] cloudian-singlenode
2. Edit the /etc/[Link] file and make sure the following entry exists:
domain [Link]
Verify that the facter fqdn and hostname -f commands output '[Link]' to the console.
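The verification can also be scripted. The helper below is a hypothetical sketch (not part of HyperStore) that simply checks whether the name this host reports is fully qualified:

```shell
# has_domain: report whether a hostname is fully qualified (contains a dot).
has_domain() {
  case "$1" in
    *.*) echo "OK: $1 is fully qualified" ;;
    *)   echo "PROBLEM: '$1' has no domain part" ;;
  esac
}

# Check the name this host actually reports:
has_domain "$(hostname -f)"
```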
ISSUE: Puppet is unable to propagate configuration settings from the Configuration Master node to the Con-
figuration Agent nodes, and in the puppet_agent.log and/or puppet_server.log you see errors indicating cer-
tificate problems or access failures.
Solution
Try going to the installer’s "Advanced Options" sub-menu and executing task [h] — "Remove Existing Puppet
SSL Certificates". Then go back to the main menu and choose the appropriate action below, depending on
what you were doing when you encountered the Puppet run failure:
l If you are doing the initial installation of your HyperStore cluster, choose "Install Cloudian HyperStore",
then execute task "Install Packages and Configure Nodes [includes Run Puppet]".
l If you are performing post-installation configuration tasks, choose "Cluster Management", then execute
task "Push Configuration Settings to the Cluster [Run Puppet]".
ISSUE: While working with the installation script, you get a console message indicating that Puppet access is
locked.
Solution
The Puppet process can sometimes end up left in a "locked" state if a Puppet run is interrupted, such as by a
Ctrl-<c> command or a host shutdown.
To unlock Puppet, go to the installer’s "Advanced Options" sub-menu and execute task [j] — "Remove Puppet
Access Lock". Then go back to the main menu and choose the appropriate Puppet-running action below,
depending on what you were doing when you encountered the Puppet lock error:
l If you are doing the initial installation of your HyperStore cluster, choose "Install Cloudian HyperStore",
then execute task "Install Packages and Configure Nodes [includes Run Puppet]".
l If you are performing post-installation configuration tasks, choose "Cluster Management", then execute
task "Push Configuration Settings to the Cluster [Run Puppet]".
5.2. HyperStore Listening Ports
Only the listening ports that need to receive traffic from outside the HyperStore system should be open to
such traffic. All other ports must be closed to traffic from outside the system, for system security.
Each HyperStore node includes a built-in HyperStore Firewall that implements port restrictions appropriate to a
HyperStore cluster. The HyperStore Firewall is disabled by default in HyperStore systems that were originally
installed as a version older than 7.2, and enabled by default in HyperStore systems that were originally
installed as version 7.2 or newer. You can enable/disable the firewall on all HyperStore nodes by using the installer's
Advanced Configuration Options. For instructions see "HyperStore Firewall" in the Cloudian HyperStore
Administrator's Guide.
Note If you are installing HyperStore across multiple data centers and/or multiple service regions,
the HyperStore nodes in each data center and region will need to be able to communicate with the
HyperStore nodes in the other data centers and regions. This includes services that listen on the
internal interface (such as Cassandra, the HyperStore Service, and Redis). Therefore you will need to
configure your networking so that the internal networks in each data center and region are connected
to each other (for example, by using a VPN).
Service          Port     Interface    Use

S3 Service       81       All          Requests relayed by an HAProxy load balancer using the PROXY
                                       Protocol (if enabled by configuration; see s3_proxy_protocol_enabled
                                       in [Link])

S3 Service       4431     All          Requests relayed by an HAProxy load balancer using the PROXY
                                       Protocol with SSL (if enabled by configuration)

Admin Service    19443    All          IMPORTANT ! The Admin Service is intended to be accessed only by
                                       the CMC and by system administrators using other types of clients
                                       (such as cURL). Do not expose the Admin Service to a public network.

Redis Monitor    9078     Internal     Communication between primary and backup Redis Monitor instances
5.3. Outbound Internet Access
l Smart Support — The Smart Support feature (also known as "Phone Home") securely transmits Hyper-
Store daily diagnostic information to Cloudian Support over the internet. HyperStore supports con-
figuring this feature to use an explicit forward proxy for its outbound internet access (after installation,
the relevant settings in [Link] are phonehome_proxy_host and the other phonehome_proxy_*
settings that follow after it). To use a forward proxy with this feature you should configure your forward
proxy to support access to *.[Link] (that is, to any sub-domain of s3-sup-
[Link]).
l Auto-Tiering and Cross-Region Replication — If you want to use either the auto-tiering feature or the
cross-region replication feature (CRR), the S3 Service running on each of your HyperStore nodes
requires outbound internet access. These features do not support configuring an explicit forward proxy,
but you can use transparent forward proxying if you wish. (Setting up transparent forward proxying is
outside the scope of this documentation.) For more information on these features see the "Auto-Tiering
Feature Overview" and "Cross-Region Replication Overview" sections in the Cloudian HyperStore
Administrator's Guide.
l Pre-Configured ntpd — Accurate, synchronized time across the cluster is vital to HyperStore service. In
each of your HyperStore data centers, four of your HyperStore nodes are automatically configured to act as
internal NTP servers. (If a HyperStore data center has four or fewer nodes, then all the nodes in the
data center are configured as internal NTP servers.) These internal NTP servers are configured to con-
nect to external NTP servers — by default the public servers from the [Link] project. In order to
connect to the external NTP servers, the internal NTP servers must be allowed outbound internet
access. This feature does not support configuring an explicit forward proxy, but you can use transparent
forward proxying if you wish. (Setting up transparent forward proxying is outside the scope of this doc-
umentation.)
IMPORTANT ! If you do not allow HyperStore hosts to have outbound connectivity to the inter-
net, then during the interactive installation process -- when you are prompted to specify the
NTP servers that HyperStore hosts should connect to -- you must specify NTP servers within
your environment, rather than the public NTP servers that HyperStore connects to by default. If
HyperStore hosts cannot connect to any NTP servers, the installation will fail.
After HyperStore installation, to see which of your HyperStore nodes are internal NTP servers, log into
the CMC and go to Cluster → Cluster Config → Cluster Information. On that CMC page you can also
see your configured list of external NTP servers.
For more information on HyperStore's NTP set-up, see the "NTP Automatic Set-Up" section in the Cloud-
ian HyperStore Administrator's Guide.
Cloudian recommends that you use the HyperStore system_setup.sh tool to configure the disks and mount
points on your HyperStore nodes, as described in "Configuring Network Interfaces, Time Zone, and Data
Disks" (page 21). The tool is part of the HyperStore product package (when you extract the .bin file).
If you do not use the system setup tool for disk setup, use the information below to make sure that your
hosts meet HyperStore file system requirements.
5.4. File System Requirements
l HyperStore will by default use the drive that the OS is on for storing system metadata (in the Metadata
DB, the Credentials DB, and the QoS DB). Cloudian recommends that you dedicate two drives to the
OS and system metadata in a RAID-1 mirroring configuration. Preferably the OS/metadata drives
should be SSDs.
l You must format all other available hard drives with ext4 file systems mounted on raw disks. These
drives will be used for storing S3 object data. RAID is not necessary on the S3 object data drives.
l Mirror the OS on the two SSDs. For more detailed recommendations for partitioning these disks see
"Partitioning of Disks Used for the OS and Metadata Storage" (page 16).
l Format each of the 12 HDDs with ext4 file systems and configure mount points such as /cloudian1,
/cloudian2, /cloudian3 and so on.
Note On the HDDs for storing object data, HyperStore does not support XFS file systems; VirtIO
disks; Logical Volume Manager (LVM); or Multipathing. For questions regarding these unsupported
technologies, contact Cloudian Support.
If in your installation cluster some hosts have more disks than others, use as much overlap in mount point nam-
ing as possible. For example, suppose that most of your hosts have 10 disks for storing object data while one
host has 12 disks. In this scenario, all of the hosts can have mount points /cloudian1, /cloudian2, /cloudian3,
and so on up through /cloudian10, while the one larger host has those same mount points plus also /cloud-
ian11 and /cloudian12.
Note Although uniformity of mount point naming across nodes (to the extent possible) is desirable for
simplicity's sake, the HyperStore installation does support a way to accommodate differences in the
number or names of mount points across nodes -- this is described in "A Data Directory Mount Point
List ([Link]) Is Required" (page 39).
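The mount point naming scheme described above can be set up with a short loop. The sketch below is a demo only, created under /tmp; on a real node the mount points would sit at the filesystem root and each data disk would then be formatted and mounted there:

```shell
# Create twelve data mount points, cloudian1 through cloudian12.
# Demo uses a /tmp prefix; on a real node you would create these at "/"
# and mount one formatted data disk at each directory.
base=/tmp/demo-node
for i in $(seq 1 12); do
  mkdir -p "$base/cloudian$i"
done
ls -d "$base"/cloudian* | wc -l
```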
5.4.3. Option for Putting the Metadata DB on Dedicated Drives Rather Than
the OS Drives
Regarding the Metadata DB (built on Cassandra), another supported configuration is to put your Cassandra
data on dedicated drives, rather than on the OS drives. In this case you would have:
l OS drives in RAID-1 configuration. The Credentials DB and QoS DB will also be written to these drives.
l Cassandra drives in RAID-1 configuration. On these drives will be written Cassandra data and also the
Cassandra commit log.
Note You must create a Cassandra data directory named <mountpoint>/cassandra (for
example /cassandradb/cassandra) and a Cassandra commit log directory named
<mountpoint>/cassandra_commit (for example /cassandradb/cassandra_commit).
l Multiple drives for S3 object data (with mount points for example /cloudian1, /cloudian2, /cloudian3 and
so on), with no need for RAID protection.
If you are not using UUIDs in fstab currently, follow the instructions below to modify your fstab so that it uses
UUIDs for the devices to which you will mount S3 object data directories (you do not need to do this for the
OS/metadata mount points).
1. Check whether your fstab is currently using UUIDs for your S3 object data drives. In the example below,
there are two S3 object data drives and they are currently identified by device name, not by UUID.
# cat /etc/fstab
...
...
/dev/sdb1 /cloudian1 ext4 rw,noatime,barrier=0,data=ordered,errors=remount-ro 0 1
/dev/sdc1 /cloudian2 ext4 rw,noatime,barrier=0,data=ordered,errors=remount-ro 0 1
3. Retrieve the UUIDs for your devices by using the blkid command.
# blkid
...
...
/dev/sdb1: UUID="a6fed29c-97a0-4636-afa9-9ba23e1319b4" TYPE="ext4"
/dev/sdc1: UUID="rP38Ux-3wzO-sP3Y-2CoD-2TDU-fjpO-ffPFZV" TYPE="ext4"
# Revised line
6. After editing fstab so that each device on which you will store S3 data is identified by a UUID, save your
changes and close the fstab file.
7. Remount the host’s file systems:
# mount -a
Repeat this process for each host on which you will install HyperStore.
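The device-name-to-UUID substitution can be scripted. The sketch below is illustrative only and works on a demo copy of fstab — the file path, the device names, and the second UUID value are hypothetical; on a real host you would obtain each UUID with blkid and edit /etc/fstab itself:

```shell
# Demo fstab with device-name entries (as in step 1 above).
fstab=/tmp/fstab.demo
cat > "$fstab" <<'EOF'
/dev/sdb1 /cloudian1 ext4 rw,noatime,barrier=0,data=ordered,errors=remount-ro 0 1
/dev/sdc1 /cloudian2 ext4 rw,noatime,barrier=0,data=ordered,errors=remount-ro 0 1
EOF

# Replace each device name with its UUID (values here are illustrative;
# use the UUIDs reported by blkid on the real host).
sed -i -e 's|^/dev/sdb1 |UUID=a6fed29c-97a0-4636-afa9-9ba23e1319b4 |' \
       -e 's|^/dev/sdc1 |UUID=0f1e2d3c-4b5a-6978-8796-a5b4c3d2e1f0 |' "$fstab"

cat "$fstab"
```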
Note If you use the system_setup.sh script to configure the disks and mount points on your nodes, the
script creates the needed mount point list files automatically and you can ignore the instructions below.
If all your nodes have the same data mount points -- for example if all nodes have as their data mount points
/cloudian1, /cloudian2, and so on through /cloudian12 -- you only need to create one mount point list file. If
some nodes have a different set of mount points than do other nodes -- for example if some nodes have more
data disks than other nodes -- you will need to create a default mount point list file and also a node-specific
mount point list file for each node that differs from the default.
In your installation staging directory create a file named [Link] and in the file enter one line for each of your
S3 data directory mount points, with each line using the format below.
<deviceName> <mountPoint>
/dev/sdc1 /cloudian1
/dev/sdd1 /cloudian2
...
Optionally, you can also include an entry for the Cassandra data directory and an entry for the Cassandra com-
mit log directory, if you do not want this data to be put on the same device as the operating system (see
"Option for Putting the Metadata DB on Dedicated Drives Rather Than the OS Drives" (page 37)). If you do
not specify these Cassandra directory paths in [Link], then by default the system automatically puts Cas-
sandra data and commit log directories on the same device on which the operating system resides.
Do not use symbolic links when specifying your mount points. The HyperStore system does not support sym-
bolic links for data directories.
If some of your hosts have data directory mount point lists that differ from the cluster default, in the install-
ation staging directory create a <hostname>_fslist.txt file for each such host. For example, along with the
default [Link] file that specifies the mount points that most of your hosts use, you could also have a cloudian-
node11_fslist.txt file and a cloudian-node12_fslist.txt file that specify mount points for two non-standard nodes
that have hostnames cloudian-node11 and cloudian-node12.
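A mount point list in this format can also be generated from a node's existing mounts. The sketch below is illustrative — the simulated mount output and the output file path are hypothetical, and on a real node you would pipe the output of the mount command instead of the printf:

```shell
# Extract "<deviceName> <mountPoint>" pairs for /cloudianN mounts from
# mount-style output ("device on mountpoint type fs (options)").
printf '%s\n' \
  '/dev/sdc1 on /cloudian1 type ext4 (rw,noatime)' \
  '/dev/sdd1 on /cloudian2 type ext4 (rw,noatime)' |
awk '$3 ~ /^\/cloudian[0-9]+$/ { print $1, $3 }' > /tmp/fslist.demo.txt

cat /tmp/fslist.demo.txt
```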
To check a device's current reserved blocks setting:

# tune2fs -l <device>

To set the reserved blocks percentage to 0 for a device used for S3 object data storage:

# tune2fs -m 0 <device>

For example:

# tune2fs -m 0 /dev/sdc1
5.5. Cluster Survey File ([Link])

Note The survey file must be kept in the installation staging directory, not in a different directory. Do not
delete or move the survey file.
The survey file contains one line for each HyperStore host in your cluster (including the Configuration Master
host), with each line using the format below.
<regionname>,<hostname>,<ip4-address>,<datacenter-name>,<rack-name>[,<internal-interface>]
l <regionname> — HyperStore service region in which the host is located. The HyperStore system sup-
ports having multiple service regions with each region having its own independent storage cluster and
S3 object inventory, and with S3 application users able to choose a storage region when they create
storage buckets. Even if you will have only one region you must give it a name. The maximum allowed
length is 52 characters. The only allowed character types are lower case ASCII alphanumerical char-
acters and dashes (a-z0-9 and dashes). Do not include the string "s3" in the region name. Make sure
the region name matches the region string that you use in your S3 endpoints in your "DNS Set-Up"
(page 10). For more information on regions see "Nodes, Data Centers, and Regions" in the Introduction
section of the Cloudian HyperStore Administrator's Guide.
l <hostname> — Short hostname of the host (as would be returned if you ran the hostname -s command
on the host). This must be the node's short hostname, not an FQDN.
Note Do not use the same short hostname for more than one node in your entire HyperStore sys-
tem. Each node must have a unique short hostname within your entire HyperStore system, even
in the case of nodes in different data centers or service regions that have different domains. For
example, in your HyperStore system do not have two nodes with the same short hostname vega
for which the FQDN of one is [Link] and the FQDN of the other is [Link].
l <ip4-address> — IP address (v4) that the hostname resolves to. Do not use IPv6. This should be the IP
address associated with the host's default, external interface -- not an internal interface.
l <datacenter-name> — Name of the data center in which the host machine is located. The maximum
allowed length is 256 characters. The only allowed character types are ASCII alphanumerical char-
acters and dashes (A-Za-z0-9 and dashes).
l <rack-name> — Name of the server rack in which the host machine is located. The maximum allowed
length is 256 characters. The only allowed character types are ASCII alphanumerical characters and
dashes (A-Za-z0-9 and dashes).
Note Within a data center, use the same "rack name" for all of the nodes, even if some nodes
are on different physical racks than others. For example, if you have just one data center, all the
nodes must use the same rack name. And if you have two data centers named DC1 and DC2,
all the nodes in DC1 must use the same rack name as the other nodes in DC1; and all the
nodes in DC2 must use the same rack name as the other nodes in DC2.
l [<internal-interface>] — Use this field only for hosts that will use a different network interface for internal
cluster traffic than the rest of the hosts in the cluster do. For example, if most of your hosts will use "eth1"
for internal cluster traffic, but two of your hosts will use "eth2" instead, use this field to specify "eth2" for
each of those two hosts, and leave this field empty for the rest of the hosts in your survey file. (Later in
the installation procedure you will have the opportunity to specify the default internal interface for the
hosts in your cluster -- the internal interface used by all hosts for which you do not specify the internal-
interface field in your survey file.) If all of your hosts use the same internal network interface — for
example if all hosts use "eth1" for internal network traffic — then leave this field empty for all hosts in the
survey file.
Note Cassandra, Redis, and the HyperStore Service are among the services that will utilize the
internal interface for intra-cluster communications.
This first example survey file is for a HyperStore system with just one node:
region1,arcturus,[Link],DC1,RAC1
This second example survey file is for a three-node HyperStore cluster with just one service region, one data
center, and one rack:
tokyo,cloudian-vm7,[Link],DC1,RAC1
tokyo,cloudian-vm8,[Link],DC1,RAC1
tokyo,cloudian-vm9,[Link],DC1,RAC1
This third example survey file below is for a HyperStore installation that spans two regions, with the first region
comprising two data centers and the second region comprising just one data center. Two of the hosts use a dif-
ferent network interface for internal network traffic than all the other hosts do.
boston,hyperstore1,[Link],DC1,RAC1
boston,hyperstore2,[Link],DC1,RAC1
boston,hyperstore3,[Link],DC1,RAC1
boston,hyperstore4,[Link],DC2,RAC1
boston,hyperstore5,[Link],DC2,RAC1
chicago,hyperstore6,[Link],DC3,RAC1
chicago,hyperstore7,[Link],DC3,RAC1
chicago,hyperstore8,[Link],DC3,RAC1,eth2
chicago,hyperstore9,[Link],DC3,RAC1,eth2
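A survey file like the examples above can be sanity-checked before installation. The helper below is an illustrative sketch (not part of the HyperStore tooling) that enforces the field count and the region-name rules described above; the demo file path and IP address are hypothetical:

```shell
# check_survey: verify each line has 5 or 6 comma-separated fields and
# that the region name (field 1) follows the rules described above:
# lower-case alphanumerics and dashes, at most 52 characters, no "s3".
check_survey() {
  awk -F, '
    NF < 5 || NF > 6     { print "line " NR ": expected 5 or 6 fields"; bad = 1 }
    $1 !~ /^[a-z0-9-]+$/ { print "line " NR ": invalid region name"; bad = 1 }
    length($1) > 52      { print "line " NR ": region name too long"; bad = 1 }
    $1 ~ /s3/            { print "line " NR ": region name contains s3"; bad = 1 }
    END { exit bad }
  ' "$1"
}

# Demo on a hypothetical one-line survey file:
printf 'tokyo,cloudian-vm7,10.10.1.7,DC1,RAC1\n' > /tmp/survey.demo.csv
check_survey /tmp/survey.demo.csv && echo "survey file looks OK"
```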
5.6. [Link]
The [Link] tool (also known as "the installer") serves several purposes including:
l Installation of a HyperStore cluster (for detail see "Installing a New HyperStore System" (page 25))
l Implementing advanced, semi-automated system configuration changes (for detail see "Installer
Advanced Configuration Options" (page 45))
l Pushing configuration file edits to the cluster and restarting services to apply the changes (for detail see
"Pushing Configuration File Edits to the Cluster and Restarting Services" in the Cloudian HyperStore
Administrator's Guide)
The [Link] tool is in your installation staging directory on your Configuration Master node. To per-
form advanced configurations, or to push configuration file changes to the system and restart services, you
would launch the tool simply like this, without using additional command line options:
# ./[Link]
To install your HyperStore cluster you would launch the tool with your cluster survey file specified, like this:

# ./[Link] -s [Link]

Or like this if you are not using your DNS environment to resolve HyperStore service endpoints and you want to
use the bundled tool dnsmasq instead (which is not appropriate for production systems):

# ./[Link] -s [Link] configure-dnsmasq

However, the script does support additional command line options. The syntax is as follows:
Note If you use multiple options, on the command line place options that start with a "-" (such as -s
<survey-filename> or -d) before options that do not (such as no-hosts or configure-dnsmasq).
If you are using the HyperStore Shell (HSH), you can launch the installer with this command:

$ hspkg install

The installer's options are the same regardless of whether it is launched from the HSH command line or the
OS command line.
Note Exit the installer when you’re done using it. Do not leave it running. Certain automated system
tasks invoke the installer and cannot do so if it is already running.
l [-s <survey-filename>] — Name of your cluster survey file (including the full path to the file). If you do not
specify the survey file name argument, the script will prompt you for the file name during installation.
l [-k <ssh-private-key-filename>] — The Configuration Master employs SSH for secure communication
with the rest of your HyperStore installation nodes. By default the install script automatically creates an
SSH key pair for this purpose. But if instead you would prefer to use your own existing SSH key pair for
this purpose, you can use the installer's -k <ssh-private-key-filename> option to specify the name of the
private key file (including the full path to the file). When you run the install script it will copy the private
key and corresponding public key to the installation staging directory, and in the staging directory the
key file will be renamed to cloudian-installation-key. Then from the staging directory, the public key file
[Link] will be copied to each node on which you are installing HyperStore.
Note This usage information mentions more command line options than are described here in
this Help topic. This is because the usage information includes installer options that are meant
for HyperStore internal system use, such as options that are invoked by the CMC when you use
the CMC to add nodes to your cluster or remove nodes from your cluster. You should perform
such operations through the CMC, not directly through the installer. The CMC implements auto-
mations and sanity checks beyond what is provided by the install script alone.
l [no-hosts] — Use this option if you do not want the install tool to append entries for each HyperStore
host on to the /etc/hosts file of each of the other HyperStore hosts. By default the tool appends to these
files so that each host is resolvable to the other hosts by way of the /etc/hosts files.
l [configure-dnsmasq] — Use this option if you want the install tool to install and configure dnsmasq, a
lightweight utility that can provide domain resolution services for testing a small HyperStore system. If
you use this option the installer installs dnsmasq and automatically configures it for resolution of Hyper-
Store service domains. If you did not create DNS entries for HyperStore service domains as described
in "DNS Set-Up" (page 10), then you must use the configure-dnsmasq option in order for the system to
be functional when you complete installation. Note that using dnsmasq is not appropriate in a pro-
duction environment.
Note If you do not have the installer install dnsmasq during HyperStore installation, and then
later you decide that you do want to use dnsmasq for your already installed and running Hyper-
Store system, do not use the configure-dnsmasq command line option when you re-launch the
installer. Instead, re-launch the installer with no options and use the Installer Advanced Con-
figuration Options menu to enable dnsmasq for your system.
l [no-firewall] — If this option is used, the HyperStore firewall will not be enabled upon HyperStore install-
ation. By default the HyperStore firewall will be enabled upon completion of a fresh HyperStore install-
ation. For more information about the HyperStore firewall see the "HyperStore Firewall" section in the
Cloudian HyperStore Administrator's Guide.
Note If you specify the force option when running the installer, the force option will "stick" and will
be used automatically for any subsequent times the installer is run to install additional nodes
(such as when you do an "Add Node" operation via the Cloudian Management Console, which
invokes the installer in the background). To turn the force option off so that it is no longer auto-
matically used when the installer is run to add more nodes, launch the installer and go to the
Advanced Configuration Options. Then choose option t for Configure force behavior and follow
the prompts.
Note Even if the force option is used, the installer will abort if it detects an error condition on the
host that would prevent successful installation.
l [uninstall] — If you use this option when launching the installer, the installer main menu will include an
additional menu item -- "Uninstall Cloudian HyperStore".
Use this menu option only if you want to delete the entire HyperStore system, on all nodes, including
any metadata and object data stored in the system. You may want to use this Uninstall Cloudian Hyper-
Store option, for example, after completing a test of HyperStore -- if you do not want to retain the test sys-
tem.
IMPORTANT ! Do not use this option to uninstall a single node from a HyperStore system that
you want to retain (such as a live production system). For instructions on removing a node from
a HyperStore system see the "Removing a Node" section in the Cloudian HyperStore Admin-
istrator's Guide.
5.7. system_setup.sh
The system_setup.sh tool is for setting up nodes on which you will install HyperStore software, either during ini-
tial cluster installation or during cluster expansion. For basic information about using system_setup.sh, change
into the installation staging directory and run the following command:
# ./system_setup.sh --help
5.8. Installer Advanced Configuration Options

To use the installer's advanced configuration options, launch the installer from the installation staging
directory on the Configuration Master node:

# ./[Link]

Or, if you are using the HyperStore Shell (HSH), launch the installer with:

$ hspkg install

Once launched, the installer's menu options (such as referenced in the steps below) are the same regardless
of whether it was launched from the HSH command line or the OS command line.
At the installer main menu's Choice prompt enter 4 for Advanced Configuration Options.
From this menu you can choose the type of configuration change that you want to make and then proceed
through the interactive prompts to specify your desired settings.
For information about each of these options, see the "Reference -> Configuration Settings -> Installer
Advanced Configuration Options" section of the Cloudian HyperStore Administrator's Guide.
Note As a best practice, you should complete basic HyperStore installation first and confirm that things
are working properly (by running the installer’s Validation Tests, under the "Cluster Management"
menu) before you consider using the installer's advanced configuration options.