Carbonite Migrate for Linux User's Guide
Notices
Carbonite Migrate for Linux User's Guide, version 8.4.2, Thursday, October 21, 2021
If you need technical assistance, you can contact CustomerCare. All basic configurations outlined in the
online documentation will be supported through CustomerCare. Assistance and support for advanced
configurations may be referred to a Pre-Sales Systems Engineer or to Professional Services.
Man pages are installed and available on Carbonite Availability and Carbonite Migrate Linux servers. These
documents are bound by the same license agreement as the software installation.
This documentation is subject to the following: (1) Change without notice; (2) Furnished pursuant to a
license agreement; (3) Proprietary to the respective owner; (4) Not to be copied or reproduced unless
authorized pursuant to the license agreement; (5) Provided without any expressed or implied warranties; (6)
Does not entitle Licensee, End User or any other party to the source code or source code documentation of
anything within the documentation or otherwise provided that is proprietary to Carbonite; and (7) All Open
Source and Third-Party Components (“OSTPC”) are provided “AS IS” pursuant to that OSTPC’s license
agreement and disclaimers of warranties and liability.
Carbonite and/or its affiliates and subsidiaries in the United States and/or other countries own/hold rights to
certain trademarks, registered trademarks, and logos. Hyper-V and Windows are registered trademarks of
Microsoft Corporation in the United States and/or other countries. Linux is a registered trademark of Linus
Torvalds. vSphere is a registered trademark of VMware. All other trademarks are the property of their
respective companies. For a complete list of trademarks registered to other companies, please visit that
company’s website.
© 2021 Carbonite. All rights reserved.
Contents
Chapter 1 Carbonite Migrate overview 5
Chapter 2 Requirements 6
Replication capabilities 7
Chapter 3 Carbonite Replication Console 9
Carbonite Replication Console requirements 11
Console options 12
Chapter 4 Managing servers 15
Adding servers 24
Providing server credentials 26
Viewing server details 27
Editing server properties 29
General server properties 30
Server licensing 31
Server setup properties 33
Carbonite Migrate queue 36
Source server properties 40
Target server properties 41
Log file properties 42
E-mail notification configuration 44
Viewing server logs 46
Managing VMware servers 48
Chapter 5 Files and folders migration 49
Files and folders migration requirements 50
Creating a files and folders migration job 55
Managing and controlling files and folders migration jobs 66
Viewing files and folders migration job details 76
Validating a files and folders migration job 80
Editing a files and folders migration job 81
Viewing a files and folders migration job log 83
Cutting over files and folders migration jobs 85
Chapter 6 Full server migration 86
Full server migration requirements 87
Creating a full server migration job 94
Managing and controlling full server migration jobs 105
Viewing full server migration job details 115
Validating a full server migration job 119
Editing a full server migration job 120
Viewing a full server migration job log 122
Cutting over full server migration jobs 124
Chapter 7 Full server to ESX migration 125
Full server to ESX migration requirements 126
Creating a full server to ESX migration job 133
Managing and controlling full server to ESX migration jobs 152
Viewing full server to ESX migration job details 162
Validating a full server to ESX migration job 166
Editing a full server to ESX migration job 167
Viewing a full server to ESX migration job log 169
Cutting over full server to ESX migration jobs 171
Chapter 8 DTSetup 172
Running DTSetup 173
Setup tasks 174
Activating your server 175
Modifying security groups 176
Configuring server settings 177
Configuring driver performance settings 178
Starting and stopping the service 179
Starting DTCL 180
Viewing documentation and troubleshooting tools 181
DTSetup menus 182
Chapter 9 Security 183
Adding users to the security groups 184
Chapter 10 Special network configurations 185
Firewalls 186
NAT 187
Chapter 1 Carbonite Migrate overview
Carbonite Migrate is a comprehensive migration solution. It allows you to move an entire server, known
as a source, by mirroring an image of that source to another server, known as the target. The source and
target servers can be physical or virtual. The image of the source contains the server's system state (the
server's configured operating system and applications) and all of the source server’s data. You can also
migrate just a source's data, in which case the target's system state (the target's configured operating
system and applications) will be used with the source's data.
Carbonite Migrate uses patented data replication technology that allows users to continue accessing
and changing data during the migration. As changes are made on the source, replication keeps the
image of the source stored on the target up-to-date. Carbonite Migrate replicates, in real-time, only the
file changes, not the entire file, allowing you to more efficiently use resources. When you are ready to
cut over to the new server, Carbonite Migrate applies the source system state and, after a reboot, the
source is available and running on what was the target server hardware.
Chapter 2 Requirements
Replication capabilities
Carbonite Migrate replicates all file and directory data in the supported Linux file systems. Carbonite
Migrate does not replicate extended attributes (xattr), ACLs, or items that are not stored on the file
system, such as pseudo-file systems like /proc and /sys. In addition, note the following.
l Carbonite Migrate is compatible with NFS and Samba services as long as they are mounted on
top of Carbonite Migrate. (The mount must be at the origination point, not a remote mounted
point.) Additionally, NFS and Samba should be started after the Double-Take service.
l If you select data stored on a recursive mount point for replication, a mirror will never finish.
Carbonite Migrate does not check for data stored on recursive mount points.
l If any directory or file contained in your replication set specifically denies permission to the account
running the Double-Take service, the attributes of the file on the target will not be updated
because of the lack of access.
l Sparse files will become full size, zero filled files on the target.
l If you are using soft links, keep in mind the following.
l If a soft link to a directory is part of a replication set rule’s path above the entry point to the
replication set data, that link will be created on the target as a regular directory if it must be
created as part of the target path.
l If a soft link exists in a replication set (or is moved into a replication set) and points to a file or
directory inside the replication set, Carbonite Migrate will remap the path contained in that
link based on the Carbonite Migrate target path when the option RemapLink is set to the
default value (1). If RemapLink is set to zero (0), the path contained in the link will retain its
original mapping.
l If a soft link exists in a replication set (or is moved into a replication set) and points to a file or
directory outside the replication set, the path contained in that link will retain its original
mapping and is not affected by the RemapLink option.
l If a soft link is moved out of or deleted from a replication set on the source, that link will be
deleted from the target.
l If a soft link to a file is copied into a replication set on the source and the operating system
copies the file that the link pointed to rather than the link itself, then Carbonite Migrate
replicates the file copied by the operating system to the target. If the operating system does
not follow the link, only the link is copied.
l If a soft link to a directory is copied into a replication set on the source and the operating
system copies the directory and all of its contents that the link pointed to rather than the link
itself, then Carbonite Migrate replicates the directory and its contents copied by the
operating system to the target. If the operating system does not follow the link, only the link
is copied.
l If any operating system commands, such as chmod or chown, are directed at a soft link on
the source and the operating system redirects the action to the file or directory which the
link references, then if the file or directory referenced by the link is in a replication set, the
operation will be replicated for that file to the target.
l The operating system redirects all writes to soft links to the file referenced by the link.
Therefore, if the file referenced by the symbolic link is in a replication set, the write operation
will be replicated to the target.
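To see which of the soft link cases above applies to a given link, you can inspect where the link resolves relative to your replicated path. The paths below are hypothetical examples, with /data standing in for a replicated directory and /opt/archive for a directory outside the replication set.
# /data is assumed to be replicated; /opt/archive is not.
ln -s /data/reports/current.csv /data/latest.csv   # link and destination both inside /data
ln -s /opt/archive/old.csv /data/previous.csv      # link inside /data, destination outside
# readlink -f shows where each link resolves, so you can tell whether the RemapLink
# behavior or the retain-original-mapping behavior described above applies.
readlink -f /data/latest.csv
readlink -f /data/previous.csv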
l If you are using hard links, keep in mind the following.
l If a hard link exists (or is created) only inside the replication set on the source, having no
locations outside the replication set, the linked file will be mirrored to the target for all
locations and those locations will be linked if all link locations on the target exist on the same
partition.
l If a hard link crosses the boundaries of a replication set on the source, having locations both
inside and outside the replication set, the linked file will be mirrored to the target for only
those locations inside the replication set on the source, and those locations will be linked on
the target if all link locations exist on the same partition.
l If a hard link is created on the source linking a file outside the replication set to a location
inside the replication set, the linked file will be created on the target in the location defined
by the link inside the replication set and will be linked to any other locations for that file which
exist inside the replication set.
l If any hard link location is moved from outside the replication set into the replication set on
the source, the link will not be replicated to the target even if other link locations already
exist inside the replication set, but the linked file will be created on the target in the location
defined by the link.
l If any hard link location existing inside the replication set is moved within the replication set
on the source, the move will be replicated to the target and the link will be maintained if the
new link location does not cross partitions in the target path.
l If any hard link location existing inside the replication set is moved out of the replication set
on the source, that location is no longer part of the replication set and changes made to it will
not be replicated to the target.
l If a hard linked file is copied to a location inside the replication set on the source, the copy will
be replicated to the target.
l If a hard linked file has a location in the replication set and any of the operating system
commands, such as chmod or chown, are directed at that file from a location inside the
replication set, the modification to the file will be replicated to the target. Operations on hard
links outside of the replication set are not replicated.
l If a hard linked file has a location in the replication set and a write operation is directed at
that file from inside the replication set, the write operation will be replicated to the target.
Operations on hard links outside of the replication set are not replicated.
l If any hard link location existing inside the replication set is deleted on the source, that file or
link location will be deleted from the target.
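Because several of the hard link behaviors above depend on whether all link locations are on the same partition, it can help to confirm the link count, inode, and file system for each location before creating a job. The /data paths below are hypothetical.
# Both locations are assumed to be inside the replicated path /data.
ln /data/db/file.dat /data/backup/file.dat
# The matching inode and a link count of 2 confirm the two paths are one file.
stat -c 'inode=%i links=%h' /data/db/file.dat /data/backup/file.dat
# Hard links cannot span file systems, so check that both locations are on the same partition.
df /data/db /data/backup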
Chapter 3 Carbonite Replication Console
After you have installed the console, you can launch it by selecting Carbonite, Replication, Carbonite
Replication Console from your Programs, All Programs, or Apps, depending on your operating
system.
The Carbonite Replication Console is used to protect and monitor your servers and jobs. Each time you
open the Carbonite Replication Console, you start at the Servers page which allows you to view, edit,
add, remove, or manage the servers in your console. You can also create a new job from this page.
At the bottom of the Carbonite Replication Console, you will see a status bar. At the right side, you will
find links for Jobs with warnings and Jobs with errors. This lets you see quickly, no matter which
page of the console you are on, if you have any jobs that need your attention. Select either link to go to the
Jobs page, where the appropriate Filter: Jobs with warnings or Filter: Jobs with errors will
automatically be applied.
The first time you start the console, you will see the getting started screen tips on the Servers
page. These tips walk you through the basic steps of adding a server to your console, installing
Carbonite Migrate on that server, and creating a job on that server. If you do not want to see the
tips, close them. If you want to reopen the tips after you have closed them, select Help, Show
Getting Started Tips.
You can manually check for Carbonite Migrate updates by selecting Help, Check for Updates.
The Carbonite Migrate installation prohibits the console from being installed on Server Core.
Because Windows Server 2012 allows you to switch back and forth between Server Core and a full
installation, you may have the console files available on Server Core, if you installed Carbonite
Migrate while running in full operating system mode. In any case, you cannot run the Carbonite
Replication Console on Server Core.
l Automatic retry—This option will have the console automatically retry server login
credentials, after the specified retry interval, if the server login credentials are not accepted.
Keep in mind the following caveats when using this option.
l This is only for server credentials, not job credentials.
l A set of credentials provided for or used by multiple servers will not be retried for the
specified retry interval on any server if it fails on any of the servers using it.
l Verify your environment's security policy when using this option. Check your policies
for failed login lock outs and resets. For example, if your policy is to reset the failed
login attempt count after 30 minutes, set this auto-retry option to the same or a
slightly larger value as the 30 minute security policy to decrease the chance of a
lockout.
l Restarting the Carbonite Replication Console will automatically initiate an immediate
login.
l Entering new credentials will initiate an immediate login using the new credentials.
l Retry on this interval—If you have enabled the automatic retry, specify the length of time,
in minutes, to retry the login.
l If you are working with servers running Carbonite Migrate versions 5.1 or earlier, you will
need to use the legacy protocol port.
l Diagnostics—This section assists with console troubleshooting.
l Export Diagnostic Data—This button creates a raw data file that can be used for
debugging errors in the Carbonite Replication Console. Use this button as directed by
technical support.
l View Log File—This button opens the Carbonite Replication Console log file. Use this
button as directed by technical support. You can also select View, View Console Log File
to open the Carbonite Replication Console log file.
l View Data File—This button opens the Carbonite Replication Console data file. Use this
button as directed by technical support. You can also select View, View Console Data
File to open the Carbonite Replication Console data file.
l Location of install folders—Specify the parent directory where the installation
files are located. The parent directory can be local on your console machine or a UNC path.
l Windows—Specify the parent directory where the Windows installation file is
located. The default location is where the Carbonite Replication Console is installed,
which is \Program Files\Carbonite\Replication. The console will automatically use
the \x64 subdirectory which is populated with the Windows installation files when you
installed the console. If you want to use a different location, you must copy the \x64
folder and its installation file to the different parent directory that you specify.
l Linux—Specify the parent directory where the Linux installation files are located.
The default location is where the Carbonite Replication Console is installed, which is
\Program Files\Carbonite\Replication. The console will automatically use the \Linux
subdirectory, however that location will not be populated with the Linux installation
files when you installed the console. You must copy the Linux .deb or .rpm files from
your download to the \Linux subdirectory in your Carbonite Replication Console
installation location. Make sure you only have a single version of Linux installation
files. The push installation cannot determine which version to install if there are
multiple versions in the \Linux subdirectory. If you want to use a different location, you
must copy the \Linux folder and its installation files to the different parent directory
that you specify.
l Default Windows Installation Options—All of the fields under the Default Installation
Options section are used by the push installation on the Install page. The values specified here
will be the default options used for the push installation.
l Temporary folder for installation package—Specify a temporary location on the server
where you are installing Carbonite Migrate where the installation files will be copied and
run.
l Installation folder—Specify the location where you want to install Carbonite Migrate on
each server. This field is not used if you are upgrading an existing version of Carbonite
Migrate. In that case, the existing installation folder will be used.
l Queue folder—Specify the location where you want to store the Carbonite Migrate disk
queue on each server.
l Amount of system memory to use—Specify the maximum amount of memory, in MB,
that can be used for Carbonite Migrate processing.
If the servers you are pushing to do not have a C drive, make sure you update the folder
fields. The Carbonite Replication Console will not validate the folder fields, and if they are
set to a volume that does not exist, the installation will not start.
l Default Linux Installation Options—All of the fields under the Default Installation Options
section are used by the push installation on the Install page. The values specified here will be the
default options used for the push installation.
l Temporary folder for installation package—Specify a temporary location on the server
where you are installing Carbonite Migrate where the installation files will be copied and
run.
If you have uninstalled and reinstalled Carbonite Migrate on a server, you may see the server
twice on the Servers page because the reinstall assigns a new unique identifier to the server.
One of the servers (the original version) will show with the red X icon. You can safely remove
that server from the console.
Left pane
You can expand or collapse the left pane by clicking on the Server Highlights heading. This pane
allows you to organize your servers into folders. The servers displayed in the top right pane will change
depending on the server group folder selected in the left pane. Every server in your console session is
displayed when the All Servers group is selected. If you have created and populated server groups
under My Servers, then only the servers in the selected group will be displayed in the right pane.
Between the main toolbar and the left pane is a smaller toolbar. These toolbar options control the server
groups in the left pane.
Column 1 (Blank)
The first blank column indicates the machine type.
Carbonite Migrate source or target server which could be a physical server, virtual
machine, or a cluster node
vCenter server
ESX server
Offline server which means the console cannot communicate with this machine.
Any server icon with a red circle and white X overlay indicates an error, which means the
console can communicate with the machine, but it cannot communicate with Carbonite
Migrate on it.
Column 2 (Blank)
The second blank column indicates the security level.
Name
The name or IP address of the server.
Operating system
The operating system of the server. This field will not be displayed if the console cannot
connect to Carbonite Migrate on the server.
Product
The Carbonite Migrate products, if any, licensed for the server
Version
The product version information, if any
Add Servers
Adds a new server. This button leaves the Servers page and opens the Add Servers
page. See Adding servers on page 24.
Remove Server
Removes the server from the console.
Provide Credentials
Changes the login credentials that the Carbonite Replication Console uses to
authenticate to a server. This button opens the Provide Credentials dialog box where
you can specify the new account information. See Providing server credentials on page
26. You will remain on the Servers page after updating the server credentials.
Install
Installs or upgrades Carbonite Migrate on the selected server. This button opens the
Install page where you can specify installation options.
Uninstall
Uninstalls Carbonite Migrate on the selected server.
Launch Reporting
Launches the Reporting Service report viewer.
Activate Online
Activates licenses and applies the activation keys to servers in one step. You must have
Internet access for this process. You will not be able to activate a license that has
already been activated.
Refresh
Refreshes the status of the selected servers.
Search
Allows you to search the product or server name for items in the list that match the
criteria you have entered.
Overflow Chevron
Displays any toolbar buttons that are hidden from view when the window size is
reduced.
Protect
If you are licensed for Carbonite Availability, use the Protect option to create a
protection job for the selected server.
Migrate
If you are licensed for Carbonite Migrate or certain Carbonite Availability licenses, use
the Migrate option to create a migration job for the selected server.
Remove Server
Removes the server from the console.
Provide Credentials
Changes the login credentials that the Carbonite Replication Console uses to
authenticate to a server. This button opens the Provide Credentials dialog box where
you can specify the new account information. See Providing server credentials on page
26. You will remain on the Servers page after updating the server credentials.
Uninstall
Uninstalls Carbonite Migrate on the selected server.
Copy
Copies the information for the selected servers. You can then paste the server
information as needed. Each server is pasted on a new line, with the server information
being comma-separated.
Paste
Pastes a new-line separated list of servers into the console. Your copied list of servers
must be entered on individual lines with only server names or IP addresses on each
line.
Launch Reporting
Launches the Reporting Service report viewer.
Activate Online
Activates licenses and applies the activation keys to servers in one step. You must have
Internet access for this process. You will not be able to activate a license that has
already been activated.
Refresh
Refreshes the status of the selected servers.
l Server—Specify the name or IP address of the server or appliance to be added to the
console.
If you enter the source server's fully-qualified domain name, the Carbonite
Replication Console will resolve the entry to the server short name. If that short
name resides in two different domains, this could result in name resolution issues.
In this case, enter the IP address of the server.
If you are using a NAT environment, make sure you add your server to the
Carbonite Replication Console using the correct public or private IP address. The
name or IP address you use to add a server to the console is dependent on where
you are running the console. Specify the private IP address of any servers on the
same side of the router as the console. Specify the public IP address of any servers
on the other side of the router from the console.
l User name—Specify a local user that is a member of the dtadmin or dtmon security
group on the server.
l Password—Specify the password associated with the User name you entered.
l Management Service port—If you want to change the port used by the Double-Take
Management Service, disable Use default port and specify the port number you want to
use. This option is useful in a NAT environment where the console needs to be able to
communicate with the server using a specific port number. Use the public or private port
depending on where the console is running in relation to the server you are adding.
4. After you have specified the server or appliance information, click Add.
5. Repeat steps 3 and 4 for any other servers or appliances you want to add.
6. If you need to remove servers or appliances from the list of Servers to be added, highlight a
server and click Remove. You can also remove all of them with the Remove All button.
7. When your list of Servers to be added is complete, click OK.
Importing and exporting servers from a server and group configuration file
You can share the console server and group configuration between machines that have the Carbonite
Replication Console installed. The console server configuration includes the server group configuration,
server name, server communications ports, and other internal processing information.
Server name
The name or IP address of the server. If you have specified a reserved IP address, it
will be displayed in parentheses.
Operating system
The server's operating system version
Roles
The role of this server in your Carbonite Migrate environment. In some cases, a server
can have more than one role.
l Engine Role—Source or target server
l Reporting Service—Reporting Service server
Status
There are many different Status messages that keep you informed of the server
activity. Most of the status messages are informational and do not require any
administrator interaction. If you see error messages, check the rest of the server
details.
Activity
There are many different Activity messages that keep you informed of the server
activity. Most of the activity messages are informational and do not require any
administrator interaction. If you see error messages, check the rest of the server
details.
Connected via
The IP address and port the server is using for communications. You will also see the
Carbonite Migrate protocol being used to communicate with the server. The protocol will
be XML web services protocol (for servers running Carbonite Migrate version 5.2 or
later) or Legacy protocol (for servers running version 5.1 or earlier).
Version
The product version information
Access
The security level granted to the specified user
User name
The user account used to access the server
l Default address—On a server with multiple NICs, you can specify which address Carbonite
Migrate traffic will use. It can also be used on servers with multiple IP addresses on a single NIC. If
you change this setting, you must restart the Double-Take service for this change to take effect.
l Port—The server uses this port to send and receive commands and operations between
Carbonite Migrate servers. If you change the port, you must stop and restart the Double-Take
service.
l Encrypt network data—Use this option to encrypt your data before it is sent from the source to
the target. Both the source and target must be encryption capable (version 7.0.1 or later),
however this option only needs to be enabled on the source or target in order to encrypt data.
Keep in mind that all jobs from a source with this option enabled or to a target with this option
enabled will have the same encryption setting. Changing this option will cause jobs to auto-
reconnect and possibly remirror. The encryption method used is AES-256.
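If you change the default address or port, a quick connectivity check can confirm that the Double-Take service is reachable on the values you chose. The sketch below assumes port 1500, which is among the default communication ports listed in the requirements, and a hypothetical target name; substitute your own values.
PORT=1500                            # replace with the port configured on the server
ss -tlnp | grep ":$PORT"             # confirm the service is listening on the chosen address and port
nc -zv target1.example.com "$PORT"   # confirm the other server can be reached on that port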
The fields and buttons in the Licensing section will vary depending on your Carbonite
Replication Console configuration and the type of license keys you are using.
Click the FAQ link if you want more information about licensing and activation.
l Add license keys and activation keys—Your license key or activation key is a 24 character,
alpha-numeric key. You can change your license key without reinstalling, if your license changes.
To add a license key or activation key, type in the key or click Choose from inventory and select
a key from your console's license inventory. Then click Add.
The license inventory feature cannot be enabled if your service provider has restricted
access to it.
l Current license keys—The server's current license key information is displayed. To remove a
key, highlight it and click Remove. To copy a key, highlight it and click Copy. To replace a key,
enter a new key and click Add. If you are replacing an unexpired key with the same version and
serial number, you should not have to reactivate it, and any existing jobs will continue uninterrupted.
You will not be able to activate a license that has already been activated.
l Obtain activation key online, then activate—If you have Internet access, click the
hyperlink in the Activation section to take you to the web so that you can submit your
activation information. Complete and submit the activation form, and you will receive an e-
mail with the activation key. Activate your server by entering the activation key in the Add
license keys and activation keys field and clicking Add.
l Obtain activation key offline, then activate—If you do not have Internet access, go to
the activation website from another machine that does have Internet access to obtain your
activation key.
For Carbonite Migrate, license keys do not have a grace period and must be activated in
order to be used. Once the license has been activated, you will have a specific number of
days (generally 30 days) to complete your migration process, depending on your license
type.
l On Demand Licensing—If you are a service provider participating in the On Demand licensing
program, you can configure the subscription license for your target servers here. If you are not in
this program, you can skip this section. For the latest and complete details on On Demand, see
the help link in the On Demand web portal.
1. Specify your Service provider account number. The account number is displayed in the
upper right corner of the On Demand web portal.
2. Specify the Customer name. Use the customer name configured on the Customers list in
the On Demand web portal.
3. Select the appropriate Product that corresponds with the Carbonite Migrate product being
used.
4. If you are using a proxy server, select Enable On Demand Proxy and specify the Proxy
address using the value http://xxx.xxx.xxx.xxx:yyyy where xxx.xxx.xxx.xxx is the IP
address of your proxy server and yyyy is the port number.
5. Click Submit to activate the subscription license on the target.
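Before submitting the subscription license, you may want to confirm that the target can reach the Internet through the proxy you specified. The proxy address and test URL below are placeholders.
PROXY=http://192.0.2.10:3128                              # hypothetical proxy address and port
curl -x "$PROXY" -sI https://www.example.com | head -n 1    # a 2xx or 3xx response indicates the proxy is reachable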
l Log statistics automatically—If enabled, Carbonite Migrate statistics logging will start
automatically when Carbonite Migrate is started.
l Enable task command processing—Task command processing is a Carbonite Migrate
feature that allows you to insert and run tasks at various points during the replication of data.
Because the tasks are user-defined, you can achieve a wide variety of goals with this feature. For
example, you might insert a task to create a snapshot or run a backup on the target after a certain
segment of data from the source has been applied on the target. This allows you to coordinate a
point-in-time backup with real-time replication. Enable this option to enable task command
processing, however to insert your tasks, you must use the Carbonite Migrate scripting language.
See the Scripting Guide for more information. If you disable this option on a source server, you
can still submit tasks to be processed on a target, although task command processing must be
enabled on the target.
l Automatically reconnect during source initialization—Disk queues are user configurable
and can be extensive, but they are limited. If the amount of disk space specified for disk queuing is
met, additional data would not be added to the queue and data would be lost. To avoid any data
loss, Carbonite Migrate will automatically disconnect jobs when necessary. If this option is
enabled, Carbonite Migrate will automatically reconnect any jobs that it automatically
disconnected. These processes are called auto-disconnect and auto-reconnect and can happen
in the following scenarios.
l Source server restart—If your source server is restarted, Carbonite Migrate will
automatically reconnect any jobs that were previously connected. Then, if configured,
Carbonite Migrate will automatically remirror the data. This process is called auto-remirror.
The remirror re-establishes the target baseline to ensure data integrity, so disabling auto-
remirror is not advised.
l Exhausted queues on the source—If disk queuing is exhausted on the source,
Carbonite Migrate will automatically start disconnecting jobs. This is called auto-disconnect.
The transaction logs and system memory are flushed allowing Carbonite Migrate to begin
processing anew. The auto-reconnect process ensures that any jobs that were auto-
disconnected are automatically reconnected. Then, if configured, Carbonite Migrate will
automatically remirror the data. This process is called auto-remirror. The remirror re-
establishes the target baseline to ensure data integrity, so disabling auto-remirror is not
advised.
l Exhausted queues on the target—If disk queuing is exhausted on the target, the target
instructs the source to pause. The source will automatically stop transmitting data to the
target and will queue the data changes. When the target recovers, it will automatically tell
the source to resume sending data. If the target does not recover by the time the source
queues are exhausted, the source will auto-disconnect as described above, and the auto-reconnect
process will reconnect the jobs once the target recovers.
If you are experiencing frequent auto-disconnects, you may want to increase the amount
of disk space on the volume where the Carbonite Migrate queue is located or move the
disk queue to a larger volume.
If you have manually changed data on the target, for example if you were testing data on
the target, Carbonite Migrate is unaware of the target data changes. You must manually
remirror your data from the source to the target, overwriting the target data changes that
you caused, to ensure data integrity between your source and target.
l Behavior when automatically remirroring—Specify how Carbonite Migrate will perform the
mirror when it is automatically remirroring.
If you are using a database application or are protecting a domain controller, do not use
the compare file attributes only options unless you know for certain that you need it. With
database applications, and because domain controllers store their data in a database, it is
critical that all files, not just some of the files that might be newer, get mirrored.
l Do not compare files. Send the entire file.—Carbonite Migrate will not perform any
comparisons between the files on the source and target. All files will be mirrored to the
target, sending the entire file.
l Compare file attributes. Send the entire file.—Carbonite Migrate will compare file
attributes and will mirror those files that have different attributes, sending the entire file.
l Compare file attributes. Send the attributes and bytes that differ.—Carbonite
Migrate will compare file attributes and will mirror only the attributes and bytes that are
different.
l Compare file attributes and data. Send the attributes and bytes that differ.—
Carbonite Migrate will compare file attributes and the file data and will mirror only the
attributes and bytes that are different.
1. If data cannot immediately be transmitted to the target, it is stored in system memory. You can
configure how much system memory you want Carbonite Migrate to use for all of its processing.
2. When the allocated amount of system memory is full, new changed data bypasses the full system
memory and is queued directly to disk. Data queued to disk is written to a transaction log. Each
transaction log can store 5 MB worth of data. Once the log file limit has been reached, a new
transaction log is created. The logs can be distinguished by the file name which includes the target
IP address, the Carbonite Migrate port, the connection ID, and an incrementing sequence
number.
You may notice transaction log files that are not the defined size limit. This is because data
operations are not split. For example, if a transaction log has 10 KB left until the limit and
the next operation to be applied to that file is greater than 10 KB, a new transaction log file
will be created to store that next operation. Also, if one operation is larger than the defined
size limit, the entire operation will be written to one transaction log.
3. When system memory is full, the most recent changed data is added to the disk queue, as
described in step 2. This means that system memory contains the oldest data. Therefore, when
data is transmitted to the target, Carbonite Migrate pulls the data from system memory and sends
it. This ensures that the data is transmitted to the target in the same order it was changed on the
source.
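You can watch the disk queue grow and shrink by listing the transaction logs in the queue folder. The path below is a placeholder; use the Queue folder configured for your server.
QUEUE_DIR=/var/opt/dbtk-queue   # placeholder; set this to the configured Queue folder
ls -lh "$QUEUE_DIR"             # individual transaction logs, each up to roughly 5 MB
du -sh "$QUEUE_DIR"             # total disk space currently used by the disk queue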
l Queue folder—This is the location where the disk queue will be stored. Any changes made to the
queue location will not take effect until the Double-Take service has been restarted on the server.
When selecting the queue location, keep in mind the following caveats.
l Select a dedicated, non-boot volume.
l Do not select the same physical or logical volume as the data being replicated.
Although the read/write ratio on queue files will be 1:1, optimizing the disk for write activity will
benefit performance because the writes will typically be occurring when the server is under a high
load, and more reads will be occurring after the load is reduced. Accordingly, use a standalone
disk, mirrored (RAID 1) or non-parity striped (RAID 0) RAID set, and allocate more I/O adapter
cache memory to writes for best performance. A RAID 5 array will not perform as well as a
mirrored or non-parity striped set because writing to a RAID 5 array incurs the overhead of parity
calculations.
Scanning the Carbonite Migrate queue files for viruses can cause unexpected results. If
anti-virus software detects a virus in a queue file and deletes or moves it, data integrity on
the target cannot be guaranteed. As long as you have your anti-virus software configured
to protect the actual production data, the anti-virus software can clean, delete, or move an
infected file and the clean, delete, or move will be replicated to the target. This will keep
the target from becoming infected and will not impact the Carbonite Migrate queues.
l Alert at this queue usage—This is the percentage of the disk queue that must be in use to
trigger an alert message. By default, the alert will be generated when the queue reaches 50%.
l Number of replication packets per one mirror packet—You can specify the ratio of
replication packets to mirror packets that are placed in the source queue. The default value (5)
allows Carbonite Migrate to dynamically change the ratio as needed based on the amount of
replication data in queue. If you set a specific value other than the default (other than 5), the
specified value will be used. Changes to this setting will take effect for future jobs. Existing jobs will
have to be stopped and restarted to pick up the new ratio.
l Maximum pending mirror operations—This option is the maximum number of mirror
operations that are queued on the source. The default setting is 1000. If, during mirroring, the
mirror queued statistic regularly shows low numbers, for example, less than 50, this value can be
increased to allow Carbonite Migrate to queue more data for transfer.
l Size of mirror packets—This option determines the size of the mirror packets, in bytes, that
Carbonite Migrate transmits. The default setting is 65536 bytes. You may want to consider
increasing this value in a high latency environment (greater than 100 ms response times), or if
your data set contains mainly larger files, like databases.
l Calculate size of protected data upon connection—Specify if you want Carbonite Migrate to
determine the mirroring percentage calculation based on the amount of data being protected. If
you enable this option, the calculation will begin when mirroring begins. For the initial mirror, the
percentage will display after the calculation is complete, adjusting to the amount of the mirror that
has completed during the time it took to complete the calculation. Subsequent mirrors will initially
use the last calculated size and display an approximate percentage. Once the calculation is
complete, the percentage will automatically adjust down or up to indicate the amount that has
been completed. Disabling calculation will result in the mirror status not showing the percentage
complete or the number of bytes remaining to be mirrored.
The calculated amount of protected data may be slightly off if your data set contains
compressed or sparse files.
l Pause mirroring at this level—You can specify the maximum percentage of system
memory that can contain mirror data before the target signals the source to pause the sending of
mirror operations. The default setting is 20.
l Resume mirroring at this level—You can specify the minimum percentage of system
memory that can contain mirror data before the target signals the source to resume the sending of
mirror operations. The default setting is 15. You cannot set the resume value higher than the
pause value.
l Retry delay for incomplete operations—This option specifies the amount of time, in seconds,
before retrying a failed operation on the target. The default setting is 3.
l Logging folder—Specify the directory where each of the log files in this section are stored. The
default location is the directory where the Carbonite Migrate program files are installed.
l Messages & Alerts—These settings apply to the service log file.
l Maximum size—Specify the maximum size, in bytes, of the log file. The default size is
10485760 bytes (10 MB). Once the maximum has been reached, a new log file will be
created.
l Maximum number of files—Specify the maximum number of log files that are
maintained. The default is 5, and the maximum is 999. Once the maximum has been
reached, the oldest file will be overwritten.
l Verification—The verification log is created during the verification process and details which files
were verified as well as the files that are synchronized.
l File name—This field contains the base log file name for the verification process. The job
type and a unique identifier will be prefixed to the base log file name. For example, since the
default is DTVerify.log, the verification log for a files and folders job will be Files and
Folders_123456abcdef DTVerify.log.
l Maximum size—Specify the maximum size, in bytes, of the verification log file.
l Append—Enable this option if you want to append each verification process
to the same log file. If this check box is disabled, each verification process that is logged will
overwrite the previous log file. By default, this option is enabled.
l Statistics—The statistics log maintains connection statistics such as mirror bytes in queue or
replication bytes sent. This file is a binary file that is read by the DTStat utility. See the Reference
Guide for details on DTStat.
l Enable e-mail notification—This option enables the e-mail notification feature. Any specified
notification settings will be retained if this option is disabled.
l E-mail server—Specify the name of your SMTP mail server.
l Log on to e-mail server—If your SMTP server requires authentication, enable this option and
specify the User name and Password to be used for authentication. Your SMTP server must
support the LOGIN authentication method to use this feature. If your server supports a different
authentication method or does not support authentication, you may need to add the Carbonite
Migrate server as an authorized host for relaying e-mail messages. This option is not necessary if
you are sending exclusively to e-mail addresses that the SMTP server is responsible for.
l From address—Specify the e-mail address that you want to appear in the From field of each
Carbonite Migrate e-mail message. The address is limited to 256 characters.
l Send to—Specify the e-mail addresses that each Carbonite Migrate e-mail message should be
sent to. Enter the addresses as a comma or semicolon separated list. Each address is limited to
256 characters. You can add up to 256 e-mail addresses.
When you modify your e-mail notification settings, you will receive a test e-mail
summarizing your new settings. You can also test e-mail notification by clicking Test. By
default, the test will be run from the machine where the console is running. If desired, you
can send the test message to a different e-mail address by selecting Send To and
entering a comma or semicolon separated list of addresses. Modify the Message Text up
to 1024 characters, if necessary. Click Send to test the e-mail notification. The results will
be displayed in a message box.
If an error occurs while sending an e-mail, a message will be generated. This message
will not trigger another e-mail. Subsequent e-mail errors will not generate additional
messages. When an e-mail is sent successfully, a message will then be generated. If
another e-mail fails, one message will again be generated. This is a cyclical process
where one message will be generated for each group of failed e-mail messages, one for
each group of successful e-mail messages, one for the next group of failed messages,
and so on.
If you start and then immediately stop the Double-Take service, you may not get e-mail
notifications for the log entries that occur during startup.
By default, most anti-virus software blocks unknown processes from sending traffic on
port 25. You need to modify the blocking rule so that Carbonite Migrate e-mail messages
are not blocked.
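If test e-mails are not arriving, you can check basic SMTP connectivity from the Carbonite Migrate server itself. The server name and port below are placeholders; the grep simply shows whether the mail server advertises authentication in its greeting.
SMTP_SERVER=mail.example.com    # placeholder; use your SMTP server
SMTP_PORT=25
nc -zv "$SMTP_SERVER" "$SMTP_PORT"                             # confirm the port is reachable
curl -v "smtp://$SMTP_SERVER:$SMTP_PORT" 2>&1 | grep -i auth   # look for the AUTH LOGIN capability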
Start
This button starts the addition and scrolling of new messages in the window.
Pause
This button pauses the addition and scrolling of new messages in the window. This is
only for the Server logs window. The messages are still logged to their respective files
on the server.
Copy
This button copies the messages selected in the Server logs window to the Windows
clipboard.
Clear
This button clears the Server logs window. The messages are not cleared from the
respective files on the server. If you want to view all of the messages again, close and
reopen the Server logs window.
Filter
From the drop-down list, you can select to view all log messages or only those
messages from the Double-Take log or the Management Service log.
Time
This column in the table indicates the date and time when the message was logged.
Description
This column in the table displays the actual message that was logged.
Service
This column in the table indicates if the message is from the Double-Take log or the
Management Service log.
VMware Server
The name of the VMware server
Full Name
The full name of the VMware server
User Name
The user account being used to access the VMware server
Remove Server
Remove the VMware server from the console.
Provide Credentials
Edit credentials for the selected VMware server. When prompted, specify a user
account to access the VMware server.
l Operating system—Red Hat Enterprise Linux, Oracle Enterprise Linux, and CentOS
l Version—7.7 through 7.9
l Operating system—Red Hat Enterprise Linux, Oracle Enterprise Linux, and CentOS
l Version—8.2 through 8.4
l Operating system—CloudLinux
l Version—7.9
l Kernel type—Default
l Operating system—SUSE Linux Enterprise
l Version—15.0 through 15.2
l Kernel type—Default
l Notes—If you are planning to convert an existing file system to Btrfs, you must
delete any existing Carbonite Migrate jobs and re-create them after converting to
Btrfs.
l Operating system—Ubuntu
l Version—18.04.1 through 18.04.3
l Kernel type—Generic
l Operating system—Ubuntu
l Version—20.04.0
l Kernel type—Generic
For all operating systems except Ubuntu, the kernel version must match the expected
kernel for the specified release version. For example, if /etc/redhat-release declares the
system to be a Red Hat 7.5 system, the kernel that is installed must match that.
l Packages and services—Each Linux server must have the following packages and services
installed before you can install and use Carbonite Migrate. See your operating system
documentation for details on these packages and utilities.
l sshd (or the package that installs sshd)
l lsb
l parted
l dmidecode
l scp
l which
l libnsl (only required for Red Hat Enterprise Linux, Oracle Enterprise Linux, and CentOS)
Make sure you have additional disk space for Carbonite Migrate queuing, logging, and so
on.
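A quick way to confirm the packages and services above are present before installing is sketched below. Package names can differ slightly between distributions, and the rpm line applies only to Red Hat-based systems, so adjust as needed.
systemctl is-active sshd                       # the sshd service should be running
for cmd in parted dmidecode scp which; do
    command -v "$cmd" >/dev/null && echo "found: $cmd" || echo "MISSING: $cmd"
done
rpm -q redhat-lsb libnsl 2>/dev/null           # Red Hat-based systems; package names may vary by release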
l If you are using IPv6 on your servers, your console must be run from an IPv6 capable
machine.
l In order to properly resolve IPv6 addresses to a hostname, a reverse lookup entry should
be made in DNS.
l If you are using Carbonite Migrate over a WAN and do not have DNS name resolution, you
will need to add the host names to the local hosts file on each server running Carbonite
Migrate, as shown in the hosts file example after this list.
l Because of limitations in the way the Linux kernel handles IP address aliases, do not mix
subnets on the eth0 network interface. Failover should not cause problems in this
configuration, but you will lose IP addresses during failback. Therefore, if you must mix
subnets on a single interface, use eth1 or higher.
l Ubuntu Netplan is supported, however the network configuration on the source and target
should match. If you have a mix of network types (traditional, NetworkManager, or Netplan)
on the source and target, you may have to configure the networking on the target after
cutover.
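For the WAN-without-DNS case noted above, the hosts file entries might look like the following. The names and addresses are hypothetical; add a matching set on each server running Carbonite Migrate.
cat >> /etc/hosts << 'EOF'
192.0.2.10      source1.example.com   source1
198.51.100.20   target1.example.com   target1
EOF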
l NAT support—Carbonite Migrate supports NAT environments with the following caveats.
l Only IPv4 is supported.
l Only standalone servers are supported.
l Make sure you have added your server to the Carbonite Replication Console using the
correct public or private IP address. The name or IP address you use to add a server to the
console is dependent on where you are running the console. Specify the private IP address
of any servers on the same side of the router as the console. Specify the public IP address
of any servers on the other side of the router from the console.
l DNS failover and updates will depend on your configuration
l Ports—Port 1501 is used for localhost communication between the engine and management
service and should be opened inbound and outbound for both TCP and UDP in iptables. Ports
1500, 1505, 1506, 6325, and 6326 are used for component communication and must be opened
inbound and outbound for both TCP and UDP on any firewall that might be in use.
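One way to open the component communication ports listed above is sketched here using firewalld; use the equivalent iptables or ufw rules if your distribution manages its firewall differently.
for port in 1500 1505 1506 6325 6326; do
    firewall-cmd --permanent --add-port=${port}/tcp
    firewall-cmd --permanent --add-port=${port}/udp
done
firewall-cmd --reload
# Port 1501 is used for localhost communication between the engine and management
# service, so allow it for TCP and UDP in iptables as described above.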
l Name resolution—Your servers must have name resolution or DNS. The Carbonite Replication
Console must be able to resolve the target, and the target must be able to resolve all source
servers. For details on name resolution options, see your Linux documentation or online Linux
resources.
Volumes and folders with a green highlight are included completely. Volumes and folders
highlighted in light yellow are included partially, with individual files or folders included. If there is
no highlight, no part of the volume or folder is included. To modify the items selected, highlight a
volume, folder, or file and click Add Rule. Specify if you want to Include or Exclude the item.
Also, specify if you want the rule to be recursive, which indicates the rule should automatically be
applied to the subdirectories of the specified path. If you do not select Recursive, the rule will not
be applied to subdirectories.
You can also enter wildcard rules, however you should do so carefully. Rules are applied to files
that are closest in the directory tree to them. If you have rules that include multiple folders, an
exclusion rule with a wildcard will need to be added for each folder to which it should be applied. For
example, if you want to exclude all .log files from /home and your rules include /home,
/home/folder1, and /home/folder2, you would need to add the exclusion rule for the root and each
subfolder rule. So you will need to add exclude rules for /home/*.log, /home/folder1/*.log, and
/home/folder2/*.log.
If you need to remove a rule, highlight it in the list at the bottom and click Remove Rule. Be
careful when removing rules. Carbonite Migrate may create multiple rules when you are adding
directories. For example, if you add /home/admin to be included in protection, then /home will be
excluded. If you remove the /home exclusion rule, then the /home/admin rule will be removed
also.
If you return to this page using the Back button in the job creation workflow, your
Workload Types selection will be rebuilt, potentially overwriting any manual replication
rules that you specified. If you do return to this page, confirm your Workload Types and
Replication Rules are set to your desired settings before proceeding forward again.
l Current Servers—This list contains the servers currently available in your console
session. Servers that are not licensed for the workflow you have selected and those not
applicable to the workload type you have selected will be filtered out of the list. Select your
target server from the list. If the server you are looking for is not displayed, enable Show all
servers. The servers in red are not available for the source server or workload type you
have selected. Hover your mouse over an unavailable server to see a reason why this
server is unavailable.
l Find a New Server—If the server you need is not in the Current Servers list, click the
Find a New Server heading. From here, you can specify a server along with credentials
for logging in to the server. If necessary, you can click Browse to select a server from a
network drill-down list.
If you enter the target server's fully-qualified domain name, the Carbonite Replication
Console will resolve the entry to the server short name. If that short name resides in two
different domains, this could result in name resolution issues. In this case, enter the IP
address of the server.
When specifying credentials for a new server, specify a user that is a member of the local
dtadmin security group.
You may be prompted for a route from the target to the source. This route is used so the
target can communicate with the source to build job options. This dialog box will be
displayed only if needed.
General
For the Job name, specify a unique name for your job.
Failover Options
l Wait for user to initiate failover—The cutover process can wait for you to initiate it,
allowing you to control when cutover occurs. When a mirror is complete, the job will wait in the
Protecting state for you to manually initiate the cutover process. Disable this option if you
want cutover to occur immediately after the mirror is complete.
l Shutdown source server—Specify if you want to shut down the source server, if it is still
running, before the source is cut over to the target. This option prevents identity conflicts on
the network in those cases where the source and target are still both running and
communicating.
l Target Scripts—You can customize cutover by running scripts on the target. Scripts may
contain any valid Linux command, executable, or shell script file. The scripts are processed
using the same account running the Double-Take Management service. Examples of
functions specified in scripts include stopping services on the target before cutover because
they may not be necessary, stopping services on the target that need to be restarted after
cutover, and so on.
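A minimal target cutover script might look like the sketch below. The service names are hypothetical examples; the script only needs to contain valid Linux commands, executables, or shell script calls.
#!/bin/sh
# Example post-cutover script run under the Double-Take Management service account.
systemctl stop crond          # stop a service that is not needed during cutover
systemctl restart rsyslog     # restart a service that should pick up the new identity
logger "Carbonite Migrate cutover script finished"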
l Mirror Options—Choose a comparison method and whether to mirror the entire file or
only the bytes that differ in each file.
l Do not compare files. Send the entire file.—Carbonite Migrate will not perform
any comparisons between the files on the source and target. All files will be mirrored
to the target, sending the entire file. This option requires no time for comparison, but
the mirror time can be slower because it sends the entire file. However, it is useful for
configurations that have large data sets with millions of small files that are frequently
changing and it is more efficient to send the entire file. You may also need to use this
option if configuration management policies require sending the entire file.
l Compare file attributes. Send the entire file.—Carbonite Migrate will compare
file attributes and will mirror those files that have different attributes, sending the
entire file. This option is the fastest comparison method, but the mirror time can be
slower because it sends the entire file. However, it is useful for configurations that
have large data sets with millions of small files that are mostly static and not
changing. You may also need to use this option if configuration management policies
require sending the entire file.
l Compare file attributes. Send the attributes and bytes that differ.—Carbonite
Migrate will compare file attributes and will mirror only the attributes and bytes that
are different. This option is the fastest comparison method and fastest mirror speed.
Files that have not changed can be easily skipped. Also files that are open and
require a checksum mirror can be compared.
l Compare file attributes and data. Send the attributes and bytes that differ.—
Carbonite Migrate will compare file attributes and the file data and will mirror only the
attributes and bytes that are different. This comparison method is not as fast because
every file is compared, regardless of whether the file has changed or is open. However,
sending only the attributes and bytes that differ is the fastest mirror speed.
If a file is small enough that mirroring the entire file is faster than comparing it and
then mirroring it, Carbonite Availability will automatically mirror the entire file.
Orphaned file configuration is a per target configuration. All jobs to the same
target will have the same orphaned file configuration.
If delete orphaned files is enabled, carefully review any replication rules that
use wildcard definitions. If you have specified wildcards to be excluded from
protection, files matching those wildcards will also be excluded from
orphaned file processing and will not be deleted from the target. However, if
you have specified wildcards to be included in your protection, those files that
fall outside the wildcard inclusion rule will be considered orphaned files and
will be deleted from the target.
Network Route
By default, Carbonite Migrate will select an IP address on the target for transmissions. If desired,
specify an alternate route on the target that the data will be transmitted through. This allows you to
select a different route for Carbonite Migrate traffic. For example, you can separate regular
network traffic and Carbonite Migrate traffic on a machine with multiple IP addresses. You can
also select or manually enter a public IP address (which is the public IP address of the server's
router) if you are using a NAT environment.
If you change the IP address on the target which is used for the target route, you will be
unable to edit the job. If you need to make any modifications to the job, it will have to be
deleted and re-created.
l Mappings—Specify the location on the target where the replica of the source data will be
stored. By default, the replica source data will be stored in the same directory structure on
the target. Make sure you update this location if you are protecting multiple sources or jobs
to the same target. You have two pre-defined locations as well as a custom option that
allows you to set your path.
l All To One—Click this button to set the mapping so that the replica source data will
be stored on a single volume on the target. The pre-defined path is /source_
name/volume_name. If you are protecting multiple volumes on the source, each
volume would be stored on the same volume on the target.
l One To One—Click this button to set the mapping so that the replica source data will
be stored in the same directory structure on the target. For example, /data and /home
will be stored in /data and /home, respectively, on the target.
l Custom Location—If the pre-defined options do not store the data in a location that
is appropriate for your network operations, you can specify your own custom location
where the replica source data will be stored. Click the Target Path and edit it,
selecting the appropriate location.
If you are protecting system state data, you must select the All to One mapping or
specify a customized location in order to avoid sharing violations. Keep in mind that
this mapping will avoid sharing violations on the target; however, during a
restoration, you will get sharing violations on the source because the restoration
mapping is one to one and your system state files will be in use on the source you
are restoring to. In this case, restoration will never complete. If you will need to
restore data and you must protect system state data, you should use a full server
job.
To help reduce the amount of bandwidth needed to transmit Carbonite Migrate data, compression
allows you to compress data prior to transmitting it across the network. In a WAN environment this
provides optimal use of your network resources. If compression is enabled, the data is
compressed before it is transmitted from the source. When the target receives the compressed
data, it decompresses it and then writes it to disk. You can set the level from Minimum to
Maximum to suit your needs.
Keep in mind that the process of compressing data impacts processor usage on the source. If you
notice an impact on performance while compression is enabled in your environment, either adjust
to a lower level of compression, or leave compression disabled. Use the following guidelines to
determine whether you should enable compression.
l If data is being queued on the source at any time, consider enabling compression.
l If the server CPU utilization is averaging over 85%, be cautious about enabling
compression.
l The higher the level of compression, the higher the CPU utilization will be.
l Do not enable compression if most of the data is inherently compressed. Many image (.jpg,
.gif) and media (.wmv, .mp3, .mpg) files, for example, are already compressed. Some
image files, such as .bmp and .tif, are not compressed, so enabling compression would be
beneficial for those types.
l Compression may improve performance even in high-bandwidth environments.
l Do not enable compression in conjunction with a WAN Accelerator. Use one or the other to
compress Carbonite Migrate data.
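As a rough way to check the CPU utilization guideline above before enabling compression, you can sample the source with a tool such as mpstat. This is only a sketch and assumes the sysstat package, which provides mpstat, is installed.
    # Sample CPU usage once per second for 60 seconds and report the average busy percentage.
    # The last field of the mpstat "Average: all" line is the idle percentage, so busy = 100 - idle.
    mpstat 1 60 | awk '$1 == "Average:" && $2 == "all" { printf "Average CPU busy: %.1f%%\n", 100 - $NF }'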
All jobs from a single source connected to the same IP address on a target will share the
same compression configuration.
Bandwidth limitations are available to restrict the amount of network bandwidth used for
Carbonite Migrate data transmissions. When a bandwidth limit is specified, Carbonite Migrate
never exceeds that allotted amount. The bandwidth not in use by Carbonite Migrate is available
for all other network traffic.
All jobs from a single source connected to the same IP address on a target will share the
same bandwidth configuration.
l Do not limit bandwidth—Carbonite Migrate will transmit data using 100% bandwidth
availability.
l Use a fixed limit—Carbonite Migrate will transmit data using a limited, fixed bandwidth.
Select a Preset bandwidth limit rate from the common bandwidth limit values. The
Bandwidth field will automatically update to the bytes per second value for your selected
bandwidth. This is the maximum amount of data that will be transmitted per second. If
desired, modify the bandwidth using a bytes per second value. The minimum limit should be
3500 bytes per second.
l Use scheduled limits—Carbonite Migrate will transmit data using a dynamic bandwidth
based on the schedule you configure. Bandwidth will not be limited during unscheduled
times.
l New—Click New to create a new scheduled bandwidth limit. Specify the following
information.
l Daytime entry—Select this option if the start and end times of the bandwidth
window occur in the same day (between 12:01 AM and midnight). The start
time must occur before the end time.
l Overnight entry—Select this option if the bandwidth window begins on one
day and continues past midnight into the next day. The start time must be later
than the end time, for example 6 PM to 6 AM.
l Day—Enter the day on which the bandwidth limiting should occur. You can
pick a specific day of the week, Weekdays to have the limiting occur Monday
through Friday, Weekends to have the limiting occur Saturday and Sunday, or
Every day to have the limiting repeat on all days of the week.
l Start time—Enter the time to begin bandwidth limiting.
l End time—Enter the time to end bandwidth limiting.
l Preset bandwidth—Select a bandwidth limit rate from the common
bandwidth limit values. The Bandwidth field will automatically update to the
bytes per second value for your selected bandwidth.
l Bandwidth—If desired, modify the bandwidth using a bytes per second value.
If you change your job option from Use scheduled limits to Do not limit
bandwidth or Use a fixed limit, any schedule that you created will be preserved.
That schedule will be reused if you change your job option back to Use scheduled
limits.
You can manually override a schedule after a job is established by selecting Other
Job Options, Set Bandwidth. If you select No bandwidth limit or Fixed
bandwidth limit, that manual override will be used until you go back to your
schedule by selecting Other Job Options, Set Bandwidth, Scheduled
bandwidth limit. For example, if your job is configured to use a daytime limit, you
would be limited during the day, but not at night. But if you override that, your
override setting will continue both day and night, until you go back to your schedule.
See the Managing and controlling jobs section for your job type for more
information on the Other Job Options.
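If you prefer to enter the Bandwidth value directly rather than use a preset, you can convert a target rate to a bytes per second value with simple shell arithmetic. The sketch below assumes a decimal conversion of 1 megabit to 1,000,000 bits; the console presets may round differently.
    # Convert a desired limit in megabits per second to bytes per second.
    MBITS=100                        # hypothetical 100 Mbit per second cap
    echo $(( MBITS * 1000000 / 8 ))  # prints 12500000 bytes per second
Remember that the resulting value should not be below the 3500 bytes per second minimum.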
Once a job is created, do not change the name of underlying hardware components used in the
job, such as volume names, network adapter names, or virtual switch names. Any
component used by name in your job must continue to use that name throughout the lifetime of
the job.
Column 1 (Blank)
The first blank column indicates the state of the job.
A green circle with a white checkmark indicates the job is in a healthy state. No
action is required.
A yellow triangle with a black exclamation point indicates the job is in a pending or
warning state. This icon is also displayed on any server groups that you have created
that contain a job in a pending or warning state. Carbonite Migrate is working or waiting
on a pending process or attempting to resolve the warning state.
A red circle with a white X indicates the job is in an error state. This icon is also
displayed on any server groups that you have created that contain a job in an error
state. You will need to investigate and resolve the error.
Name
The name of the job
Target data state
l OK—The data on the target is in a good state.
l Mirroring—The target is in the middle of a mirror process. The data will not be in a
good state until the mirror is complete.
l Mirror Required—The data on the target is not in a good state because a remirror
is required. This may be caused by an incomplete or stopped mirror or an operation
may have been dropped on the target.
l Busy—The source is low on memory causing a delay in getting the state of the data
on the target.
l Not Loaded—Carbonite Migrate target functionality is not loaded on the target
server. This may be caused by a license key error.
l Not Ready—The Linux drivers have not yet completed loading on the target.
l Unknown—The console cannot determine the status.
Mirror remaining
The total number of mirror bytes that are remaining to be sent from the source to the
target.
Mirror skipped
The total number of bytes that have been skipped when performing a difference mirror.
These bytes are skipped because the data is not different on the source and target.
Replication queue
The total number of replication bytes in the source queue
Disk queue
The amount of disk space being used to queue data on the source
Recovery point latency
The length of time replication is behind on the target compared to the source. This is the
time period of replication data that would be lost if a failure were to occur at the current
time. This value represents replication data only and does not include mirroring data. If
you are mirroring and fail over, the data on the target will be at least as far behind as the
recovery point latency. It could potentially be further behind depending on the
circumstances of the mirror. If mirroring is idle and you fail over, the data will only be as
far behind as the recovery point latency time.
Delete
Stops (if running) and deletes the selected jobs.
Provide Credentials
Changes the login credentials that the job (which is on the target machine) uses to
authenticate to the servers in the job. This button opens the Provide Credentials dialog
box where you can specify the new account information and which servers you want to
update.
Start
Starts or resumes the selected jobs.
If you have previously stopped protection, the job will restart mirroring and replication.
If you have previously paused protection, the job will continue mirroring and replication
from where it left off, as long as the Carbonite Migrate queue was not exhausted during
the time the job was paused. If the Carbonite Migrate queue was exhausted during the
time the job was paused, the job will restart mirroring and replication.
Also if you have previously paused protection, all jobs from the same source to the
same IP address on the target will be resumed.
Stop
Stops the selected jobs. The jobs remain available in the console, but there will be no
mirroring or replication data transmitted from the source to the target. Mirroring and
replication data will not be queued on the source while the job is stopped, requiring a
remirror when the job is restarted. The type of remirror will depend on your job settings.
Take Snapshot
Snapshots are not applicable to migration jobs.
Manage Snapshots
Snapshots are not applicable to migration jobs.
Failover or Cutover
Starts the cutover process. See Cutting over files and folders migration jobs on page 85
for the process and details of cutting over a files and folders migration job.
Failback
Starts the failback process. Failback does not apply to migration jobs.
Restore
Starts the restoration process. Restore does not apply to migration jobs.
Reverse
Reverses protection. Reverse protection does not apply to migration jobs.
l Verify—Even if you have scheduled the verification process, you can run it manually
any time a mirror is not in progress.
l Report only—Select this option if you only want to generate a verification
report. With this option, no data that is found to be different will be mirrored to
the target. Choose how you want the verification to compare the files.
l Report and mirror files—Select this option if you want to generate a
verification report and mirror data that is different to the target. Select the
comparison method and type of mirroring you want to use. See the previous
mirroring methods described under Mirror Options.
l Set Bandwidth—You can manually override bandwidth limiting settings configured
for your job at any time.
l No bandwidth limit—Carbonite Migrate will transmit data using 100%
bandwidth availability.
l Fixed bandwidth limit—Carbonite Migrate will transmit data using a limited,
fixed bandwidth. Select a Preset bandwidth limit rate from the common
bandwidth limit values. The Bandwidth field will automatically update to the
bytes per second value for your selected bandwidth. This is the maximum
amount of data that will be transmitted per second. If desired, modify the
bandwidth using a bytes per second value. The minimum limit should be 3500
bytes per second.
l Scheduled bandwidth limit—If your job has a configured scheduled
bandwidth limit, you can enable that schedule with this option.
l Delete Orphans—Even if you have enabled orphan file removal during your mirror
and verification processes, you can manually remove them at any time.
Overflow Chevron
Displays any toolbar buttons that are hidden from view when the window size is
reduced.
Job name
The name of the job
Job type
Each job type has a unique job type name. This job is a Files and Folders Migration job.
For a complete list of all job type names, press F1 to view the Carbonite Replication
Console online help.
Health
If you have specified replication rules that exclude a volume at the root, that volume will be
incorrectly added as an inclusion if you edit the job after it has been established. If you
need to edit your job, modify the replication rules to make sure they include the proper
inclusion and exclusion rules that you want.
3. If you want to modify the workload items or replication rules for the job, click Edit workload or
replication rules. Modify the Workload item you are protecting, if desired. Additionally, you can
modify the specific Replication Rules for your job.
Click OK to return to the Edit Job Properties page.
If you remove data from your workload and that data has already been sent to the target,
you will need to manually remove that data from the target. Because the data you
removed is no longer included in the replication rules, Carbonite Migrate orphan file
detection cannot remove the data for you. Therefore, you have to remove it manually.
Because the job log window communicates with the target server, if the console loses
communication with the target server after the job log window has already been opened, the job
log window will display an error.
The following table identifies the controls and the table columns in the Job logs window.
Start
This button starts the addition and scrolling of new messages in the window.
Pause
This button pauses the addition and scrolling of new messages in the window. This is
only for the Job logs window. The messages are still logged to their respective files on
the server.
Clear
This button clears the Job logs window. The messages are not cleared from the
respective files on the server. If you want to view all of the messages again, close and
reopen the Job logs window.
Time
This column in the table indicates the date and time when the message was logged.
Description
This column in the table displays the actual message that was logged.
the target. The source may be automatically shut down if it is still running, depending on
your job configuration.
l Perform test cutover—This option is not applicable to files and folders migration jobs.
l Apply data in target queues before failover or cutover begins—All of the data in the target
queue will be applied before cutover begins. The advantage to this option is that all of the
data that the target has received will be applied before cutover begins. The disadvantage to
this option is that, depending on the amount of data in queue, the amount of time to apply all of
the data could be lengthy.
l Discard data in the target queues and failover or cutover immediately—All of the
data in the target queue will be discarded and cutover will begin immediately. The
advantage to this option is that cutover will occur immediately. The disadvantage is that any
data in the target queue will be lost.
4. When you are ready to begin cutover, click Failover.
l Operating system—Red Hat Enterprise Linux, Oracle Enterprise Linux, and CentOS
l Version—7.7 through 7.9
l Operating system—Red Hat Enterprise Linux, Oracle Enterprise Linux, and CentOS
l Version—8.2 through 8.4
l Operating system—CloudLinux
l Version—7.9
l Kernel type—Default
l Kernel type—Default
l Notes—If you are planning to convert an existing file system to Btrfs, you must
delete any existing Carbonite Migrate jobs and re-create them after converting to
Btrfs.
l Operating system—SUSE Linux Enterprise
l Version—15.0 through 15.2
l Kernel type—Default
l Notes—If you are planning to convert an existing file system to Btrfs, you must
delete any existing Carbonite Migrate jobs and re-create them after converting to
Btrfs.
l Kernel type—Generic
l Operating system—Ubuntu
l Version—18.04.1 through 18.04.3
l Kernel type—Generic
l Operating system—Ubuntu
l Version—20.04.0
l Kernel type—Generic
For all operating systems except Ubuntu, the kernel version must match the expected
kernel for the specified release version. For example, if /etc/redhat-release declares the
system to be a Red Hat 7.5 system, the kernel that is installed must match that.
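A quick way to confirm that the installed kernel matches the declared release is to compare the release file with the running kernel. This is only a sketch for RPM-based distributions; the exact kernel string that is expected depends on the distribution and errata level.
    # Show the declared release and the running kernel so they can be compared.
    cat /etc/redhat-release     # Red Hat, Oracle Linux, CentOS, or CloudLinux
    uname -r                    # running kernel version
    rpm -q kernel               # installed kernel packages on RPM-based systems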
l Packages and services—Each Linux server must have the following packages and services
installed before you can install and use Carbonite Migrate. See your operating system
documentation for details on these packages and utilities.
l sshd (or the package that installs sshd)
l lsb
l parted
l dmidecode
l scp
l which
l libnsl (only required for Red Hat Enterprise Linux, Oracle Enterprise Linux, and CentOS
Make sure you have additional disk space for Carbonite Migrate queuing, logging, and so
on.
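A simple way to confirm that the packages and utilities listed above are present is to probe for each command or library before installing Carbonite Migrate. This is only a sketch; package names vary by distribution, so see your operating system documentation for the exact package that provides each item.
    # Report any required utilities that cannot be found on this server.
    PATH=$PATH:/usr/sbin:/sbin   # sshd typically lives in a sbin directory
    for cmd in sshd parted dmidecode scp which lsb_release; do
        command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
    done
    # libnsl is a library rather than a command, so check the linker cache instead.
    ldconfig -p | grep -q libnsl || echo "missing: libnsl"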
l Server name—Carbonite Migrate includes Unicode file system support, but your server name
must still be in ASCII format. Additionally, all Carbonite Migrate servers must have a unique
server name.
l Protocols and networking—Your servers must meet the following protocol and networking
requirements.
l Your servers must have TCP/IP with static IP addressing.
l IPv4 only configurations are supported. IPv4 and IPv6 are supported in combination.
l If you are using IPv6 on your servers, your console must be run from an IPv6 capable
machine.
l In order to properly resolve IPv6 addresses to a hostname, a reverse lookup entry should
be made in DNS.
l If you are using Carbonite Migrate over a WAN and do not have DNS name resolution, you
will need to add the host names to the local hosts file on each server running Carbonite
Migrate. (A sample hosts file entry is sketched after this list.)
l Ubuntu Netplan is supported, however the network configuration on the source and target
should match. If you have a mix of network types (traditional, NetworkManager, or Netplan)
on the source and target, you may have to configure the networking on the target after
cutover.
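If you are working over a WAN without DNS, the hosts file entries mentioned in this list might look like the following sketch. The names and addresses are placeholders; add the equivalent entries for your own source and target on each server running Carbonite Migrate.
    # Append sample source and target entries to /etc/hosts (run as root).
    cat >> /etc/hosts <<'EOF'
    192.0.2.10    linuxsource.example.com    linuxsource
    192.0.2.20    linuxtarget.example.com    linuxtarget
    EOF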
l NAT support—Carbonite Migrate supports NAT environments with the following caveats.
l Only IPv4 is supported.
l Only standalone servers are supported.
l Make sure you have added your server to the Carbonite Replication Console using the
correct public or private IP address. The name or IP address you use to add a server to the
console is dependent on where you are running the console. Specify the private IP address
of any servers on the same side of the router as the console. Specify the public IP address
of any servers on the other side of the router as the console.
l DNS failover and updates will depend on your configuration
l Name resolution—Your servers must have name resolution or DNS. The Carbonite Replication
Console must be able to resolve the target, and the target must be able to resolve all source
servers. For details on name resolution options, see your Linux documentation or online Linux
resources.
l Target drivers—Install on the source any drivers that are required on the target after failover.
For example, you need to install on the source any NIC drivers that will be required on the target
after failover.
l Resolve any maintenance updates on the source that may require the server to be rebooted
before cutover. If cutover occurs before the required reboot, the target may not operate properly
or it may not boot.
l Processors—There are no limits on the number or speed of the processors, but the source and
the target should have at least the same number of processors. If the target has fewer processors
or slower speeds than the source, there will be performance impacts for the users after cutover.
l Memory—The target memory should be within 25% (plus or minus) of the source. If the target
has much less memory than the source, there will be performance impacts for the users after
cutover.
l Network adapters—You must map at least one NIC from the source to one NIC on the target. If
you have NICs on the source that are not being used, it is best to disable them. If the source has
more NICs than the target, some of the source NICs will not be mapped to the target. Therefore,
the IP addresses associated with those NICs will not be available after cutover. If there are more
NICs on the target than the source, the additional NICs will still be available after cutover and will
retain their pre-cutover network settings.
l File system format—The source and the target must have the same file system format on each
server. For example, if you have Ext3 on the source, you cannot have XFS on the target. In that
case, the target must also be Ext3. (A quick way to compare these values is sketched after this
list.)
l Volumes—There are no limits to the number of volumes you can migrate on the source, although
you are bound by operating system limits.
For each non-system volume you are migrating on the source, the target must have a matching
volume. For example, if you are migrating /data and /home on the source, the target must also
have /data and /home. Additional target volumes are preserved and available after cutover with all
data still accessible.
The system volumes / and /boot do not have this matching volume limitation. If you have / and
/boot on different volumes on the source, they can exist on a single volume on the target. If you
have / and /boot on the same volume on the source, they can exist on different volumes on the
target.
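A quick way to gather the processor, memory, file system, and volume details discussed in this list is to run a few standard commands on both the source and the target and compare the output. This is only a sketch using common utilities; it does not replace the validation that Carbonite Migrate performs when the job is created.
    # Collect the values to compare between the source and the target.
    echo "Processors: $(nproc)"
    echo "Memory (MB): $(free -m | awk '/^Mem:/ {print $2}')"
    # File system format and size for each mounted volume.
    df -T -x tmpfs -x devtmpfs
    # Block device and mount layout, including / and /boot.
    lsblk -f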
3. By default, Carbonite Migrate selects the system and boot volumes for migration. You will be
unable to deselect these volumes. Select any other volumes on the source that you want to
migrate.
If desired, click the Replication Rules heading and expand the volumes under Folders. You will
see that Carbonite Migrate automatically excludes particular files that cannot be used during the
migration. If desired, you can exclude other files that you do not want to migrate, but be careful
when excluding data. Excluded volumes, folders, and/or files may compromise the integrity of
your installed applications.
Volumes and folders with a green highlight are included completely. Volumes and folders
highlighted in light yellow are included partially, with individual files or folders included. If there is
no highlight, no part of the volume or folder is included. To modify the items selected, highlight a
volume, folder, or file and click Add Rule. Specify if you want to Include or Exclude the item.
Also, specify if you want the rule to be recursive, which indicates the rule should automatically be
applied to the subdirectories of the specified path.
If you return to this page using the Back button in the job creation workflow, your
Workload Types selection will be rebuilt, potentially overwriting any manual replication
rules that you specified. If you do return to this page, confirm your Workload Types and
Replication Rules are set to your desired settings before proceeding forward again.
l Current Servers—This list contains the servers currently available in your console
session. Servers that are not licensed for the workflow you have selected and those not
applicable to the workload type you have selected will be filtered out of the list. Select your
target server from the list. If the server you are looking for is not displayed, enable Show all
servers. The servers in red are not available for the source server or workload type you
have selected.
If you enter the target server's fully-qualified domain name, the Carbonite Replication
Console will resolve the entry to the server short name. If that short name resides in two
different domains, this could result in name resolution issues. In this case, enter the IP
address of the server.
When specifying credentials for a new server, specify a user that is a member of the local
dtadmin security group.
You may be prompted for a route from the target to the source. This route is used so the
target can communicate with the source to build job options. This dialog box will be
displayed only if needed.
7. You have many options available for your server migration job. Configure those options that are
applicable to your environment.
Go to each page identified below to see the options available for that section of the Set Options
page. After you have configured your options, continue with the next step on page 103.
l General on page 96
l Failover Options on page 97
l Failover Identity on page 98
l Network Adapter Options on page 99
l Mirror, Verify & Orphaned Files on page 99
l Network Route on page 100
l Compression on page 101
l Bandwidth on page 102
General
For the Job name, specify a unique name for your job.
Failover Options
l Wait for user to initiate failover—The cutover process can wait for you to initiate it,
allowing you to control when cutover occurs. When a cutover occurs, the job will wait in the
Protecting state for you to manually initiate the cutover process. Disable this option if you
want cutover to occur immediately after the mirror is complete.
l Shutdown source server—Specify if you want to shut down the source server, if it is still
running, before the source is cut over to the target. This option prevents identity conflicts on
the network in those cases where the source and target are both still running and
communicating.
l Target Scripts—You can customize cutover by running scripts on the target. Scripts may
contain any valid Linux command, executable, or shell script file. The scripts are processed
using the same account running the Double-Take Management service. Examples of
functions specified in scripts include stopping services on the target before cutover because
they may not be necessary, stopping services on the target that need to be restarted with
the source’s machine name and/or IP address, starting services or loading applications that
are in an idle, standby mode waiting for cutover to occur, notifying the administrator before
and after cutover occurs, and so on. There are two types of cutover scripts.
l Pre-failover script—This script runs on the target at the beginning of the cutover
process. Specify the full path and name of the script file.
l Delay until script completes—Enable this option if you want to delay the cutover
process until the associated script has completed. If you select this option, make sure
your script handles errors, otherwise the cutover process may never complete if the
process is waiting on a script that cannot complete.
l Post-failover script—This script runs on the target at the end of the cutover
process. Specify the full path and name of the script file.
l Arguments—Specify a comma-separated list of valid arguments required to
execute the script.
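As an illustration of the post-failover script described above, the sketch below starts a service that was left idle on the target and notifies an administrator. The service name and e-mail address are placeholders; adapt the commands to your environment and make sure the script exits even on error so that cutover is not delayed.
    #!/bin/bash
    # Hypothetical post-failover script run on the target at the end of the cutover process.
    # Start an application service that was left disabled while the target was idle.
    systemctl start myapp.service || logger "post-failover: failed to start myapp.service"
    # Notify the administrator that cutover has completed (assumes a local mail command is configured).
    echo "Cutover to $(hostname) completed at $(date)" | mail -s "Carbonite Migrate cutover complete" admin@example.com
    exit 0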
l Apply source network configuration to the target—If you select this option, your
source IP addresses will cut over to the target. If your target is on the same subnet as the
source (typical of a LAN environment), you should select this option.
Do not apply the source network configuration to the target in a WAN environment
unless you have a VPN infrastructure so that the source and target can be on the
same subnet, in which case IP address failover will work the same as a LAN
configuration. If you do not have a VPN, you will have to reconfigure the routers by
moving the source's subnet from the source's physical network to the target's
physical network. There are a number of issues to consider when designing a
solution that requires router configuration to achieve IP address failover. Since the
route to the source's subnet will be changed at failover, the source server must be
the only system on that subnet, which in turn requires all server communications to
pass through a router. Additionally, it may take several minutes or even hours for
routing tables on other routers throughout the network to converge.
l Retain target network configuration—If you select this option, the target will retain all of
its original IP addresses. If your target is on a different subnet (typical of a WAN or
NAT environment), you should select this option.
For Map source network adapters to target network adapters, specify how you want the IP
addresses associated with each NIC on the source to be mapped to a NIC on the target. Do not
mix public and private networks.
l Mirror Options—Choose a comparison method and whether to mirror the entire file or
only the bytes that differ in each file.
l Do not compare files. Send the entire file.—Carbonite Migrate will not perform
any comparisons between the files on the source and target. All files will be mirrored
to the target, sending the entire file. This option requires no time for comparison, but
the mirror time can be slower because it sends the entire file. However, it is useful for
configurations that have large data sets with millions of small files that are frequently
changing and it is more efficient to send the entire file. You may also need to use this
option if configuration management policies require sending the entire file.
l Compare file attributes. Send the attributes and bytes that differ.—Carbonite
Migrate will compare file attributes and will mirror only the attributes and bytes that
are different. This option is the fastest comparison method and fastest mirror speed.
Files that have not changed can be easily skipped. Also files that are open and
require a checksum mirror can be compared.
l Compare file attributes and data. Send the attributes and bytes that differ.—
Carbonite Migrate will compare file attributes and the file data and will mirror only the
attributes and bytes that are different. This comparison method is not as fast because
every file is compared, regardless of whether the file has changed or is open.
However, sending only the attributes and bytes that differ is the fastest mirror speed.
If a file is small enough that mirroring the entire file is faster than comparing it and
then mirroring it, Carbonite Availability will automatically mirror the entire file.
Network Route
By default, Carbonite Migrate will select an IP address on the target for transmissions. If desired,
specify an alternate route on the target that the data will be transmitted through. This allows you to
select a different route for Carbonite Migrate traffic. For example, you can separate regular
network traffic and Carbonite Migrate traffic on a machine with multiple IP addresses. You can
also select or manually enter a public IP address (which is the public IP address of the server's
router) if you are using a NAT environment.
To help reduce the amount of bandwidth needed to transmit Carbonite Migrate data, compression
allows you to compress data prior to transmitting it across the network. In a WAN environment this
provides optimal use of your network resources. If compression is enabled, the data is
compressed before it is transmitted from the source. When the target receives the compressed
data, it decompresses it and then writes it to disk. You can set the level from Minimum to
Maximum to suit your needs.
Keep in mind that the process of compressing data impacts processor usage on the source. If you
notice an impact on performance while compression is enabled in your environment, either adjust
to a lower level of compression, or leave compression disabled. Use the following guidelines to
determine whether you should enable compression.
l If data is being queued on the source at any time, consider enabling compression.
l If the server CPU utilization is averaging over 85%, be cautious about enabling
compression.
l The higher the level of compression, the higher the CPU utilization will be.
l Do not enable compression if most of the data is inherently compressed. Many image (.jpg,
.gif) and media (.wmv, .mp3, .mpg) files, for example, are already compressed. Some
image files, such as .bmp and .tif, are not compressed, so enabling compression would be
beneficial for those types.
l Compression may improve performance even in high-bandwidth environments.
l Do not enable compression in conjunction with a WAN Accelerator. Use one or the other to
compress Carbonite Migrate data.
All jobs from a single source connected to the same IP address on a target will share the
same compression configuration.
Bandwidth limitations are available to restrict the amount of network bandwidth used for
Carbonite Migrate data transmissions. When a bandwidth limit is specified, Carbonite Migrate
never exceeds that allotted amount. The bandwidth not in use by Carbonite Migrate is available
for all other network traffic.
All jobs from a single source connected to the same IP address on a target will share the
same bandwidth configuration.
l Do not limit bandwidth—Carbonite Migrate will transmit data using 100% bandwidth
availability.
l Use a fixed limit—Carbonite Migrate will transmit data using a limited, fixed bandwidth.
Select a Preset bandwidth limit rate from the common bandwidth limit values. The
Bandwidth field will automatically update to the bytes per second value for your selected
bandwidth. This is the maximum amount of data that will be transmitted per second. If
desired, modify the bandwidth using a bytes per second value. The minimum limit should be
3500 bytes per second.
l Use scheduled limits—Carbonite Migrate will transmit data using a dynamic bandwidth
based on the schedule you configure. Bandwidth will not be limited during unscheduled
times.
l New—Click New to create a new scheduled bandwidth limit. Specify the following
information.
l Daytime entry—Select this option if the start and end times of the bandwidth
window occur in the same day (between 12:01 AM and midnight). The start
time must occur before the end time.
l Overnight entry—Select this option if the bandwidth window begins on one
day and continues past midnight into the next day. The start time must be later
than the end time, for example 6 PM to 6 AM.
l Day—Enter the day on which the bandwidth limiting should occur. You can
pick a specific day of the week, Weekdays to have the limiting occur Monday
through Friday, Weekends to have the limiting occur Saturday and Sunday, or
Every day to have the limiting repeat on all days of the week.
l Start time—Enter the time to begin bandwidth limiting.
l End time—Enter the time to end bandwidth limiting.
l Preset bandwidth—Select a bandwidth limit rate from the common
bandwidth limit values. The Bandwidth field will automatically update to the
bytes per second value for your selected bandwidth.
l Bandwidth—If desired, modify the bandwidth using a bytes per second value.
If you change your job option from Use scheduled limits to Do not limit
bandwidth or Use a fixed limit, any schedule that you created will be preserved.
That schedule will be reused if you change your job option back to Use scheduled
limits.
You can manually override a schedule after a job is established by selecting Other
Job Options, Set Bandwidth. If you select No bandwidth limit or Fixed
bandwidth limit, that manual override will be used until you go back to your
schedule by selecting Other Job Options, Set Bandwidth, Scheduled
bandwidth limit. For example, if your job is configured to use a daytime limit, you
would be limited during the day, but not at night. But if you override that, your
override setting will continue both day and night, until you go back to your schedule.
See the Managing and controlling jobs section for your job type for more
information on the Other Job Options.
Once a job is created, do not change the name of underlying hardware components used in the
job, such as volume names, network adapter names, or virtual switch names. Any
component used by name in your job must continue to use that name throughout the lifetime of
the job.
Column 1 (Blank)
The first blank column indicates the state of the job.
A green circle with a white checkmark indicates the job is in a healthy state. No
action is required.
A yellow triangle with a black exclamation point indicates the job is in a pending or
warning state. This icon is also displayed on any server groups that you have created
that contain a job in a pending or warning state. Carbonite Migrate is working or waiting
on a pending process or attempting to resolve the warning state.
A red circle with a white X indicates the job is in an error state. This icon is also
displayed on any server groups that you have created that contain a job in an error
state. You will need to investigate and resolve the error.
Name
The name of the job
Target data state
l OK—The data on the target is in a good state.
l Mirroring—The target is in the middle of a mirror process. The data will not be in a
good state until the mirror is complete.
l Mirror Required—The data on the target is not in a good state because a remirror
is required. This may be caused by an incomplete or stopped mirror or an operation
may have been dropped on the target.
l Busy—The source is low on memory causing a delay in getting the state of the data
on the target.
l Not Loaded—Carbonite Migrate target functionality is not loaded on the target
server. This may be caused by a license key error.
l Not Ready—The Linux drivers have not yet completed loading on the target.
l Unknown—The console cannot determine the status.
Mirror remaining
The total number of mirror bytes that are remaining to be sent from the source to the
target.
Mirror skipped
The total number of bytes that have been skipped when performing a difference mirror.
These bytes are skipped because the data is not different on the source and target.
Replication queue
The total number of replication bytes in the source queue
Disk queue
The amount of disk space being used to queue data on the source
Recovery point latency
The length of time replication is behind on the target compared to the source. This is the
time period of replication data that would be lost if a failure were to occur at the current
time. This value represents replication data only and does not include mirroring data. If
you are mirroring and fail over, the data on the target will be at least as far behind as the
recovery point latency. It could potentially be further behind depending on the
circumstances of the mirror. If mirroring is idle and you fail over, the data will only be as
far behind as the recovery point latency time.
Delete
Stops (if running) and deletes the selected jobs.
Provide Credentials
Changes the login credentials that the job (which is on the target machine) uses to
authenticate to the servers in the job. This button opens the Provide Credentials dialog
box where you can specify the new account information and which servers you want to
update.
Start
Starts or resumes the selected jobs.
If you have previously stopped protection, the job will restart mirroring and replication.
If you have previously paused protection, the job will continue mirroring and replication
from where it left off, as long as the Carbonite Migrate queue was not exhausted during
the time the job was paused. If the Carbonite Migrate queue was exhausted during the
time the job was paused, the job will restart mirroring and replication.
Also if you have previously paused protection, all jobs from the same source to the
same IP address on the target will be resumed.
Stop
Stops the selected jobs. The jobs remain available in the console, but there will be no
mirroring or replication data transmitted from the source to the target. Mirroring and
replication data will not be queued on the source while the job is stopped, requiring a
remirror when the job is restarted. The type of remirror will depend on your job settings.
Take Snapshot
Snapshots are not applicable to migration jobs.
Manage Snapshots
Snapshots are not applicable to migration jobs.
Failover or Cutover
Starts the cutover process. See Cutting over full server migration jobs on page 124 for
the process and details of cutting over a full server migration job.
Failback
Starts the failback process. Failback does not apply to migration jobs.
Restore
Starts the restoration process. Restore does not apply to migration jobs.
Reverse
Reverses protection. Reverse protection does not apply to migration jobs.
If a file is small enough that mirroring the entire file is faster than
comparing it and then mirroring it, Carbonite Availability will
automatically mirror the entire file.
l Verify—Even if you have scheduled the verification process, you can run it manually
any time a mirror is not in progress.
l Report only—Select this option if you only want to generate a verification
report. With this option, no data that is found to be different will be mirrored to
the target. Choose how you want the verification to compare the files.
l Report and mirror files—Select this option if you want to generate a
verification report and mirror data that is different to the target. Select the
comparison method and type of mirroring you want to use. See the previous
mirroring methods described under Mirror Options.
l Set Bandwidth—You can manually override bandwidth limiting settings configured
for your job at any time.
l No bandwidth limit—Carbonite Migrate will transmit data using 100%
bandwidth availability.
l Fixed bandwidth limit—Carbonite Migrate will transmit data using a limited,
fixed bandwidth. Select a Preset bandwidth limit rate from the common
bandwidth limit values. The Bandwidth field will automatically update to the
bytes per second value for your selected bandwidth. This is the maximum
amount of data that will be transmitted per second. If desired, modify the
bandwidth using a bytes per second value. The minimum limit should be 3500
bytes per second.
l Scheduled bandwidth limit—If your job has a configured scheduled
bandwidth limit, you can enable that schedule with this option.
l Delete Orphans—Even if you have enabled orphan file removal during your mirror
and verification processes, you can manually remove them at any time.
l Target—You can pause the target, which queues any incoming Carbonite Migrate
data from the source on the target. All active jobs to that target will complete the
operations already in progress. Any new operations will be queued on the target
until the target is resumed. The data will not be committed until the target is
resumed. Pausing the target only pauses Carbonite Migrate processing, not the
entire server.
Overflow Chevron
Displays any toolbar buttons that are hidden from view when the window size is
reduced.
Job name
The name of the job
Job type
Each job type has a unique job type name. This job is a Full Server Migration job. For a
complete list of all job type names, press F1 to view the Carbonite Replication Console
online help.
Health
If you have specified replication rules that exclude a volume at the root, that volume will be
incorrectly added as an inclusion if you edit the job after it has been established. If you
need to edit your job, modify the replication rules to make sure they include the proper
inclusion and exclusion rules that you want.
3. If you want to modify the workload items or replication rules for the job, click Edit workload or
replication rules. Modify the Workload item you are protecting, if desired. Additionally, you can
modify the specific Replication Rules for your job.
Click OK to return to the Edit Job Properties page.
If you remove data from your workload and that data has already been sent to the target,
you will need to manually remove that data from the target. Because the data you
removed is no longer included in the replication rules, Carbonite Migrate orphan file
detection cannot remove the data for you. Therefore, you have to remove it manually.
Because the job log window communicates with the target server, if the console loses
communication with the target server after the job log window has already been opened, the job
log window will display an error.
The following table identifies the controls and the table columns in the Job logs window.
Start
This button starts the addition and scrolling of new messages in the window.
Pause
This button pauses the addition and scrolling of new messages in the window. This is
only for the Job logs window. The messages are still logged to their respective files on
the server.
Clear
This button clears the Job logs window. The messages are not cleared from the
respective files on the server. If you want to view all of the messages again, close and
reopen the Job logs window.
Time
This column in the table indicates the date and time when the message was logged.
Description
This column in the table displays the actual message that was logged.
on the target. The source may be automatically shut down if it is still running, depending on
your job configuration. The target will stand in for the source by rebooting and applying the
source identity, including its system state, on the target. After the reboot, the target
becomes the source, and the target no longer exists.
l Perform test cutover—This option is not applicable to full server migration jobs.
l Apply data in target queues before failover or cutover begins—All of the data in the target
queue will be applied before cutover begins. The advantage to this option is that all of the
data that the target has received will be applied before cutover begins. The disadvantage to
this option is that, depending on the amount of data in queue, the amount of time to apply all of
the data could be lengthy.
l Discard data in the target queues and failover or cutover immediately—All of the
data in the target queue will be discarded and cutover will begin immediately. The
advantage to this option is that cutover will occur immediately. The disadvantage is that any
data in the target queue will be lost.
4. When you are ready to begin cutover, click Cutover.
Google Cloud requires specific installation packages from the Google Cloud Repository in
order to successfully migrate. There is not a concise list of packages required because the
list is dependent on multiple factors including the source operating system and
configuration and your Google Cloud project. Rather than try to pre-install every possible
package, it is generally easier to complete the migration and if there are package issues,
address them post-migration. If the migration fails to cutover, review the job log and then
modify the /opt/dbtk/etc/management-service.properties file to include any missing
packages. See the knowledge base article Full Server to Google Cloud Platform (GCP)
Fails Updating Packages at https://support.carbonite.com/doubletake/articles/Full-
Server-to-Google-Cloud-Platform-GCP-Fails-Updating-Packages for details on how to
update the properties file.
l Operating system—Red Hat Enterprise Linux, Oracle Enterprise Linux, and CentOS
l Version—7.7 through 7.9
l Operating system—Red Hat Enterprise Linux, Oracle Enterprise Linux, and CentOS
l Version—8.2 through 8.4
l Operating system—CloudLinux
l Version—7.9
l Kernel type—Default
l Kernel type—Default
l Notes—If you are planning to convert an existing file system to Btrfs, you must
delete any existing Carbonite Migrate jobs and re-create them after converting to
Btrfs.
l Operating system—SUSE Linux Enterprise
l Version—15.0 through 15.2
l Kernel type—Default
l Notes—If you are planning to convert an existing file system to Btrfs, you must
delete any existing Carbonite Migrate jobs and re-create them after converting to
Btrfs.
l Kernel type—Generic
l Operating system—Ubuntu
l Version—18.04.1 through 18.04.3
l Kernel type—Generic
l Operating system—Ubuntu
l Version—20.04.0
l Kernel type—Generic
For all operating systems except Ubuntu, the kernel version must match the expected
kernel for the specified release version. For example, if /etc/redhat-release declares the
system to be a Red Hat 7.5 system, the kernel that is installed must match that.
l Packages and services—Each Linux server must have the following packages and services
installed before you can install and use Carbonite Migrate. See your operating system
documentation for details on these packages and utilities.
l sshd (or the package that installs sshd)
l lsb
l parted
l dmidecode
l scp
l which
l libnsl (only required for Red Hat Enterprise Linux, Oracle Enterprise Linux, and CentOS
The free versions of ESX restrict functionality that Carbonite Migrate requires. Therefore,
you must use one of the paid editions of ESX.
l Virtual recovery appliance—The target ESX host must have an existing virtual machine,
known as a virtual recovery appliance. You must have this appliance before you can begin
migration. When you begin migration, the virtual recovery appliance will mount disks, format disks,
and so on. When cutover occurs, a new virtual machine is powered on using the replicated disks
from the appliance. Once the new virtual machine is online, it will have the identity, data, and
system state of the source. Since the appliance maintains its own identity, it can be reused for
additional cutovers.
You have the choice of using an OVF (Open Virtualization Format) virtual machine included with
Carbonite Migrate for your appliance, or creating your own appliance that meets the requirements
below. In either case, keep in mind the following caveats for the appliance.
l The virtual recovery appliance must be a standalone virtual machine.
l It should not reside in any multiple virtual machine vApp.
l The OVF appliance is pre-configured for optimal performance. You do not need to modify
the memory, CPU, or other configurations.
l You should not install or run anything else on the appliance.
l A single virtual recovery appliance can migrate a maximum of 10 sources or jobs with a
maximum of a combined total of 59 volume groups and raw block devices.
If you are creating your own appliance, it must meet the following requirements.
l Operating system—The virtual machine must be running a 64-bit version of one of the
following operating systems.
l Ubuntu version 18.04.1 through 18.04.3
A SLES appliance can only protect source servers running a Carbonite Migrate
supported SLES version. You cannot protect other Linux operating systems to a
SLES appliance.
If the source server you are protecting has the Btrfs file system and you
are using an Ubuntu appliance, the appliance must have the btrfs-tools package. If
the source server you are protecting is SLES 12.x with Btrfs and you are using a
SLES appliance, the btrfsprogs package should already be on the SLES appliance
by default. You cannot protect Btrfs to a Red Hat or CentOS appliance.
l Permissions—If you want to limit the permissions required for the account that you will be using
for your full server to ESX migration job, your account must have at a minimum the permissions
listed below. These permissions can be set at the vCenter, Datacenter, or host level.
l Datastore—Allocate Space, Browse Datastore, Low level file operations, and Remove
File
l Host, Local Operations—Create Virtual Machine, Delete Virtual Machine, and
l Scheduled Task—Create Tasks, Modify Task, Remove Task, and Run Task
l Virtual Machine, Configuration—Add existing disk, Add new disk, Add or remove
Make sure you have additional disk space for Carbonite Migrate queuing, logging, and so
on.
l Server name—Carbonite Migrate includes Unicode file system support, but your server name
must still be in ASCII format. Additionally, all Carbonite Migrate servers and appliances must have
a unique server name.
l Target drivers—Install on the source any drivers that are required on the target after failover.
For example, you need to install on the source any NIC drivers that will be required on the target
after failover.
l Protocols and networking—Your servers must meet the following protocol and networking
requirements.
l Your servers must have TCP/IP with static IP addressing.
l IPv4 only configurations are supported. IPv4 and IPv6 are supported in combination.
l If you are using IPv6 on your servers, your console must be run from an IPv6 capable
machine.
l In order to properly resolve IPv6 addresses to a hostname, a reverse lookup entry should
be made in DNS.
l If you are using Carbonite Migrate over a WAN and do not have DNS name resolution, you
will need to add the host names to the local hosts file on each server running Carbonite
Migrate.
l Ubuntu Netplan is supported, however the network configuration on the source and target
should match. If you have a mix of network types (traditional, NetworkManager, or Netplan)
on the source and target, you may have to configure the networking on the target after
cutover.
l NAT support—Carbonite Migrate supports NAT environments with the following caveats.
l Only IPv4 is supported.
l Only standalone servers are supported.
l Make sure you have added your server to the Carbonite Replication Console using the
correct public or private IP address. The name or IP address you use to add a server to the
console is dependent on where you are running the console. Specify the private IP address
of any servers on the same side of the router as the console. Specify the public IP address
of any servers on the other side of the router as the console.
l DNS failover and updates will depend on your configuration
l Only the source or target can be behind a router, not both.
3. By default, Carbonite Migrate selects the system and boot volumes for migration. You will be
unable to deselect these volumes. Select any other volumes on the source that you want to
migrate.
The swap partition is excluded by default and you cannot select it, however, a swap
partition will be created on the replica.
If desired, click the Replication Rules heading and expand the volumes under Folders. You will
see that Carbonite Migrate automatically excludes particular files that cannot be used during the
migration. If desired, you can exclude other files that you do not want to migrate, but be careful
when excluding data. Excluded volumes, folders, and/or files may compromise the integrity of
your installed applications.
Volumes and folders with a green highlight are included completely. Volumes and folders
highlighted in light yellow are included partially, with individual files or folders included. If there is
no highlight, no part of the volume or folder is included.
If you return to this page using the Back button in the job creation workflow, your
Workload Types selection will be rebuilt, potentially overwriting any manual replication
rules that you specified. If you do return to this page, confirm your Workload Types and
Replication Rules are set to your desired settings before proceeding forward again.
l Current Servers—This list contains the servers currently available in your console
session. Servers that are not licensed for the workflow you have selected and those not
applicable to the workload type you have selected will be filtered out of the list. Select your
target server from the list.
If you enter the target server's fully-qualified domain name, the Carbonite Replication
Console will resolve the entry to the server short name. If that short name resides in two
different domains, this could result in name resolution issues. In this case, enter the IP
address of the server.
When specifying credentials for a new server, specify a user that is a member of the local
dtadmin security group.
l Current VMware Servers—This list contains the vCenter and ESX servers currently
available in your console session. Select your server from the list.
l Find a New VMware Server—If the server you need is not in the Current VMware
Servers list, click the Find a New VMware Server heading.
l vCenter/ESXi Server—Select your server from the list. If your server is not in the list, add it manually.
If your server name does not match the security certificate or the security certificate has
expired, you will be prompted if you want to install the untrusted security certificate.
8. Click Next to continue.
You may be prompted for a route from the target to the source. This route is used so the
target can communicate with the source to build job options. This dialog box will be
displayed only if needed.
9. You have many options available for your server migration job. Configure those options that are
applicable to your environment.
Go to each page identified below to see the options available for that section of the Set Options
page. After you have configured your options, continue with the next step on page 150.
l General on page 136
l Replica Virtual Machine Location on page 136
l Replica Virtual Machine Configuration on page 137
l Replica Virtual Machine Volumes on page 138
l Replica Virtual Machine Network Settings on page 144
l Failover Options on page 145
l Mirror, Verify & Orphaned Files on page 145
l Network Route on page 146
l Compression on page 148
l Bandwidth on page 149
General
For the Job name, specify a unique name for your job.
l Display name—Specify the name of the replica virtual machine. This will be the display
name of the virtual machine on the host system.
l Hardware configuration—Specify how you want the replica virtual machine to be
created.
l Sockets—Specify how many sockets to create on the new virtual machine. The
cores per socket on the source is displayed to guide you in making an appropriate
selection.
l Memory—Specify the amount of memory, in MB, to create on the new virtual machine.
If the operating system on the source is not compatible with the VmxNet3 driver on
the target appliance, and the source does not have VMware Tools already, you
need to install VMware Tools on the replica after failover in order for the VmxNet3
adapter to work correctly. Alternatively, you could select a different network
adapter type, if another type is available.
If your source is UEFI, you will only have the option to create disks that match your
source. You will not be able to create disks per volume on your replica virtual machine.
l Create disks matching source—Select this option if you want the disk configuration on
the target replica to match the disk configuration on the source.
l Virtual Disk—Specify if you want Carbonite Migrate to create a new disk for your
replica virtual machine or if you want to use an existing disk. If you have more than
one disk, you cannot mix and match new and existing. They must all be new disks or
all existing disks.
Reusing a virtual disk can be useful for pre-staging data on a LAN and then
relocating the virtual disk to a remote site after the initial mirror is complete. You save
time by skipping the virtual disk creation steps and performing a difference mirror
instead of a full mirror. With pre-staging, less data will need to be sent across the wire
initially. In order to use an existing virtual disk, it must be a valid virtual disk, it cannot
be attached to any other virtual machine, and it cannot have any associated
snapshots.
Each pre-existing disk must be located on the target datastore specified. If you have
copied the .vmdk file to this location manually, be sure you have also copied the
associated -flat.vmdk file too. If you have used vCenter to copy the virtual machine,
the associated file will automatically be copied. There are no restrictions on the file
name of the .vmdk, but the associated -flat.vmdk file must have the same base name and
the reference to that flat file in the .vmdk must be correct. Carbonite Migrate will move,
not copy, the virtual disk files to the appropriate folders created by the replica, so make
sure the selected target datastore is where you want the replica virtual disk to be located.
If you have reused some existing disks and created some new disks, the
numbering of the hard disks will not be identical on the source and the replica
virtual machine. New disks will be created first and then existing disks will be
attached. VMware assigns the hard disk numbers in order of creation and
then those that are attached. The Virtual Device Node SCSI IDs will still be
correct and there will be no impact within the guest of the replica virtual
machine.
If your source has multiple partitions inside a single .vmdk, you can only use
an existing virtual disk that Carbonite Migrate created. You can only use an
existing virtual disk created outside of Carbonite Migrate if there is one
partition in each pre-existing disk.
If you are using Logical Volume Manager, then you can only use existing
disks when creating a new full server to ESX appliance job if the existing
disks were created using Carbonite Migrate version 7.1 or later. Versions
prior to 7.1 have important LVM information deleted when the job is deleted,
thus you cannot reuse the disk for a future job. If you are not using LVM, this
is not an issue.
l Datastore—Specify the datastore where you want to store the .vmdk files for the
disk. You can specify the location of the virtual machine configuration files in the
Replica Virtual Machine Location section.
l Replica disk format—If you are creating a new disk, specify the format of the disk
that will be created.
l Thick Lazy Zeroed—This disk format allocates the full amount of the disk
space immediately, but does not initialize the disk space to zero until it is
needed. It may also be known as a flat disk.
l Thick Eager Zeroed—This disk format allocates the full amount of the disk
space immediately, initializing all of the allocated disk space to zero.
l Thin—This disk format does not allocate the disk space until it is needed.
l Create disks per volume—Select this option if you want to configure the disks on the
target replica per volume group and partition.
l Volume Group Properties—If your source has volume groups, you will see them
listed in the Volume list. Highlight a volume group and set the available Volume
Group Properties that are displayed to the right of the Volume list. The fields
displayed in the Volume Group Properties will depend on your selection for
Virtual disk.
l Virtual Disk—Specify if you want Carbonite Migrate to create a new disk for
your replica virtual machine or if you want to use an existing disk.
Reusing a virtual disk can be useful for pre-staging data on a LAN and then
relocating the virtual disk to a remote site after the initial mirror is complete.
You save time by skipping the virtual disk creation steps and performing a
difference mirror instead of a full mirror. With pre-staging, less data will need to
be sent across the wire initially. In order to use an existing virtual disk, it must
be a valid virtual disk, it cannot be attached to any other virtual machine, and it
cannot have any associated snapshots.
Each pre-existing disk must be located on the target datastore specified. If you
have copied the .vmdk file to this location manually, be sure you have also
copied the associated -flat.vmdk file too. If you have used vCenter to copy the
virtual machine, the associated file will automatically be copied. There are no
restrictions on the file name of the .vmdk, but the associated -flat.vmdk file
must have the same base name and the reference to that flat file in the .vmdk
must be correct. Carbonite Migrate will move, not copy, the virtual disk files to
the appropriate folders created by the replica, so make sure the selected
target datastore is where you want the replica virtual disk to be located.
In a WAN environment, you may want to take advantage of using an existing
disk by using a process similar to the following.
If you have reused some existing disks and created some new disks,
the numbering of the hard disks will not be identical on the source and
the replica virtual machine. New disks will be created first and then
existing disks will be attached. VMware assigns the hard disk numbers
in order of creation and then those that are attached. The Virtual
Device Node SCSI IDs will still be correct and there will be no impact
within the guest of the replica virtual machine.
If your source has multiple partitions inside a single .vmdk, you can
only use an existing virtual disk that Carbonite Migrate created. You
can only use an existing virtual disk created outside of Carbonite
Migrate if there is one partition in each pre-existing disk.
If you are using Logical Volume Manager, then you can only use
existing disks when creating a new full server to ESX appliance job if
the existing disks were created using Carbonite Migrate version 7.1 or
later. Versions prior to 7.1 have important LVM information deleted
when the job is deleted, thus you cannot reuse the disk for a future job.
If you are not using LVM, this is not an issue.
l Datastore—Specify the datastore where you want to store the .vmdk files for
the volume group. You can specify the location of the virtual machine
configuration files in the Replica Virtual Machine Location section.
l Replica disk format—If you are creating a new disk, specify the format of the
disk that will be created.
l Thick Lazy Zeroed—This disk format allocates the full amount of the
disk space immediately, but does not initialize the disk space to zero
until it is needed. It may also be known as a flat disk.
l Thick Eager Zeroed—This disk format allocates the full amount of the
disk space immediately, initializing all of the allocated disk space to zero.
l Thin—This disk format does not allocate the disk space until it is
needed.
l Physical volume maximum size—If you are creating a new disk, specify the
maximum size, in MB or GB, of the virtual disks used to create the volume
group. The default value is equal to the maximum size that can be attached to
the datastore you selected. That will depend on your ESX version, your file
system version, and the block size of your datastore.
l Pre-existing disks path—If you are using an existing virtual disk, specify the
location of the existing virtual disks that you want to reuse.
l Logical Volume Properties—If your source has logical volumes, you will see them
listed in the Volume list. Highlight a logical volume and set the available Logical
Volume Properties that are displayed to the right of the Volume list.
If you are using an existing virtual disk, you will not be able to modify the
logical volume properties.
The size and space displayed may not match the output of the Linux df
command. This is because df shows the size of the mounted file system not
the underlying partition which may be larger. Additionally, Carbonite Migrate
uses powers of 1024 when computing GB, MB, and so on. The df command
typically uses powers of 1000 and rounds up to the nearest whole value.
In some cases, the replica virtual machine may use more virtual disk
space than the size of the source volume due to differences in how the
virtual disk's block size is formatted and how hard links are handled.
l Partition Properties—If your source has partitions, you will see them listed in the
Volume list. Highlight a partition and set the available Partition Properties that are
displayed to the right of the Volume list. The fields displayed in the Partition
Properties will depend on your selection for Virtual disk.
The size and space displayed may not match the output of the Linux df
command. This is because df shows the size of the mounted file system not
the underlying partition which may be larger. Additionally, Carbonite Migrate
uses powers of 1024 when computing GB, MB, and so on. The df command
typically uses powers of 1000 and rounds up to the nearest whole value. (See the
sketch at the end of this list for a way to compare the two sizes on the source.)
l Virtual Disk—Specify if you want Carbonite Migrate to create a new disk for
your replica virtual machine or if you want to use an existing disk. Review the
details above under Volume Group Properties Virtual Disk for information
on using an existing disk.
l Disk size—This field displays the size of the partition on the source.
l Used space—This field displays the amount of disk space in use on the
source partition.
l Datastore—Specify the datastore where you want to store the .vmdk files for
the partition. You can specify the location of the virtual machine configuration
files in the Replica Virtual Machine Location section.
l Replica disk format—Specify the format of the disk that will be created.
l Thick Lazy Zeroed—This disk format allocates the full amount of the
disk space immediately, but does not initialize the disk space to zero
until it is needed. It may also be known as a flat disk.
l Thick Eager Zeroed—This disk format allocates the full amount of the
disk space immediately, initializing all of the allocated disk space to zero.
l Thin—This disk format does not allocate the disk space until it is
needed.
l Replica volume size—Specify the size, in MB or GB, of the replica partition
on the target. The value must be at least the size of the specified Used space
on that partition.
l Pre-existing disks path—If you are using an existing virtual disk, specify the
location of the existing virtual disks that you want to reuse.
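The size differences described in the notes above can be checked directly on the source. This is a minimal sketch, assuming a hypothetical logical volume /dev/mapper/vg00-home mounted at /home; it simply compares the size of the underlying block device with the size of the mounted file system that df reports.

# Size of the underlying block device, in bytes, as seen by the kernel.
sudo blockdev --getsize64 /dev/mapper/vg00-home

# Size and usage of the mounted file system, in 1K blocks, as reported by df.
df -k /home

Expect the two values to differ; use the Disk size and Used space fields shown in the console when sizing the replica volumes.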
l Network adapters—Select a network adapter from the source and specify the Replica
IP addresses, Replica Default Gateways, and Replica DNS Server addresses to be
used after cutover. If you add multiple gateways or DNS servers, you can sort them by
using the arrow up and arrow down buttons. Repeat this step for each network adapter on
the source.
Updates made during cutover will be based on the network adapter name when
protection is established. If you change that name, you will need to delete the job
and re-create it so the new name will be used during cutover.
If you update one of the advanced settings (IP address, gateway, or DNS server),
then you must update all of them. Otherwise, the remaining items will be left blank.
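After cutover you can confirm on the replica that the addresses, gateways, and DNS servers specified above were applied. This sketch only uses standard Linux inspection commands; it makes no assumptions about how Carbonite Migrate applies the settings.

# Show the IP addresses assigned to each network adapter on the replica.
ip -brief addr show

# Show the default gateway(s) currently in use.
ip route show default

# Show the DNS servers the resolver is configured to use.
cat /etc/resolv.conf

If the values do not match what you configured, review the network adapter mapping in the job and the Netplan/NetworkManager note earlier in this chapter.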
Failover Options
l Wait for user to initiate failover—The cutover process can wait for you to initiate it,
allowing you to control when cutover occurs. When the mirror is complete, the job will wait in the
Protecting state for you to manually initiate the cutover process. Disable this option if you
want cutover to occur immediately after the mirror is complete.
l Shutdown source server—Specify if you want to shut down the source server, if it is still
running, before the source is cut over to the target. This option prevents identity conflicts on
the network in those cases where the source and target are both still running and
communicating.
l Target Scripts—You can customize cutover by running scripts on the target appliance
and replica. Scripts may contain any valid Linux command, executable, or shell script file.
The scripts are processed using the same account running the Double-Take Management
service. Examples of functions specified in scripts include stopping and starting services,
stopping and starting applications or processes, notifying the administrator before and after
cutover occurs, and so on. There are two types of cutover scripts.
l Pre-failover script—This script runs on the target appliance at the beginning of the
cutover process. Specify the full path and name of the script file.
l Delay until script completes—Enable this option if you want to delay the cutover
process until the associated script has completed. If you select this option, make sure
your script handles errors, otherwise the cutover process may never complete if the
process is waiting on a script that cannot complete.
l Post-failover script—This script runs on the replica at the end of the cutover
process. Specify the full path and name of the script file.
l Arguments—Specify a comma-separated list of valid arguments required to
execute the script.
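As an illustration of the script hooks above, here is a minimal post-cutover sketch. The service name (myapp.service) and the mail recipient are hypothetical placeholders; any valid Linux command, executable, or shell script can be used instead.

#!/bin/sh
# post_cutover.sh - hypothetical post-cutover script run on the replica.
set -e   # stop at the first error so failures are visible

# Start an application service that should run on the replica after cutover.
systemctl start myapp.service

# Notify the administrator that cutover has completed (assumes local mail is configured).
echo "Cutover to $(hostname) completed at $(date)" | mail -s "Carbonite Migrate cutover complete" admin@example.com

Remember that scripts run under the same account as the Double-Take Management service, so that account must have the rights required by every command in the script.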
Mirror, Verify & Orphaned Files
If a file is small enough that mirroring the entire file is faster than comparing it and
then mirroring it, Carbonite Migrate will automatically mirror the entire file.
Orphaned file configuration is a per target configuration. All jobs to the same
target will have the same orphaned file configuration.
If delete orphaned files is enabled, carefully review any replication rules that
use wildcard definitions. If you have specified wildcards to be excluded from
protection, files matching those wildcards will also be excluded from
orphaned file processing and will not be deleted from the target. However, if
you have specified wildcards to be included in your protection, those files that
fall outside the wildcard inclusion rule will be considered orphaned files and
will be deleted from the target.
Network Route
If you change the IP address on the target which is used for the target route, you will be
unable to edit the job. If you need to make any modifications to the job, it will have to be
deleted and re-created.
Compression
To help reduce the amount of bandwidth needed to transmit Carbonite Migrate data, compression
allows you to compress data prior to transmitting it across the network. In a WAN environment this
provides optimal use of your network resources. If compression is enabled, the data is
compressed before it is transmitted from the source. When the target receives the compressed
data, it decompresses it and then writes it to disk. You can set the level from Minimum to
Maximum to suit your needs.
Keep in mind that the process of compressing data impacts processor usage on the source. If you
notice an impact on performance while compression is enabled in your environment, either adjust
to a lower level of compression, or leave compression disabled. Use the following guidelines to
determine whether you should enable compression.
l If data is being queued on the source at any time, consider enabling compression.
l If the server CPU utilization is averaging over 85%, be cautious about enabling
compression. (See the sampling sketch at the end of this section.)
l The higher the level of compression, the higher the CPU utilization will be.
l Do not enable compression if most of the data is inherently compressed. Many image (.jpg,
.gif) and media (.wmv, .mp3, .mpg) files, for example, are already compressed. Some
image files, such as .bmp and .tif, are uncompressed, so enabling compression would be
beneficial for those types.
l Compression may improve performance even in high-bandwidth environments.
l Do not enable compression in conjunction with a WAN Accelerator. Use one or the other to
compress Carbonite Migrate data.
All jobs from a single source connected to the same IP address on a target will share the
same compression configuration.
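One way to apply the CPU guideline above is to sample processor utilization on the source before enabling compression. This is a minimal sketch and assumes the sysstat package (for sar) or procps (for vmstat) is installed; neither tool is part of Carbonite Migrate.

# Sample CPU usage once per second for 60 seconds and report an average.
# If %idle averages below 15% (utilization above 85%), be cautious about enabling compression.
sar -u 1 60

# Alternative without sysstat: print CPU columns once per second, 60 samples.
vmstat 1 60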
Bandwidth
Bandwidth limitations are available to restrict the amount of network bandwidth used for
Carbonite Migrate data transmissions. When a bandwidth limit is specified, Carbonite Migrate
never exceeds that allotted amount. The bandwidth not in use by Carbonite Migrate is available
for all other network traffic.
All jobs from a single source connected to the same IP address on a target will share the
same bandwidth configuration.
l Do not limit bandwidth—Carbonite Migrate will transmit data using 100% bandwidth
availability.
l Use a fixed limit—Carbonite Migrate will transmit data using a limited, fixed bandwidth.
Select a Preset bandwidth limit rate from the common bandwidth limit values. The
Bandwidth field will automatically update to the bytes per second value for your selected
bandwidth. This is the maximum amount of data that will be transmitted per second. If
desired, modify the bandwidth using a bytes per second value. The minimum limit should be
3500 bytes per second. (See the conversion sketch at the end of this section.)
l Use scheduled limits—Carbonite Migrate will transmit data using a dynamic bandwidth
based on the schedule you configure. Bandwidth will not be limited during unscheduled
times.
l New—Click New to create a new scheduled bandwidth limit. Specify the following
information.
l Daytime entry—Select this option if the start and end times of the bandwidth
window occur in the same day (between 12:01 AM and midnight). The start
time must occur before the end time.
l Overnight entry—Select this option if the bandwidth window begins on one
day and continues past midnight into the next day. The start time must be later
than the end time, for example 6 PM to 6 AM.
l Day—Enter the day on which the bandwidth limiting should occur. You can
pick a specific day of the week, Weekdays to have the limiting occur Monday
through Friday, Weekends to have the limiting occur Saturday and Sunday, or
Every day to have the limiting repeat on all days of the week.
l Start time—Enter the time to begin bandwidth limiting.
l End time—Enter the time to end bandwidth limiting.
l Preset bandwidth—Select a bandwidth limit rate from the common
bandwidth limit values. The Bandwidth field will automatically update to the
bytes per second value for your selected bandwidth.
l Bandwidth—If desired, modify the bandwidth using a bytes per second value.
If you change your job option from Use scheduled limits to Do not limit
bandwidth or Use a fixed limit, any schedule that you created will be preserved.
That schedule will be reused if you change your job option back to Use scheduled
limits.
You can manually override a schedule after a job is established by selecting Other
Job Options, Set Bandwidth. If you select No bandwidth limit or Fixed
bandwidth limit, that manual override will be used until you go back to your
schedule by selecting Other Job Options, Set Bandwidth, Scheduled
bandwidth limit. For example, if your job is configured to use a daytime limit, you
would be limited during the day, but not at night. But if you override that, your
override setting will continue both day and night, until you go back to your schedule.
See the Managing and controlling jobs section for your job type for more
information on the Other Job Options.
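Because the fixed and scheduled limits take a bytes per second value, you may need to convert from the link rates you normally work with. A small worked sketch follows; the 10 Mbps figure is only an example.

# Convert a link rate in megabits per second to bytes per second:
# 10 Mbps = 10,000,000 bits per second / 8 = 1,250,000 bytes per second.
echo $(( 10 * 1000000 / 8 ))   # prints 1250000

Whatever value you choose, keep it at or above the 3500 bytes per second minimum noted for the fixed limit.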
Once a job is created, do not change the name of underlying hardware components used in the
job, such as volume names, network adapter names, or virtual switch names. Any
component used by name in your job must continue to use that name throughout the lifetime of
the job.
Column 1 (Blank)
The first blank column indicates the state of the job.
A green circle with a white checkmark indicates the job is in a healthy state. No
action is required.
A yellow triangle with a black exclamation point indicates the job is in a pending or
warning state. This icon is also displayed on any server groups that you have created
that contain a job in a pending or warning state. Carbonite Migrate is working or waiting
on a pending process or attempting to resolve the warning state.
A red circle with a white X indicates the job is in an error state. This icon is also
displayed on any server groups that you have created that contain a job in an error
state. You will need to investigate and resolve the error.
Name
The name of the job
Target data state
l OK—The data on the target is in a good state.
l Mirroring—The target is in the middle of a mirror process. The data will not be in a
good state until the mirror is complete.
l Mirror Required—The data on the target is not in a good state because a remirror
is required. This may be caused by an incomplete or stopped mirror or an operation
may have been dropped on the target.
l Busy—The source is low on memory causing a delay in getting the state of the data
on the target.
l Not Loaded—Carbonite Migrate target functionality is not loaded on the target
server. This may be caused by a license key error.
l Not Ready—The Linux drivers have not yet completed loading on the target.
l Unknown—The console cannot determine the status.
Mirror remaining
The total number of mirror bytes that are remaining to be sent from the source to the
target.
Mirror skipped
The total number of bytes that have been skipped when performing a difference mirror.
These bytes are skipped because the data is not different on the source and target.
Replication queue
The total number of replication bytes in the source queue
Disk queue
The amount of disk space being used to queue data on the source
Recovery point latency
The length of time replication is behind on the target compared to the source. This is the
time period of replication data that would be lost if a failure were to occur at the current
time. This value represents replication data only and does not include mirroring data. If
you fail over while mirroring, the data on the target will be at least as far behind as the
recovery point latency. It could potentially be further behind depending on the
circumstances of the mirror. If mirroring is idle and you fail over, the data will only be as
far behind as the recovery point latency time.
Delete
Stops (if running) and deletes the selected jobs.
Provide Credentials
Changes the login credentials that the job (which is on the target machine) uses to
authenticate to the servers in the job. This button opens the Provide Credentials dialog
box where you can specify the new account information and which servers you want to
update.
Start
Starts or resumes the selected jobs.
If you have previously stopped protection, the job will restart mirroring and replication.
If you have previously paused protection, the job will continue mirroring and replication
from where it left off, as long as the Carbonite Migrate queue was not exhausted during
the time the job was paused. If the Carbonite Migrate queue was exhausted during the
time the job was paused, the job will restart mirroring and replication.
Also if you have previously paused protection, all jobs from the same source to the
same IP address on the target will be resumed.
Stop
Stops the selected jobs. The jobs remain available in the console, but there will be no
mirroring or replication data transmitted from the source to the target. Mirroring and
replication data will not be queued on the source while the job is stopped, requiring a
remirror when the job is restarted. The type of remirror will depend on your job settings.
Take Snapshot
Snapshots are not applicable to migration jobs.
Manage Snapshots
Snapshots are not applicable to migration jobs.
Failover or Cutover
Starts the cutover process. See Cutting over full server to ESX migration jobs on page
171 for the process and details of cutting over a full server to ESX migration job.
Failback
Starts the failback process. Failback does not apply to migration jobs.
Restore
Starts the restoration process. Restore does not apply to migration jobs.
Reverse
Reverses protection. Reverse protection does not apply to migration jobs.
If a file is small enough that mirroring the entire file is faster than
comparing it and then mirroring it, Carbonite Migrate will
automatically mirror the entire file.
l Verify—Even if you have scheduled the verification process, you can run it manually
any time a mirror is not in progress.
l Report only—Select this option if you only want to generate a verification
report. With this option, no data that is found to be different will be mirrored to
the target. Choose how you want the verification to compare the files.
l Report and mirror files—Select this option if you want to generate a
verification report and mirror data that is different to the target. Select the
comparison method and type of mirroring you want to use. See the previous
mirroring methods described under Mirror Options.
l Set Bandwidth—You can manually override bandwidth limiting settings configured
for your job at any time.
l No bandwidth limit—Carbonite Migrate will transmit data using 100%
bandwidth availability.
l Fixed bandwidth limit—Carbonite Migrate will transmit data using a limited,
fixed bandwidth. Select a Preset bandwidth limit rate from the common
bandwidth limit values. The Bandwidth field will automatically update to the
bytes per second value for your selected bandwidth. This is the maximum
amount of data that will be transmitted per second. If desired, modify the
bandwidth using a bytes per second value. The minimum limit should be 3500
bytes per second.
l Scheduled bandwidth limit—If your job has a configured scheduled
bandwidth limit, you can enable that schedule with this option.
l Delete Orphans—Even if you have enabled orphan file removal during your mirror
and verification processes, you can manually remove them at any time.
l Target—You can pause the target, which queues any incoming Carbonite Migrate
data from the source on the target. All active jobs to that target will complete the
operations already in progress. Any new operations will be queued on the target
until the target is resumed. The data will not be committed until the target is
resumed. Pausing the target only pauses Carbonite Migrate processing, not the
entire server.
Overflow Chevron
Displays any toolbar buttons that are hidden from view when the window size is
reduced.
Job name
The name of the job
Job type
Each job type has a unique job type name. This job is a Full Server to ESX Migration
job. For a complete list of all job type names, press F1 to view the Carbonite Replication
Console online help.
Health
If you have specified replication rules that exclude a volume at the root, that volume will be
incorrectly added as an inclusion if you edit the job after it has been established. If you
need to edit your job, modify the replication rules to make sure they include the proper
inclusion and exclusion rules that you want.
3. If you want to modify the workload items or replication rules for the job, click Edit workload or
replication rules. Modify the Workload item you are protecting, if desired. Additionally, you can
modify the specific Replication Rules for your job.
Volumes and folders with a green highlight are included completely. Volumes and folders
highlighted in light yellow are included partially, with individual files or folders included. If there is
no highlight, no part of the volume or folder is included. To modify the items selected, highlight a
volume, folder, or file and click Add Rule. Specify if you want to Include or Exclude the item.
Also, specify if you want the rule to be recursive, which indicates the rule should automatically be
applied to the subdirectories of the specified path. If you do not select Recursive, the rule will not
be applied to subdirectories.
You can also enter wildcard rules, however you should do so carefully. Rules are applied to files
that are closest in the directory tree to them. If you have rules that include multiple folders, an
exclusion rule with a wildcard will need to be added for each folder that it needs to be applied to. For
example, if you want to exclude all .log files from /home and your rules include /home,
/home/folder1, and /home/folder2, you would need to add the exclusion rule for the root and for each
subfolder rule. So you would need to add exclude rules for /home/*.log, /home/folder1/*.log, and
/home/folder2/*.log.
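Before adding wildcard exclusion rules like these, it can help to preview exactly which files each rule would match on the source. A minimal sketch using find; the /home paths follow the example above and are not created or required by Carbonite Migrate.

# List the .log files directly under each included folder.
# -maxdepth 1 mirrors a non-recursive rule: only the named folder, not its subdirectories.
find /home -maxdepth 1 -name '*.log'
find /home/folder1 -maxdepth 1 -name '*.log'
find /home/folder2 -maxdepth 1 -name '*.log'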
If you need to remove a rule, highlight it in the list at the bottom and click Remove Rule. Be
careful when removing rules. Carbonite Migrate may create multiple rules when you are adding
directories. For example, if you add /home/admin to be included in protection, then /home will be
excluded. If you remove the /home exclusion rule, then the /home/admin rule will be removed
also.
Click OK to return to the Edit Job Properties page.
Because the job log window communicates with the target server, if the console loses
communication with the target server after the job log window has already been opened, the job
log window will display an error.
The following table identifies the controls and the table columns in the Job logs window.
Start
This button starts the addition and scrolling of new messages in the window.
Pause
This button pauses the addition and scrolling of new messages in the window. This is
only for the Job logs window. The messages are still logged to their respective files on
the server.
Clear
This button clears the Job logs window. The messages are not cleared from the
respective files on the server. If you want to view all of the messages again, close and
reopen the Job logs window.
Time
This column in the table indicates the date and time when the message was logged.
Description
This column in the table displays the actual message that was logged.
l Cutover to live data—Select this option to initiate a full, live cutover using the current data
on the target. The source may be automatically shut down if it is still running, depending on
your job configuration. The protection job is stopped and the replica virtual machine is
started on the target with full network connectivity.
l Perform test failover—This option is not applicable to full server to ESX migration jobs.
l Apply data in target queues before failover or cutover—All of the data in the target
queue will be applied before cutover begins. The advantage to this option is that all of the
data that the target has received will be applied before cutover begins. The disadvantage to
this option is that, depending on the amount of data in the queue, the amount of time to apply all of
the data could be lengthy.
l Discard data in the target queues and failover or cutover immediately—All of the
data in the target queue will be discarded and cutover will begin immediately. The
advantage to this option is that cutover will occur immediately. The disadvantage is that any
data in the target queue will be lost.
4. When you are ready to begin cutover, click Cutover.
Once cutover has started, do not reboot the target appliance. If the cutover process is
interrupted, it may fail.
Changing the driver performance settings can have a positive or negative impact on server
performance. These settings are for advanced users. If you are uncertain how to best modify the
driver performance settings, contact technical support.
l Toggle Adaptive Throttling—You can toggle between enabling (true) and disabling
(false) Adaptive Throttling. Throttling occurs when kernel memory usage exceeds the
Throttling Start Level percentage. When throttling is enabled, operations are delayed by,
at most, the amount of time set in Maximum Throttling Delay, thus reducing kernel
memory usage. Throttling stops when the kernel memory usage drops below the
Throttling Stop Level percentage.
l Toggle Forced Adaptive Throttling—You can toggle between enabling (true) and
disabling (false) Forced Adaptive Throttling. This causes all operations to be delayed by,
at most, the amount of time set in Maximum Throttling Delay, regardless of the kernel
memory being used. Adaptive Throttling must be enabled (true) in order for Forced
Adaptive Throttling to work.
l Set Maximum Throttling Delay—This option is the maximum time delay, in milliseconds,
that operations are delayed while throttling is in effect. The driver continues to monitor kernel
memory usage during a throttling delay. If a delay is no longer needed, the remainder of the
delay is skipped.
l Set Throttling Start Level—Throttling starts when kernel memory usage reaches the specified
percentage. This prevents the driver from stopping replication because memory has been
exhausted.
l Set Throttling Stop Level—Throttling stops when kernel memory usage drops below the specified
percentage.
l Set Memory Usage Limit—This option is the amount of kernel memory, in bytes, used for
queuing replication operations. When this limit is exceeded, the driver will send an error to
the service forcing a remirror of all active connections.
l Set Maximum Write Buffer Size—This option is the maximum amount of system
memory, in bytes, allowed for a single write operation. Operations exceeding this amount
are split into separate operations in the queue.
6. After you have completed your driver performance modifications, press Q as many times as
needed to return back to the main menu or to exit DTSetup.
Double-Take service but does not unload the Carbonite Migrate drivers.
l Restart service and reset driver config—This option does a full stop and start,
completely unloading the Double-Take service and Carbonite Migrate drivers and then
reloading them.
l Stop the running service and teardown driver config—This option stops the Double-
Take service and tears down the driver configuration.
Configure Block Device Replication. When you press Q to exit from that menu, you will
return to this menu.
4. When you have completed your starting and stopping tasks, press Q as many times as needed to
return back to the main menu or to exit DTSetup.
6. When you have completed your documentation and troubleshooting tasks, press Q as many times
as needed to return back to the main menu or to exit DTSetup.
When setting up a job in an environment with IP or port forwarding, make sure you specify the following
configurations.
l Make sure you have added your server to the Carbonite Replication Console using the correct
public or private IP address. The name or IP address you use to add a server to the console is
dependent on where you are running the console. Specify the private IP address of any servers
on the same side of the router as the console. Specify the public IP address of any servers on the
other side of the router as the console. This option is on the Add Servers page in the Manual
Entry tab.