
Tanzu for MySQL

Tanzu for MySQL on Cloud Foundry 10.0



You can find the most up-to-date technical documentation on the VMware by Broadcom website at:

https://techdocs.broadcom.com/

VMware by Broadcom
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com

Copyright © 2025 Broadcom. All Rights Reserved. The term “Broadcom” refers to Broadcom Inc. and/or its
subsidiaries. For more information, go to https://www.broadcom.com. All trademarks, trade names, service marks,
and logos referenced herein belong to their respective companies.


Contents
VMware Tanzu for MySQL on Cloud Foundry ..................... 14
VMware Tanzu for MySQL ....................................... 15
Product Snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
On-Demand Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
On-Demand Service Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Enterprise-ready checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

Release Notes .................................................. 18


v10.0.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
v10.0.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
v10.0.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

VMware Tanzu for MySQL - Architecture and Planning Guide .... 22


On-Demand networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Service network requirement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Default network and Service network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Required networking rules for on-demand services . . . . . . . . . . . . . . . . . . . . . 23
Required networking rules for Tanzu for MySQL . . . . . . . . . . . . . . . . . . . . . . . . 24
Required networking rules for Leader-Follower plans . . . . . . . . . . . . . . . . . . . . . . . 25
Required networking rules for highly available (HA) cluster plans . . . . . . . . . . . . . . 25
Required networking rules for Multi-Site Replication . . . . . . . . . . . . . . . . . . . . . . . 26

Availability options ............................................. 26


VMware Tanzu for MySQL on Cloud Foundry Topologies . . . . . . . . . . . . . . . . . . 26
Pros and cons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
RPOs and RTOs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
RPOs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
RTOs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Risk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

About Leader-Follower topology ................................ 29


Overview on Leader-Follower Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
About Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
About Synchronous Replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
About Leader-Follower Errands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Infrastructure Requirements for Leader-Follower Deployments . . . . . . . . . . . . 30
Capacity Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30


Availability Zones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Networking Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

About highly available clusters .................................. 31


Highly available cluster topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Using highly available deployments as a multi-site leader . . . . . . . . . . . . . . . . 32
Infrastructure requirements for highly available deployments . . . . . . . . . . . . . 33
Capacity planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Availability zones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Networking Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Highly available cluster limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Storage engine limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Architecture limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Networking limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
MySQL Server defaults for HA components . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
State snapshot transfer process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Large data file splitting enabled . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Maximum transaction sizes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
MySQL proxy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

About multi-site replication ..................................... 36


Overview of Multi-Site Replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Multi-Site Replication Plans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Multi-site replication benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Active-passive topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
App-layer active-active topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Failover and switchover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Active-passive switchover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
App-layer active-active switchover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
About enabling external access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Infrastructure requirements for multi-site configuration . . . . . . . . . . . . . . . . . 44
Capacity planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Networking requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Availability zones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Multi-site replication limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

About Service-Gateway access .................................. 45


Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

VMware Tanzu for MySQL recommended usage and limitations .. 46


About on-demand plans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Resource usage planning for on-demand plans . . . . . . . . . . . . . . . . . . . . . . . . . 47
Availability using multiple AZs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Downtime during redeploys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Persistent disk usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Single node and leader-follower . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Multi-Site Replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
HA Cluster Jumpbox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
HA cluster node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50


VMware Tanzu for MySQL - Operator Guide ..................... 52


Getting Started with VMware Tanzu for MySQL - Operator ...... 52
Installing and configuring VMware Tanzu for MySQL ............. 53
Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Create an App Security Group for Tanzu for MySQL . . . . . . . . . . . . . . . . . . . . . . . . 53
Enable the BOSH Resurrector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Download and import the tile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Configuring the tile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Configure AZs and Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Configuring service plans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Procedure for configuring service plans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Deactivate Service Plan (Optional) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Configure Global Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Configuring MySQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Configuring backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Configuring security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Configuring monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Configuring system logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Configuring service instance upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Review errands (Optional) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Verifying stemcell version and apply all changes . . . . . . . . . . . . . . . . . . . . . . . . . 71

Preparing for multi-site replication ............................. 71


Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Enable Multi-Site replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

Preparing for TLS ............................................... 72


Generated or Provided CA Certificate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Using the Generated CA Certificate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Providing Your Own CA Certificate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Find the CredHub Credentials in Tanzu Operations Manager . . . . . . . . . . . . . . 74
Set a Custom CA Certificate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Add the CA Certificate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Configure TLS in TCF-MySQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

Setting limits for on-demand service instances ................. 77


Create global-level quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Create plan-level quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Create and set org-level quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Create and set space-level quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
View current org and space-level quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Monitor quota use and service instance count . . . . . . . . . . . . . . . . . . . . . . . . . 79
Calculate resource costs for on-demand plans . . . . . . . . . . . . . . . . . . . . . . . . . 80
Example: Single Node and Leader Follower . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Example: Galera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Calculate maximum resource cost per on-demand plan . . . . . . . . . . . . . . . . . . . . . 81
Calculate maximum resource cost for all on-demand plans . . . . . . . . . . . . . . . . . . 81


Calculate actual resource cost of all on-demand plans . . . . . . . . . . . . . . . . . . . . . 82

Enabling Service-Gateway access ............................... 82


Enable TCP Routing using the tile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Configure the firewall to allow incoming traffic to the TCP Router . . . . . . . . . 83
Configure the Load Balancer in the IaaS to redirect traffic to the TCP Router . 83
Create a DNS record that maps to the Load Balancer . . . . . . . . . . . . . . . . . . . 84
Enable Service-Gateway access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Turn off Service-Gateway access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Developer workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

Controlling access to service plans by org . . . . . . . . . . . . . . . . . . . . . . . 86


Control access to service plan by orgs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86

Operator Guide - Managing VMware Tanzu for MySQL ........... 86


Upgrading VMware Tanzu for MySQL ............................ 87
Upgrade Tanzu for MySQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
About individual service instance upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Upgrading from MySQL 5.7 to 8.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
About MySQL 5.7 to 8.0 upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Service interruptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Stemcell or service update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Plan change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Service broker deployments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

Backing up and Restoring ....................................... 91


About Backups for Tanzu for MySQL ............................ 91
About full backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
About full backup files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
About incremental backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
About incremental backup files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Prerequisite: adbr plug-in . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

Configuring full backups ........................................ 93


Configuring automatic full backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Back up using SCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Create a public and private Key-Pair . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Configure backups in Tanzu Operations Manager . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Back up to Amazon S3 or Ceph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Create a Custom policy and access key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Configure backups in Tanzu Operations Manager . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Back up to Amazon S3 with instance profile . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Create an IAM role with a custom policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Add a policy to the existing Tanzu Operations Manager user or role . . . . . . . . . . . 100
Configure a VM Extension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Apply Changes and upgrade all service instances . . . . . . . . . . . . . . . . . . . . . . . . 108
(Optional) Verify that the IAM role is associated with MySQL service instances . . . 108
Back up to GCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Create a service account and private key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109


Configure backups in Tanzu Operations Manager . . . . . . . . . . . . . . . . . . . . . . . . . 109


Back up to Azure storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Create a storage account and access key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Configure backups in Tanzu Operations Manager . . . . . . . . . . . . . . . . . . . . . . . . . 111
Use Healthwatch to confirm full backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

Configuring incremental backups .............................. 113


Manually restoring from backup ............................... 114
Identify and download the backup artifact . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
If restoring a backup from a deleted service instance . . . . . . . . . . . . . . . . . . . . . 115
If restoring to a different foundation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Retrieve backup encryption key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Create and prepare a new service instance for restore . . . . . . . . . . . . . . . . . 118
Restore the service instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Restage the service instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

Accessing a database as an admin user ....................... 121


Connect to MySQL with BOSH SSH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Connect to MySQL with CredHub credentials . . . . . . . . . . . . . . . . . . . . . . . . . 121

Rotating certificates ........................................... 123


Rotate services TLS certificate authority . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Certificates used by Tanzu for MySQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124

Resolving service interruptions ................................ 124


Stemcell or service update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Plan change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
VM Process failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
VM Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
AZ Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Region failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

Running service broker errands ................................ 126


Run an errand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Available errands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
find-deprecated-bindings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
smoke-tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
configure-leader-follower . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
make-leader . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
make-read-only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
upgrade-all-service-instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
register-broker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
delete-all-service-instances-and-deregister-broker . . . . . . . . . . . . . . . . . . . . . . . 129
recreate-all-service-instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
orphan-deployments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
inspect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
bootstrap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132

Troubleshooting VMware Tanzu for MySQL ..................... 133


Troubleshoot errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133


Failed Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133


Cannot Create or Delete Service Instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Broker Request Timeouts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Instance Does Not Exist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Cannot Bind to or Unbind from Service Instances . . . . . . . . . . . . . . . . . . . . . . . . 135
Cannot Connect to a Service Instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Upgrade All Service Instances Errand Fails . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Missing Logs and Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
MySQL Load is High with Large Number of CredHub Encryption Keys . . . . . . . . . . 137
Leader-Follower Service Instance Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Inoperable app and database errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Highly available cluster errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Failed backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Troubleshoot components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
BOSH problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Failing jobs and unhealthy instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
AZ or region failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Techniques for troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Parse a Cloud Foundry error message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Access broker and instance logs and VMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Run service broker errands to manage brokers and instances . . . . . . . . . . . . . . . . 153
Reinstall a tile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
View resource saturation and scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Identify apps using a service instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Monitor quota saturation and service instance count . . . . . . . . . . . . . . . . . . . . . . 154
Techniques for troubleshooting highly available clusters . . . . . . . . . . . . . . . . . . . 155
Force a node to rejoin a highly available cluster manually . . . . . . . . . . . . . . . . . . 156
Recreate a corrupted VM in a highly available cluster . . . . . . . . . . . . . . . . . . . . . 156
Check replication status in a highly available cluster . . . . . . . . . . . . . . . . . . . . . . 157
Tools for Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
Downloading logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
mysql-diag . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Knowledge Base (Community) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
File a support ticket . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160

About data migration in VMware Tanzu for MySQL ............. 160


Triggering a Leader-Follower failover .......................... 160
Retrieve information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Promote the Follower . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Clean up former Leader VM (Optional) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Configure the new Follower . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Unbind and rebind the app . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166

VMware Tanzu for MySQL Clusters HA Procedures ............. 167


Bootstrapping ................................................. 167


When to bootstrap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Run the Bootstrap errand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Discover type of cluster failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Scenario 1: VMs running, cluster disrupted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Scenario 2: VMs terminated or lost . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Recreate the missing VMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Run the Bootstrap errand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Bootstrap manually . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Shut down MySQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Verify which node to bootstrap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Bootstrap the first node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Restart remaining nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Verify that the nodes have joined the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Manually force a MySQL node to rejoin if a node cannot rejoin the HA cluster . . . . 179

Running mysql-diag . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180


Run mysql-diag using the BOSH CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Example healthy output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Example unhealthy output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182

About the replication canary ................................... 183


Sample notification email . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Determine if the cluster is accessible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184

Data at rest full-disk encryption (FDE) ......................... 185


Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Enabling full-disk encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185

VMware Tanzu for MySQL - Developer Guide ................... 186


VMware Tanzu for MySQL - Developer Guide - Getting Started . 186
Using VMware Tanzu for MySQL ............................... 187
Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
Confirm the Tanzu for MySQL service availability . . . . . . . . . . . . . . . . . . . . . . 188
Check service availability in the Marketplace . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
Check that an instance is running in the space . . . . . . . . . . . . . . . . . . . . . . . . . . 188
Create a service instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
Bind a service instance to your app . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Use the MySQL service in your app . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Use custom schemas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
MySQL environment variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Manage a service instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
Migrate data to a different plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
Upgrade an individual service instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
Upgrade a service instance to MySQL 8.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Share service instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Unbind an app from a service instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Delete a service instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196

9
Tanzu for MySQL

Purge a service instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196

Using VMware Tanzu for MySQL for multi-site replication ...... 197
Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Multi-site replication usage overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
Select a multi-site leader topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
Create multi-site service instances in your two foundations . . . . . . . . . . . . . 199
Configure multi-site using the mysql-tools plug-in . . . . . . . . . . . . . . . . . . . . . 201
Configure multi-site manually . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Bind a multi-site configured service instance to your app . . . . . . . . . . . . . . . 208
Upgrade a multi-site configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208

Using TLS ..................................................... 208


Establish a TLS connection to a service instance . . . . . . . . . . . . . . . . . . . . . . 209

About data migration in VMware Tanzu for MySQL ............. 210


The migrate command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
Use cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
Use cases requiring the migrate command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
Use cases not requiring the migrate command . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Omitted data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212

Migrating data in VMware Tanzu for MySQL .................... 212


Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Resource planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Install the mysql-tools CF CLI plugin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Enable source access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Source access across spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Source access off-platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Migrate data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Stop and unbind apps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Migrate data to destination instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
Validate data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Rebind and re-stage apps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217

About MySQL server defaults .................................. 218


Server Defaults for All Plans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Server defaults for single node and leader-follower plans . . . . . . . . . . . . . . . 220
Server defaults for highly available cluster plans . . . . . . . . . . . . . . . . . . . . . . 221
Server defaults for Multi‑Site Replication plans . . . . . . . . . . . . . . . . . . . . . . . 221

Changing defaults using arbitrary parameters ................. 222


Optional parameters for changing server defaults . . . . . . . . . . . . . . . . . . . . . 222
Set optional parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
Workloads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Workload Profile Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Lowercase table names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Character sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Synchronous replication for leader-follower . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Synchronous replication timeout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
Backup schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226


Optimize for short words . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227


WSREP applier threads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227

Developer Guide - Managing VMware Tanzu for MySQL ......... 228


Connecting to VMware Tanzu for MySQL ....................... 228
Customizing database credentials ............................. 228
Create read-only access credentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Creating custom username credentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230

Using management tools for VMware Tanzu for MySQL ........ 231
MySQLWeb Database management app . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
cf CLI MySQL plugin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Desktop Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232

Creating a service instance with Service-Gateway access ...... 233


Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
Create a service instance that allows off-platform access . . . . . . . . . . . . . . . 234
Enable an existing service instance for off-platform access . . . . . . . . . . . . . 235
Deactivate off-platform access on a service instance . . . . . . . . . . . . . . . . . . 236

Using SSH to connect from outside a deployment .............. 236


Prerequisite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237

Triggering multi-site replication failover and switchover ....... 237


Verify follower health . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Select your promoted leader topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Trigger a failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Promote the follower . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
Delete or purge the former leader . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Create a new follower . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Reconfigure multi-site replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Trigger a switchover using mysql-tools plugin . . . . . . . . . . . . . . . . . . . . . . . . . 242
Trigger a Switchover Manually . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Make the leader read-only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Promote the follower . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Reconfigure multi-site replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246

Backing up and Restoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252


About Backups for Tanzu for MySQL ........................... 253
About full backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
About incremental backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
Prerequisite: adbr plug-in . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254

Using full backups for VMware Tanzu for MySQL ............... 254
Manually back up a service instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Restore a service instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
About restoring a service instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Restore a service instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
Troubleshooting the adbr plug-in . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258


Monitor backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258


Use the adbr plug-in to list backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258

Restoring Incremental Backups ................................ 259


Overview of incremental backups and restores . . . . . . . . . . . . . . . . . . . . . . . 259
Prepare incremental backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
Perform an incremental restore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
Monitor Incremental Backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Troubleshooting Incremental Backups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261

Backing up and restoring with mysqldump ..................... 262


Back up and restore a Tanzu for MySQL logical backup . . . . . . . . . . . . . . . . . 263
Create a Tanzu for MySQL logical backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
Restore from a Tanzu for MySQL logical backup . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Restore from an off-platform logical backup . . . . . . . . . . . . . . . . . . . . . . . . . . 265

Monitoring Node Health ....................................... 266


Monitor node health . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Prerequisite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Monitor node health using the dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Monitor node health using the API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Node health status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
Healthy nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
Unhealthy nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
Unresponsive nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270

Migrating HA instances for multi-site replication ............... 270


Migrating older high-availability instances for multi-site replication . . . . . . . 270
Backup and restore your old HA to a new Multi‑Site Replication instance . . . . . . . . 271
Scale up Multi‑Site Replication to your new HA . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Configure your new HA as multi-site replication leader . . . . . . . . . . . . . . . . . . . . 271
Other Notes and Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
cf mysql-tools plugin HA leaders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272

Troubleshooting on-demand instances ......................... 272


Troubleshoot errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Common service errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
Instances or database are inaccessible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Failed backup or restore with the adbr plug-in . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Persistent disk usage is increasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Techniques for troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Understand a Cloud Foundry error message . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
Find information about your service instance . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
Use the Knowledge Base Community . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
File a Support Ticket . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279

Monitoring and KPIs for VMware Tanzu for MySQL ............. 280
About metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
Access MySQL metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
Use Grafana . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
Use Log Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281


KPIs for MySQL service instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281


Server availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Persistent Disk Used . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Ephemeral Disk Used . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
CPU use percentage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
Queries Delta . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
Highly Available Cluster WSREP Ready . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
Highly Available Cluster WSREP Cluster Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
Highly Available Cluster WSREP Cluster Status . . . . . . . . . . . . . . . . . . . . . . . . . . 286
Hours Since Last Successful Backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
Component Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
MySQL metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
Disk metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
Leader-Follower metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
Highly Available Cluster Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294


VMware Tanzu for MySQL on Cloud Foundry

VMware Tanzu for MySQL on Cloud Foundry is an on-demand service. Users can provision dedicated
instances of MySQL using the cf CLI or Apps Manager.

Tanzu Application Service has been renamed, and is now called Tanzu Platform for Cloud
Foundry. The current version of Tanzu Platform for Cloud Foundry is 10.0.

Tanzu for MySQL enables developers to provision and use dedicated instances of a MySQL database on
demand. When you install Tanzu for MySQL, the tile deploys and maintains a single service broker that
integrates Tanzu for MySQL with Tanzu Operations Manager.

Tanzu for MySQL is configured with sensible settings by default so that the service meets user
expectations for a general use relational database.


VMware Tanzu for MySQL

You can use the on-demand VMware Tanzu for MySQL on Cloud Foundry service to provision and use
dedicated instances of a MySQL database on demand.

When you install Tanzu for MySQL, the tile deploys and maintains a single service broker that integrates
Tanzu for MySQL with Tanzu Operations Manager.

Tanzu for MySQL is configured with sensible settings by default so that the service meets user
expectations for a general-use relational database.

Tanzu for MySQL supports the following VM topologies:

Single node.

Leader-follower. For more information, see About Leader-Follower.

Highly available (HA) cluster.


For highly available clusters, Tanzu for MySQL uses a patched Galera Cluster named Percona
XtraDB Cluster (PXC). For more information about PXC, see Percona XtraDB Cluster.

Multi‑Site Replication. For more information, see About Multi‑Site Replication.
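For the HA topology, which is backed by Galera (PXC), a quick way to confirm that the replication layer is healthy is to query the wsrep status variables. The sketch below is illustrative only: it assumes a MySQL client is installed, and the host and username are placeholders that come from your service key or binding credentials.

```shell
# Placeholders: substitute the hostname and username from your
# service key or binding (cf service-key / VCAP_SERVICES).
MYSQL_HOST="10.0.8.5"
MYSQL_USER="admin"

if command -v mysql >/dev/null 2>&1; then
  # wsrep_cluster_status "Primary" and a wsrep_cluster_size equal to
  # your plan's node count indicate a healthy Galera cluster.
  mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p \
    -e "SHOW STATUS WHERE Variable_name IN ('wsrep_cluster_status','wsrep_cluster_size');" \
    || echo "could not reach $MYSQL_HOST (expected when outside the service network)"
else
  echo "mysql client not found; install a MySQL 8.0-compatible client first"
fi
```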

Product Snapshot
The following table provides version and version-support information about Tanzu for MySQL.

Element Details

Version 10.0

Release date April 15, 2025

Compatible Tanzu Operations Manager versions 3.0

Compatible Tanzu Application Service (TAS for VMs) versions 6.0, 4.0

Compatible Tanzu Platform for Cloud Foundry versions 10.x

IaaS support AWS, Azure, GCP, OpenStack, and vSphere

IPsec support Yes

The table shows only the supported versions of compatible products, but older versions are
still compatible.
TAS for VMs 2.11, 2.13, 3.0, and 5.0 are supported. Tanzu Operations Manager 2.10 is
supported.


On-Demand Service
Tanzu for MySQL is an on-demand service. This means that the service provides dedicated instances of
MySQL that developers can provision on-demand using the cf CLI or Apps Manager.

With Tanzu for MySQL you have more options for how and when instances are provisioned. Tanzu for
MySQL enables both the operator and developer to configure MySQL settings and resource use.

The Tanzu for MySQL on-demand service uses the On-Demand Services SDK for VMware Tanzu. For
information about the SDK, see On-Demand Services SDK for VMware Tanzu.

On-Demand Service Plan


Tanzu for MySQL offers an on-demand service plan called p.mysql. Operators can configure and update
the plan settings.

When operators update the VM or disk size, these settings are applied to all existing instances. If an
operator decreases the disk size, data loss might occur in existing instances.
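The provisioning flow can be sketched with the cf CLI. This is a hedged example: the plan name db-small and the instance name orders-db below are placeholders, and the plans actually available to you depend on how your operator configured the tile.

```shell
# Example names only; run "cf marketplace" to see your operator's plans.
SERVICE="p.mysql"
PLAN="db-small"        # example plan name, not a shipped default
INSTANCE="orders-db"   # example instance name

if command -v cf >/dev/null 2>&1; then
  cf marketplace -e "$SERVICE"                 # list configured plans
  cf create-service "$SERVICE" "$PLAN" "$INSTANCE"
  cf service "$INSTANCE"                       # provisioning is asynchronous
else
  echo "cf CLI not installed; commands shown for reference only"
fi
```

Because provisioning is asynchronous, poll `cf service` until the last operation reports `create succeeded` before binding the instance to an app.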

Enterprise-ready checklist
Review the following table to understand the Tanzu for MySQL features.

Plans and Instances

On-Demand, Dedicated-VM Plans: Tanzu for MySQL provides on-demand service plans. See On-Demand Networking.

Customizable Plans: Operators can customize the VM, disk size, and availability zone for service plans. See Configure Service Plans.

Custom Schemas: Tanzu for MySQL supports custom schemas. Using custom schemas enables apps to share a MySQL service instance and isolate app data by schema. See Use Custom Schemas.

Share Service Instances: Tanzu for MySQL supports sharing service instances between different orgs and spaces. See Share Service Instances.

Installation and Upgrades

Product Upgrades: See Upgrading Tanzu for MySQL.

Deployment Smoke Tests: Tanzu for MySQL installations and upgrades run a post-deployment errand that validates basic MySQL operations. See BOSH smoke-tests errand.

Maintenance and Backups

Operational Monitoring and Logging: Tanzu for MySQL provides metrics for monitoring service plan usage, service quotas, and MySQL component metrics. Tanzu for MySQL can also forward metrics to an external service. See Monitoring Tanzu for MySQL.

Backup and Restore: Tanzu for MySQL provides backups to an external storage solution on a configurable schedule. Tanzu for MySQL also provides a restore process. See Configuring Automated Backups and Backing up and Restoring Tanzu for MySQL.

Scaling and Availability

Scaling: Operators can scale the size of VMs up, but not down. See Installing and Configuring Tanzu for MySQL.

Rolling Deployments: Tanzu for MySQL supports rolling deployments when upgrading HA clusters. Single node and leader-follower service instances are not available during upgrades. See Using a rolling schema upgrade (RSU) in MySQL for VMware Tanzu.

AZ Support: Tanzu for MySQL can be deployed to multiple zones to ensure availability if an unplanned outage of a zone occurs. See About Availability Zones.

Encryption

Transport Layer Security (TLS): Tanzu for MySQL supports TLS. Enabling TLS provisions a MySQL server with a certificate so that apps and clients can establish an encrypted connection with the data service. See Preparing for TLS.

Encrypted Communication in Transit: Tanzu for MySQL has been tested successfully with the BOSH IPsec Add-on. See IPsec Add-on.

Feedback
Please send any issues, feature requests, or questions to Support.


Release Notes

VMware recommends that you upgrade to the latest patch available for your current minor, and then
upgrade to the latest patch available for the next minor.

Tanzu Application Service has been renamed, and is now called Tanzu Platform for Cloud
Foundry. The current version of Tanzu Platform for Cloud Foundry is 10.x.

For product versions and upgrade paths, see Upgrade Planner.

Because VMware uses the Percona Distribution for MySQL, expect a time lag between Oracle releasing a
MySQL patch and VMware releasing VMware Tanzu for MySQL on Cloud Foundry containing that patch.

v10.0.2
Release Date: August 19, 2025

Changes
This release includes the following changes:

Upgrades dependencies

Updates old documentation links and prod names

Compatibility
Tanzu MySQL v10.0 has been tested with Ubuntu Jammy Stemcell v1.866 and Ubuntu Jammy (FIPS)
Stemcell v1.866.

The following components are compatible with this release:

Component Version

Stemcell 1.866*

Percona Server 8.0.42-33*

Percona XtraDB cluster 8.0.42-33*

Percona XtraBackup 8.0.35-33*

adbr-release 0.121.0*

bpm-release 1.4.20

cf-cli-release 2.3.0*


Component Version

cf-service-gateway-release 170.0.0*

count-cores-indicator-release 2.0.0

dedicated-mysql-release 0.349.0*

dedicated-mysql-adapter-release 0.549.0*

loggregator-agent-release 7.8.2

mysql-data-backup-restore-release 3.37.0

mysql-monitoring-release 10.27.0*

on-demand-service-broker-release 0.49.1*

pxc-release 1.1.2*

routing-release 0.342.0*

service-metrics-release 2.0.44

* Components marked with an asterisk have been updated

v10.0.1
Release Date: June 17, 2025

Changes
This release includes the following changes:

Upgrades dependencies, resolves CVEs

Updates links to Broadcom documentation

Compatibility
Tanzu MySQL v10.0 has been tested with Ubuntu Jammy Stemcell v1.824 and Ubuntu Jammy (FIPS)
Stemcell v1.824.

The following components are compatible with this release:

Component Version

Stemcell 1.824*

Percona Server 8.0.41-32*

Percona XtraDB cluster 8.0.41-32*

Percona XtraBackup 8.0.35-31

adbr-release 0.120.0*

bpm-release 1.4.20*

cf-cli-release 2.2.0*


Component Version

cf-service-gateway-release 169.0.0*

count-cores-indicator-release 2.0.0

dedicated-mysql-release 0.343.0*

dedicated-mysql-adapter-release 0.538.0*

loggregator-agent-release 7.8.2*

mysql-data-backup-restore-release 3.37.0*

mysql-monitoring-release 10.25.0*

on-demand-service-broker-release 0.48.2*

pxc-release 1.0.41*

routing-release 0.339.0*

service-metrics-release 2.0.44*

* Components marked with an asterisk have been updated

v10.0.0
Release Date: April 15, 2025

3.1.x is the last version of VMware Tanzu for MySQL on Cloud Foundry that supports
MySQL 5.7. Tanzu for MySQL 3.2.0 and later only support MySQL 8.0. If you are
upgrading from an earlier release of Tanzu for MySQL, you must reconfigure any plans
previously configured with "MySQL Default Version" of 5.7 to specify 8.0. For more
information about upgrading Tanzu for MySQL, see Upgrading VMware Tanzu for MySQL.

Changes
This release includes the following changes:

Feature: Incremental backup - Operators can activate and configure incremental backups to save only
data that has changed since the last successful backup

Feature: Incremental restore - Developers can apply incremental backups using the ADBR CLI to
restore to the latest point in time based on the most recent incremental backup

Feature: New metrics exposed to monitor the health of incremental backups when activated -
binlog_collector_last_archived_timestamp_seconds,
binlog_collector_files_archived_total, binlog_collector_gap_detected_total

Upgrades dependencies, resolves CVEs

Compatibility
Tanzu MySQL v10.0 has been tested with Ubuntu Jammy Stemcell v1.803 and Ubuntu Jammy (FIPS)
Stemcell v1.803.


The following components are compatible with this release:

Component Version

Stemcell 1.803*

Percona Server 8.0.41-32*

Percona XtraDB cluster 8.0.41-32*

Percona XtraBackup 8.0.35-31*

adbr-release 0.119.0*

bpm-release 1.4.17*

cf-cli-release 2.1.0*

cf-service-gateway-release 168.0.0*

count-cores-indicator-release 2.0.0

dedicated-mysql-release 0.335.0*

dedicated-mysql-adapter-release 0.529.0*

loggregator-agent-release 7.8.2*

mysql-data-backup-restore-release 3.35.0*

mysql-monitoring-release 10.23.0*

on-demand-service-broker-release 0.48.1*

pxc-release 1.0.39*

routing-release 0.334.0*

service-metrics-release 2.0.44*

* Components marked with an asterisk have been updated


VMware Tanzu for MySQL - Architecture and Planning Guide

This topic covers the following areas:

On-Demand networking

Availability options

About Leader-Follower topology

About Highly Available clusters

About Multi‑Site Replication

About Service-Gateway access

Recommended usage and limitations

On-Demand networking
This topic tells you about on-demand services, including VMware Tanzu for MySQL on Cloud Foundry.

Service network requirement


When you deploy Tanzu Platform for Cloud Foundry (Tanzu Platform for CF), you must create a statically
defined network to host the component VMs that make up the infrastructure. Components, such as Cloud
Controller and UAA, run on this infrastructure network.

On-Demand services might require you to host them on a separate network from the default network. You
can also deploy on-demand services on a separate service network to meet your own security
requirements.

Tanzu Platform for CF supports dynamic networking. You can use dynamic networking with asynchronous
service provisioning to define dynamically-provisioned service networks. For more information, see Default
network and service network.

On-Demand services are enabled by default on all networks. You can create separate networks to host
services in BOSH Director, if required. You can select which network hosts on-demand service instances
when you configure the tile for that service.

Default network and Service network


On-demand Tanzu for MySQL services use BOSH to dynamically deploy VMs and create single-tenant
service instances in a dedicated network. On-demand services use the dynamically-provisioned service


network to host single-tenant worker VMs. These worker VMs run as service instances within development
spaces.

This on-demand architecture has the following advantages:

Developers can provision IaaS resources for their services instances when the instances are
created. This removes the need for operators to pre-provision a fixed amount of IaaS resources
when they deploy the service broker.

Service instances run on a dedicated VM and do not share VMs with unrelated processes. This
removes the “noisy neighbor” problem, where an app monopolizes resources on a shared cluster.

Single-tenant services can support regulatory compliance requirements where sensitive data must
be separated across different machines.

An on-demand service separates operations between the default network and the service network. Shared
service components, such as executive controllers and databases, Cloud Controller, UAA, and other on-
demand components, run on the default network. Worker pools deployed to specific spaces run on the
service network.

The following diagram shows worker VMs in an on-demand service instance running on a separate services
network, while other components run on the default network.


Required networking rules for on-demand services


Before deploying a service tile that uses the on-demand service broker (ODB), you must create networking
rules to enable components to communicate with ODB. For instructions for creating networking rules, see
the documentation for your IaaS.

The following table lists key components and their responsibilities in the on-demand architecture.

Key Components Component Responsibilities

BOSH Director Creates and updates service instances as instructed by ODB.

BOSH Agent Adds an agent on every VM that it deploys. The agent listens for instructions from the BOSH Director
and executes those instructions. The agent receives job specifications from the BOSH Director and
uses them to assign a role or job to the VM.

BOSH UAA Issues OAuth2 tokens for clients to use when they act on behalf of BOSH users.

Tanzu Platform for Cloud Foundry Contains the apps that consume services.

ODB Instructs BOSH to create and update services. Connects to services to create bindings.

Deployed service Runs the given service. For example, a deployed TCF-MySQL service instance runs the TCF-
instance MySQL service.

Required networking rules for Tanzu for MySQL


Regardless of the specific network layout, you must set network rules.

To ensure that connections are open, see the following table:

Each rule below is listed as source component -> destination component (default TCP ports), followed by notes.

BOSH Agent -> BOSH Director (default TCP port 4222): The BOSH Agent runs on every VM in the system, including the BOSH Director VM. The BOSH Agent initiates the connection with the BOSH Director. The default port is not configurable. The communication between these components is two-way.

Broker and service instances -> Doppler on Tanzu Platform for Cloud Foundry (default TCP port 8082): This port is for metrics.

Deployed apps on Tanzu Platform for Cloud Foundry -> MySQL service instances (default TCP port 3306): This port is for general-use, app-specific tasks. In addition to configuring your IaaS, create a security group for the MySQL service instance.

ODB -> BOSH Director and BOSH UAA (default TCP ports 25555 for the BOSH Director, 8443 for UAA, and 8844 for CredHub): The default ports are not configurable.

ODB -> MySQL service instances (default TCP ports 8443 and 3306): This connection is for administrative tasks. Avoid opening general-use, app-specific ports for this connection.

ODB -> Tanzu Platform for Cloud Foundry (default TCP port 8443): The default port is not configurable.

Tanzu Platform for Cloud Foundry -> ODB (default TCP port 8080): This port allows Tanzu Platform for Cloud Foundry to communicate with the ODB component.

Tanzu Platform for Cloud Foundry -> ODB (default TCP port 2345): This port allows Tanzu Platform for Cloud Foundry to communicate with the ODB component so that the ApplicationDataBackupRestore (adbr) API can take backups.

Deployed apps on Tanzu Platform for Cloud Foundry -> Runtime CredHub (default TCP port 8844): This port is needed if secure service binding credentials are enabled. For information, see Configure Security.

Tanzu Platform for Cloud Foundry -> MySQL service instances (default TCP port 8853): This port is used for DNS to run health checks against service instances.
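One way to verify that these rules are in place is a quick TCP reachability probe run from a VM on the relevant network. The sketch below is illustrative and not part of the product; the hostnames are placeholders that you would replace with your deployment's actual addresses and ports.

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS resolution failures.
        return False

# Placeholder targets -- substitute the addresses used in your deployment.
required_rules = [
    ("bosh-director.example.internal", 25555),  # ODB -> BOSH Director
    ("uaa.example.internal", 8443),             # ODB -> BOSH UAA
    ("mysql-instance.example.internal", 3306),  # deployed apps -> MySQL
]

for host, port in required_rules:
    state = "open" if port_reachable(host, port) else "blocked or unreachable"
    print(f"{host}:{port} {state}")
```

Run this once per source network; a "blocked or unreachable" result points at a missing IaaS firewall rule or security group.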

Required networking rules for Leader-Follower plans


If you are using a leader-follower service plan, you must set leader-follower network rules in addition to the
networking rules required for Tanzu for MySQL.

To ensure that connections are open, see the following table:

Leader VM -> Follower VM (default TCP ports 8443 and 8081): These ports are needed if leader-follower is enabled. For more information, see Networking Rules. The communication between these VMs is two-way.

Required networking rules for highly available (HA) cluster plans


If you are using an HA cluster service plan, you must set HA network rules in addition to the networking
rules required for Tanzu for MySQL.

To ensure that connections are open, see the following table:

Tanzu Platform for Cloud Foundry -> MySQL service instances (default TCP port 8083): This port is needed to monitor cluster health with Switchboard. For more information, see Monitoring node health (HA cluster).

Jumpbox VM -> Tanzu Platform for Cloud Foundry UAA (default TCP port 8443): This port is needed so that the replication canary can create a UAA client for sending email notifications. For more information, see About the replication canary.

HA cluster node -> HA cluster node (default TCP ports 4567, 4568, and 4444): These ports are needed to maintain network connectivity between nodes in an HA cluster. For more information, see Firewall Configuration in the Percona documentation. The communication between these VMs is two-way.

Galera healthcheck -> Galera healthcheck (default TCP port 9200): This port is for monitoring the health of nodes in an HA cluster. The communication between these VMs is two-way.

Required networking rules for Multi‑Site Replication


If you are using multi‑site replication, you must set multi-site network rules for both foundations in addition
to the networking rules required for Tanzu for MySQL.

To ensure that connections are open in both foundations, see the following table:

Leader VM -> Follower VM (default TCP ports 8081, 8443, 18443, and 3306): These ports are needed to enable replication between service instances in two different foundations. The communication between these VMs is two-way.

Availability options
This topic tells you about the topologies for VMware Tanzu for MySQL on Cloud Foundry and explains the
kind of availability each provides.

VMware Tanzu for MySQL on Cloud Foundry Topologies


The topologies offered by Tanzu for MySQL are:

Single node: This one-VM topology is inexpensive and good for testing. You can use this topology
for apps where high availability is not important.

Leader-follower: This two-VM topology gives you redundancy through failover of the leader VM to
the follower VM. For more information, see About Leader-Follower.

Highly available (HA) cluster: This three-VM plus jumpbox cluster uses a patched Galera cluster,
Percona XtraDB Cluster (PXC), to provide the greatest possible availability. For more information,
see About Highly Available Clusters.

Multi-site replication: This two-VM topology enables you to provision the leader and follower VMs in two different foundations. For more information about how to provision leader and follower VMs, see About multi-site replication.

Pros and cons


The following table lists some key characteristics of the topologies to help you decide which one is best for
your developers.

Single node (one VM)
Pros: Simple. Least expensive. Straightforward operator experience. Easy for the developer.
Cons: No redundancy. All data since the last backup can be lost. Data recovery requires restore from backup.

Leader-follower (two VMs)
Pros: Two copies of the data. Data recovery is faster than single node.
Cons: Less flexible tuning than single node. Developers require some technical understanding. Operator must initiate failover.

Highly available cluster (three VMs and a jumpbox VM)
Pros: Tightest RPO and RTO for downtime, including downtime for upgrades. See RPOs and RTOs.
Cons: Less flexible tuning than single node. Developers require moderate technical understanding.

Multi-site replication (two VMs)
Pros: Two copies of the data. Data recovery is faster than single node. Resilience to data center outages and upgrades. Developers can initiate failover.
Cons: Less flexible tuning than single node. Developers require some technical understanding.

Highly Available Multi-Site Leader (six VMs)
Pros: Two copies of the data. Data recovery is faster than single node. Resilience to data center outages. Resilience to upgrades. Resilience to VM failures within a data center. Developers can initiate failover.
Cons: Most expensive. Less flexible tuning than single node. Developers require some technical understanding. Increased network traffic.
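The per-topology VM counts translate directly into a rough capacity-planning helper. The sketch below is illustrative (the names are invented); it counts only the database and jumpbox VMs per service instance and ignores broker and errand VMs.

```python
# VMs deployed per service instance for each topology described above.
VMS_PER_INSTANCE = {
    "single-node": 1,
    "leader-follower": 2,
    "ha-cluster": 4,                # three cluster nodes plus a jumpbox VM
    "multi-site-replication": 2,    # one single-node instance per foundation
    "ha-multi-site-leader": 6,
}

def vms_required(topology: str, instance_count: int) -> int:
    """Estimate how many VMs N on-demand instances of a topology consume."""
    return VMS_PER_INSTANCE[topology] * instance_count

print(vms_required("leader-follower", 10))  # 20
print(vms_required("ha-cluster", 5))        # 20
```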

RPOs and RTOs


Recovery point objective (RPO) and recovery time objective (RTO) are important considerations for
availability.

The following tables describe the RPOs and RTOs for the topologies.


RPOs
This table compares the recovery point objectives for the topologies:

Single Node: Planned maintenance: zero. Unplanned downtime: zero. Data recovery: time since last backup.

Leader-Follower: Planned maintenance: zero. Unplanned downtime: replica lag, or zero if in sync mode. Data recovery: the time to initialize the follower VM, or zero if in sync mode.

Highly Available Cluster: Planned maintenance: zero. Unplanned downtime: almost zero*. Data recovery: almost zero*.

Multi-site replication: Planned maintenance: zero. Unplanned downtime: replica lag. Data recovery: replica lag.

Highly Available Multi-Site Leader: Planned maintenance: zero. Unplanned downtime: almost zero*. Data recovery: almost zero*.

*Database clients are notified that incomplete transactions are not saved.

RTOs
This table compares the recovery time objectives for the topologies:

Single Node: Planned maintenance: time required to recreate the VM and reconnect to storage. Unplanned downtime: time required to recreate the VM and reconnect to storage. Data recovery: time to restore from backup.

Leader-Follower: Planned maintenance: time required depends on the type of maintenance. Unplanned downtime: time required to restore the VM, or time for the operator to initiate failover and for failover to complete. Data recovery: time for the operator to initiate failover and for failover to complete.

Highly Available Cluster: Planned maintenance: almost zero*. Unplanned downtime: almost zero*. Data recovery: almost zero*.

Multi-site replication: Planned maintenance: time for the developer to initiate switchover and for switchover to complete. Unplanned downtime: time for the developer to initiate failover and for failover to complete. Data recovery: time for the developer to initiate failover and for failover to complete.

Highly Available Multi-Site Leader: Planned maintenance: almost zero*. Unplanned downtime: almost zero*. Data recovery: almost zero*.
*This includes time for apps to reconnect to the MySQL service instance.

Risk
When you choose a topology, risk is a factor to consider. Risk encompasses the likelihood of:

Operators being interrupted to perform disaster recovery

Encountering issues because of the complexity of the topology and technology

Single node topology has the lowest risk. Highly available clusters have the highest risk.


Single node: Low
Leader-follower: Medium
Highly available cluster: High
Multi-site replication: Medium-high
Highly Available Multi-Site Leader: High

About Leader-Follower topology


This topic describes how the leader-follower topology works in VMware Tanzu for MySQL on Cloud Foundry
and contains information to help you decide whether to enable or use leader-follower.

Overview on Leader-Follower Topology


The leader-follower topology increases the availability of a MySQL database by deploying two MySQL VMs
in two separate availability zones (AZs). When a developer creates a leader-follower service instance, the
on-demand broker deploys a leader VM in one AZ, and a follower VM in another AZ.

A leader-follower topology enables operators to initiate a failover and send app traffic to the follower if the
leader fails. This ensures that the app bound to the MySQL database continues to function normally.

The follower VM exists only to increase availability; it is not exposed to developers for increasing read throughput. Developers who want increased read throughput can configure workload profiles. For more information, see About Workload Types.

The following diagram shows a leader-follower service instance in two availability zones (AZs).


The leader and follower nodes are each shown in their own AZ. The app sends traffic to the leader node,
and any data that is written to the leader is replicated to the follower. The leader and the follower each have
their own disk.


About Failover
Tanzu for MySQL focuses on data consistency rather than availability. Therefore, it does not trigger failover
automatically, but relies on operators to trigger a failover. When an operator triggers a failover, the follower is
promoted to the leader and app traffic is sent to the follower.

For more information, see Triggering a Leader-Follower Failover.

About Synchronous Replication


By default, any data that is written to the leader is asynchronously replicated to the follower. Tanzu for
MySQL also supports synchronous (“sync”) replication.

In sync replication, data does not get committed to the leader node until the follower acknowledges the
commit and can replicate it. By default, the timeout for sync replication is set to approximately 292 million
years. This means that the leader always waits for the follower to confirm receipt of the transaction.

For information about enabling sync replication, see Synchronous Replication for Leader-Follower.
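The "approximately 292 million years" figure appears to come from the largest millisecond timeout a signed 64-bit value can hold, 2^63 − 1 ms; that derivation is our inference rather than something the product documentation states, but the arithmetic checks out:

```python
# Largest signed 64-bit integer, interpreted as a timeout in milliseconds.
timeout_ms = 2**63 - 1

seconds = timeout_ms / 1000
years = seconds / (60 * 60 * 24 * 365.25)  # using the Julian year

print(f"{years / 1e6:.0f} million years")
```

In practice this means the leader effectively waits forever for the follower to acknowledge the transaction.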

About Leader-Follower Errands


Tanzu for MySQL automates orchestrating the standard lifecycle of creating, updating, and deleting leader-
follower service instances.

In addition, Tanzu for MySQL provides three leader-follower errands. An operator can use them to control
the lifecycle of a leader-follower service instance and manually intervene, when necessary, without being an
expert at MySQL.

For more information, see configure-leader-follower in Running Errands.

Infrastructure Requirements for Leader-Follower Deployments

Leader-follower instances have additional infrastructure requirements compared to singleton instances, as described in the following sections.

Capacity Planning
When calculating IaaS usage, you must take into account that each leader-follower instance requires two
VMs. Therefore, the resources used for a leader-follower-enabled plan must be doubled. For more
information, see Resource Usage Planning for On-Demand Plans.

Availability Zones
To minimize the impact of an availability zone (AZ) outage and to remove single points of failure, VMware
recommends that you provision three AZs when using leader-follower deployments. With three AZs, the
MySQL VMs are deployed to two of the AZs and the broker is deployed to the third AZ.

For more information, see Availability Using Multiple AZs in VMware Tanzu for MySQL on Cloud Foundry
Recommended Usage and Limitations.

Networking Rules


In addition to the standard networking rules needed for Tanzu for MySQL, the operator must ensure leader-
follower-specific network rules are also set up as follows:

Leader-follower VMs bidirectionally communicate with each other over port 8008 for orchestration.

Leader-follower VMs bidirectionally communicate with each other over port 3306 for replication.

For information about the standard networking rules, see Required Networking Rules for On-Demand
Services.

About highly available clusters


This topic describes how Highly Available (HA) clusters work in VMware Tanzu for MySQL on Cloud
Foundry and provides information to help you decide whether to use HA clusters.

Highly available cluster topology


In an HA cluster topology, a cluster consists of three nodes. Each node contains the same set of data
synchronized across nodes. Data is simultaneously replicated across nodes or not written at all if
replication fails even on a single node.

With three nodes, HA clusters are highly available and resilient to failure: if one node loses contact with the other two, the remaining two nodes stay in contact and continue to accept transactions. Because two out of three nodes is a majority, those two nodes can form the primary component.

For more information about cluster availability, see Percona XtraDB Cluster: Quorum and Availability of the
cluster in the Percona XtraDB Cluster documentation.

In HA topology, the Tanzu for MySQL service uses the following:

Three MySQL servers running Percona XtraDB as the MySQL engine and Galera to keep them in
sync.

Three Switchboard proxies that are co-located on the MySQL nodes. The proxies gracefully handle
failure of nodes, which enables fast failover to other nodes within the cluster. For more information,
see MySQL Proxy.

A jumpbox VM called mysql-monitor for monitoring, diagnosing and backing up MySQL nodes.
You can run the mysql-diag tool on the mysql-monitor VM to view the status of your cluster
nodes. For more information about mysql-diag, see Running mysql-diag.

The following diagram shows an HA cluster in three availability zones (AZs).



The diagram shows an app communicating with a MySQL cluster using BOSH DNS. Each of the three
MySQL nodes is located in its own AZ. Each MySQL node contains a proxy and a server, and has its own
disk. The MySQL nodes communicate with each other across AZs, and data is replicated from the primary
to the secondary databases.

Traffic from apps and clients is sent round robin to the proxies on all nodes over BOSH DNS. The proxies
direct their traffic to the server on the primary node.

If the primary server fails, a secondary node is promoted to become the new primary. The proxies are then
updated to send traffic to the new primary.

For more information about other availability options supported by Tanzu for MySQL, see Availability
Options.
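The promotion behavior described above can be illustrated with a toy model. This is only a sketch of the routing idea, not how Switchboard is actually implemented:

```python
class ProxyPool:
    """Toy model: every proxy forwards client traffic to the current primary;
    when the primary fails, the next healthy node is promoted."""

    def __init__(self, nodes):
        self.nodes = list(nodes)  # nodes[0] is the current primary

    def route(self) -> str:
        """All client connections are directed to the primary node."""
        return self.nodes[0]

    def fail_primary(self) -> str:
        """Drop the failed primary and promote the next healthy node."""
        self.nodes.pop(0)
        return self.nodes[0]

pool = ProxyPool(["mysql/0", "mysql/1", "mysql/2"])
print(pool.route())        # mysql/0
pool.fail_primary()
print(pool.route())        # mysql/1 -- traffic now goes to the new primary
```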

Using highly available deployments as a multi-site leader


You can configure an HA cluster running MySQL 8.0.x (or greater) to act as the leader in a multi-site
replication configuration. This configuration establishes a follower service instance of type multi‑site
replication in a second foundation, and establishes replication from your leader HA cluster to the follower
multi‑site replication instance.

For more information, see About multi-site replication.

Before you can use a highly available deployment as a multi-site leader, the operator must
enable Service-Gateway. You must create the leader instance with Service-Gateway
access. For general information about Service-Gateway access, including architecture and
use cases, see About Service-Gateway access.


Infrastructure requirements for highly available deployments

Before you decide to use HA clusters for Tanzu for MySQL service plans, consider the following requirements.

Capacity planning
When you calculate IaaS usage, you must take into account that each HA instance requires three VMs.
Therefore, the resources used for an HA plan must be tripled. For more information, see Resource Usage
Planning for On-Demand Plans.

Availability zones
To minimize the impact of an availability zone (AZ) outage and to remove single points of failure, VMware
recommends that you provision three AZs if using HA deployments. With three AZs, nodes are deployed to
separate AZs.

For more information, see Availability Using Multiple AZs in VMware Tanzu for MySQL on Cloud Foundry
Recommended Usage and Limitations.

Networking Rules
In addition to the standard networking rules needed for Tanzu for MySQL, you must also configure HA
cluster-specific network rules.

For information about HA cluster specific networking rules, see Required Networking Rules for Highly
Available Cluster Plans.

Highly available cluster limitations


When deployed with HA topology, Tanzu for MySQL runs three nodes. This cluster arrangement imposes
some limitations that do not apply to single node or leader-follower MySQL database servers. For more
information about the difference between single node, leader-follower, and HA cluster topologies, see
Availability Options.

HA clusters perform validations at startup and during runtime to prevent you from using MySQL features
that are not supported. If a validation fails during startup, the server is halted and throws an error. If a
validation fails during runtime, the operation is denied and the server throws an error.

These validations are designed to ensure optimal operation for common cluster setups that do not require
experimental features and do not rely on operations that are not supported by HA clusters.

For more information about HA limitations, see Percona XtraDB Cluster Limitations in the Percona XtraDB
Cluster documentation.

Storage engine limitations


HA clusters support only the InnoDB storage engine. InnoDB is the default storage engine for new tables. Pre-existing tables that are not using InnoDB are at risk because they are not replicated within a cluster.


Large DDL statements can lock all schemas. This can be mitigated by using the Rolling Schema
Upgrade (RSU) method. For instructions on how to use the RSU method, see Using a rolling
schema upgrade (RSU).

Large transaction sizes can inhibit the performance of the cluster and apps using the cluster
because they are buffered in memory.

You cannot run a DML statement in parallel with a DDL statement if both statements affect the same tables. If they are executed in parallel, the DML and DDL statements are both applied immediately to the cluster, and this causes errors.

Architecture limitations
All tables must have a primary key. You can use multi-column primary keys. This is because HA
clusters replicate using row-based replication. This ensures unique rows on each instance.

Explicit table locking is not supported. For more information, see EXPLICIT TABLE LOCKING in
the Percona XtraDB Cluster documentation.

By default, auto-increment variables are not sequential and each node has gaps in IDs. This
prevents auto-increment replication conflicts across the cluster. For more information, see
wsrep_auto_increment_control in the Percona XtraDB Cluster documentation.

You cannot change the session behavior of auto-increment variables. This is because the HA
cluster controls the behavior of these variables. For example, if an app runs SET SESSION
auto_increment_increment, the cluster ignores the change.
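The gap behavior can be seen with the standard offset/increment scheme: with auto_increment_increment equal to the cluster size and a distinct auto_increment_offset per node, each node generates IDs that never collide with another node's. The helper below is a hypothetical illustration of that arithmetic, not the server's code:

```python
def node_ids(offset: int, cluster_size: int, count: int) -> list:
    """IDs a node generates when auto_increment_increment == cluster_size
    and auto_increment_offset == offset (1-based node offset)."""
    return [offset + i * cluster_size for i in range(count)]

# In a three-node cluster, each node hands out a disjoint, gapped sequence:
print(node_ids(1, 3, 4))  # [1, 4, 7, 10]
print(node_ids(2, 3, 4))  # [2, 5, 8, 11]
print(node_ids(3, 3, 4))  # [3, 6, 9, 12]
```

Rows inserted on different nodes therefore get non-sequential IDs with gaps, but no replication conflicts.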

In InnoDB, some transactions can cause deadlocks. You can minimize deadlocks in your apps by
rewriting transactions and SQL statements. For more information about deadlocks, see Deadlocks
in InnoDB and How to Minimize and Handle Deadlocks in the MySQL documentation.

Table partitioning can cause performance issues in the cluster. This is a result of the implicit table locks that are used when running table partition commands.

Networking limitations
Round-trip latency between database nodes must be less than five seconds. Latency exceeding
this results in a network partition. If more than half of the nodes are partitioned, the cluster loses
quorum and becomes unusable until manually bootstrapped. For more information about
bootstrapping HA clusters, see Bootstrapping.
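The quorum rule behind this limitation is simple majority voting. A minimal sketch, illustrative only and not Galera's implementation:

```python
def has_quorum(reachable: int, cluster_size: int) -> bool:
    """A partition can keep serving writes only if it holds a strict
    majority of the cluster's nodes."""
    return reachable > cluster_size / 2

# Three-node HA cluster:
print(has_quorum(2, 3))  # True  -- two connected nodes keep quorum
print(has_quorum(1, 3))  # False -- an isolated node stops accepting writes
```

This is why an even split (for example, one of two nodes) also loses quorum, and why three nodes across three AZs is the recommended layout.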

MySQL Server defaults for HA components


This section describes some defaults that Tanzu for MySQL applies to HA components.

State snapshot transfer process


When a new node is added to or rejoins a cluster, a primary node from the cluster is designated as the state
donor. The new node synchronizes with the donor through the State Snapshot Transfer (SST) process.

Tanzu for MySQL uses XtraBackup for SST, which lets the state donor node continue accepting reads and writes during the transfer. Galera clusters default to using rsync to perform SST, which is fast, but rsync blocks requests on the donor during the transfer. For more information, see Percona XtraBackup SST Configuration in the Percona XtraDB Cluster documentation.

Large data file splitting enabled


By default, HA clusters split large LOAD DATA commands into small, manageable units. This deviates from the standard behavior for MySQL.

For more information, see wsrep_load_data_splitting in the Percona XtraDB Cluster documentation.

Maximum transaction sizes


These are the maximum transaction sizes set for HA clusters:

The maximum write-set size that is allowed in HA clusters is 2 GB. For more information, see
wsrep_max_ws_size.

The maximum number of rows each write-set can contain has a default setting of 0, which is used
to indicate “no limit.” For more information, see wsrep_max_ws_rows.

MySQL proxy
Tanzu for MySQL uses a proxy to send client connections to the healthy MySQL database cluster nodes in
a highly available cluster plan. Using a proxy handles failure of nodes, enabling fast failover to other nodes
within the cluster. When a node becomes unhealthy, the proxy closes all connections to the unhealthy node
and re-routes all subsequent connections to a healthy node.

The proxy used in Tanzu for MySQL is Switchboard. Switchboard was developed to replace HAProxy as the
proxy tier for the high availability cluster for MySQL databases in Tanzu for MySQL.


Switchboard offers the following features:

MySQL Server Access


MySQL clients communicate with nodes through this network port. These connections are
automatically passed through to the nodes.

Switchboard and API

Operators can connect to Switchboard to view the state of the nodes. For more information about
monitoring proxy health status, see Monitoring Node Health.

About multi-site replication


This topic tells you how multi-site replication works in VMware Tanzu for MySQL on Cloud Foundry and
contains information to help you decide whether to use multi-site replication.

Overview of Multi-Site Replication


Multi-site replication in Tanzu for MySQL is a disaster recovery solution that lets developers provision two
instances across two foundations, and then configure cross-foundation replication from the “leader” instance
to the “follower” instance. Operators can configure the two foundations to be in the same data center or
spread across multiple data centers or geographical regions.

The foundation types are:

Primary Foundation: This foundation is deployed in your main data center. Generally, this data
center receives the majority of app traffic. Tanzu for MySQL assumes that the leader is deployed
on this foundation when it is healthy.

Secondary Foundation: This foundation is deployed in your failover data center. Generally, this
data center receives less app traffic than the primary foundation, or no traffic at all. Tanzu for
MySQL assumes that the follower is deployed on this foundation unless a developer triggers a
failover.

For information about enabling multi-site replication, see Preparing for multi-site replication.

For information for developers about using multi-site replication, see Using Tanzu for MySQL for multi-site
replication.

Multi-Site Replication Plans


In your secondary foundation, you always use multi‑site replication for the replication follower instance.

In your primary foundation, you have a choice of service plan types for your replication leader instance:

Multi-site replication: This provisions a single-node instance that can support multi-site replication. The resulting configuration resembles a leader-follower service instance, but with the two single-node instances deployed and replicating across two separate foundations.



High-Availability Cluster (MySQL 8.0 only): This provisions an HA cluster of three database VM nodes in your leader foundation.


An HA cluster offers greater resilience to in-foundation outages. For more information about HA
clusters, see About highly available clusters.

Tanzu for MySQL supports using a High-availability cluster service instance as a multi-site leader only for clusters running MySQL version 8.0.x or greater. The MySQL version is set in the plan definition section of Tanzu Operations Manager. See also Configuring Service Plans.

To support multi-site replication from an HA cluster leader, the operator must introduce TCP Routing by enabling Service-Gateway access for Tanzu for MySQL. For more information about Service-Gateway access, see About Service-Gateway access and Enabling Service-Gateway access.

Service instances of type "Leader-Follower" are not used for multi-site replication. This
page uses the terms "leader" and "follower" to describe configuration of the two supported
service instance types "Multi-Site Replication," and "HA cluster." For more information
about Leader-Follower service instances, see About Leader-Follower topology.

Multi-site replication benefits


Multi-site replication has the following benefits:

Resilience for service instances:

Developers can trigger a failover to maintain app uptime during a data center outage. For more
information, see Triggering multi-site replication failover and switchover.

Data center upgrades with zero downtime:

Operators can upgrade data centers without taking databases offline by triggering a switchover first.

Support for multiple cloud deployment models:

Operators can configure multi-site replication with a single cloud or hybrid cloud deployment model.
Both deployment models have the same end-user experience.

Support for active-passive and app-layer active-active disaster recovery:

For more information, see About Active-Passive topology and About App-Layer Active-Active
topology.

Active-passive topology
In an active-passive topology, when all foundations and workloads are healthy, all app traffic is directed to
the primary foundation. The secondary foundation receives no app traffic.

VMware recommends using this topology when your secondary foundation is scaled down, generally
inactive, and does not receive significant app traffic.

For information about active-passive failover and switchover, see About failover and switchover.

The following diagram describes the active-passive topology in a healthy state:



As shown in the diagram:

The global DNS load balancer (GLB) directs traffic to the app in the primary foundation. This app
issues transactions to the leader service instance.

The follower service instance in the secondary foundation continuously replicates data from the
leader service instance in the primary foundation.

App-layer active-active topology


In an app-layer active-active topology, when all foundations and workloads are healthy, app traffic is
directed to both the primary and secondary foundation.

VMware recommends using this topology when both your primary and secondary foundations are scaled up
and are expected to serve traffic.

For information about app-layer active-active failover and switchover, see About failover and switchover.

The following diagram describes the app-layer active-active topology in a healthy state:



As shown in the diagram:

The GLB directs traffic to the apps in the primary and secondary foundations. The apps in the
primary and secondary foundation issue transactions to the leader service instance.

The app in the secondary foundation issues transactions to the leader as follows:

1. The app connects to the follower service instance.

2. The app issues transactions to the follower service instance.

3. The follower service instance forwards the transactions to the leader.

The follower service instance in the secondary foundation continuously replicates data from the
leader service instance in the primary foundation.

In the Multi-Site Replication plan, the app-layer active-active topology does not enable multi-primary (bidirectional) replication, so the follower service instance is read-only. Apps can only write to the leader.

Failover and switchover


Tanzu for MySQL prioritizes data consistency over availability. Therefore, Tanzu for MySQL does not trigger failover or switchover automatically; developers must manually trigger a failover or switchover. For instructions about triggering failover or switchover, see Triggering multi-site replication failover and switchover.


The following table describes when you can trigger a failover or switchover:

Trigger failover if:

The leader MySQL service instance has crashed or is unhealthy and is not automatically recovered by BOSH.

The leader MySQL service instance is destroyed and unrecoverable.

The availability zone (AZ) for the leader MySQL service instance experiences an unexpected outage.

The data center for the leader MySQL service instance experiences an unexpected outage.

Trigger switchover if:

Both the leader and the follower MySQL service instances are healthy.

You plan to do foundation or data center upgrades or maintenance on your primary site; for example, data center hardware upgrades that may disable your leader service instance.

You plan to degrade performance on the primary site.
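The table reduces to a simple decision rule. The helper below only illustrates that rule; the function is invented for this sketch, and Tanzu for MySQL itself never triggers failover or switchover automatically:

```python
def recommended_action(leader_healthy: bool,
                       follower_healthy: bool,
                       planned_primary_maintenance: bool) -> str:
    """Summarize the failover-versus-switchover guidance as a decision rule."""
    if not leader_healthy:
        # Leader crashed, destroyed, or its AZ/data center is in an outage.
        return "failover"
    if follower_healthy and planned_primary_maintenance:
        # Both sides healthy and planned work on the primary site.
        return "switchover"
    return "no action"

print(recommended_action(False, True, False))  # failover
print(recommended_action(True, True, True))    # switchover
```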

For information about multi-site replication topologies in a healthy state, see About active-passive topology and About app-layer active-active topology.

Failover
In both the active-passive and app-layer active-active topologies, if a developer triggers a failover, all app
traffic is directed to the leader in the secondary foundation and the primary foundation receives no app
traffic.

The following diagram describes what happens when you trigger a failover for a multi-site replication
topology:



As shown in the diagram:

The GLB directs traffic to the app in the secondary foundation. This app issues transactions to the
leader service instance in the secondary foundation.

The leader service instance in the secondary foundation does not replicate data to another service
instance.

Active-passive switchover
In an active-passive topology, if a developer triggers a switchover, all app traffic is directed to the leader in
the secondary foundation. The primary foundation receives no app traffic.

The following diagram describes what happens when you trigger a switchover in an active-passive topology:


As shown in the diagram:

The GLB directs traffic to the app in the secondary foundation. This app issues transactions to the
leader service instance in the secondary foundation.

The leader service instance in the secondary foundation replicates data to the follower service
instance in the primary foundation.

App-layer active-active switchover


If a developer triggers a switchover, app traffic is still directed to both the primary and secondary
foundation. However, the leader service instance is in the secondary foundation.

The following diagram describes what happens when you trigger a switchover in an active-active topology:



As shown in the diagram:

The GLB directs traffic to the apps in the primary and secondary foundations. The apps in both
foundations issue transactions to the leader service instance in the secondary foundation.

The app in the primary foundation issues transactions to the leader as follows:

1. The app connects to the follower service instance.

2. The app issues transactions to the follower service instance.

3. The follower service instance forwards the transactions to the leader.

The follower service instance in the primary foundation continuously replicates data from the leader
service instance in the secondary foundation.

About enabling external access


If external-access is enabled, replication can occur between two non-routable foundations. Replication
traffic goes through tcp-router.

The following diagram shows what happens when you enable external-access for multi-site replication:



As shown in the diagram:

Before external-access is enabled, replication traffic goes directly to the follower.

After external-access is enabled, replication traffic goes through tcp-router to the follower.

Infrastructure requirements for multi-site configuration


Before you deploy a multi-site configuration, consider the following requirements.

Capacity planning
When calculating IaaS usage, account for the fact that each multi-site configuration deploys two
service instances: a primary of type multi‑site replication or HA cluster, and a secondary of type
multi‑site replication.

Each multi‑site replication service instance deploys one VM. Therefore the smallest multi-site
configuration deploys two VMs (one per foundation), twice the footprint of a service instance
created from a single-node plan.

Each HA cluster service instance deploys four VMs. Therefore the largest multi-site configuration
deploys five VMs, five times the footprint of a service instance created from a single-node plan.

These configurations initially deploy a single VM in your secondary foundation. During failover and
switchover operations (which promote your secondary instance into a primary instance), you can
elect to scale up your multi‑site replication secondary into an HA cluster primary. A switchover
from an HA cluster primary downscales that instance to a multi‑site replication secondary. These
scaling operations affect the distribution of VMs across your two foundations.

For more information, see Setting limits for on-demand service instances and Persistent Disk Usage.
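The VM counts described above reduce to simple arithmetic. The following sketch is illustrative only: the instance-type labels are informal, and the per-type VM counts (one VM per multi‑site replication instance, four per HA cluster instance) come from the bullets above.

```python
# Per-instance VM counts taken from the capacity-planning bullets above.
VMS_PER_TYPE = {"multi-site": 1, "ha-cluster": 4}

def multi_site_footprint(primary: str, secondary: str = "multi-site") -> int:
    """Total VMs across both foundations for one multi-site configuration."""
    return VMS_PER_TYPE[primary] + VMS_PER_TYPE[secondary]

print(multi_site_footprint("multi-site"))   # smallest configuration: 2 VMs
print(multi_site_footprint("ha-cluster"))   # largest configuration: 5 VMs
```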

Networking requirements


Multi‑site replication instances deployed in data centers that are geographically farther apart
experience higher replication latency.

For information about the standard networking rules, see Required Networking Rules for multi-site
replication.

Availability zones
Because you can scale up and down between single-node multi-site leaders and Highly Available (HA)
leaders, take extra care to ensure that the availability zones (AZs) are compatible between the two
plans.

To minimize impact of an availability zone (AZ) outage and to remove single points of failure, VMware
recommends that you provision three AZs when using HA deployments. With three AZs, nodes are deployed
to separate AZs.

To scale between the two plan types, the availability zones configured for the HA plan must
match those configured for the single node multi-site plan.

For more information, see Availability Using Multiple AZs in VMware Tanzu for MySQL on Cloud Foundry
Recommended Usage and Limitations.

Multi-site replication limitations


Multi-site replication has the following limitation:

Synchronous replication is not supported. For multi‑site replication plans, any data written to the
primary instance is asynchronously replicated to the secondary instance.

About Service-Gateway access


This topic describes the use of service-gateway access with VMware Tanzu for MySQL on Cloud Foundry
when using external clients that need access to a Tanzu for MySQL service instance.

Service-gateway access enables external clients that are not on the same foundation as a service instance
to connect to the service instance. This is also referred to as off-platform access because the external
clients, which are not hosted on the foundation, or platform, can access MySQL service instances that are
on the platform.

The external clients are typically apps or management tools such as MySQL Workbench. They can be on
another foundation or hosted outside of the foundation.

For related procedures, see:

Enabling Service-Gateway access

Create a service instance with Service-Gateway access

Use cases for service-gateway access in Tanzu for MySQL are:

Accessing a MySQL service instance from apps deployed in a different foundation

Using MySQL as a service for apps that are not deployed to Tanzu Platform for Cloud Foundry


Architecture
Service-gateway access to MySQL service instances uses the TCP router in Tanzu Platform for Cloud
Foundry.

Any database requests that an app makes are forwarded through DNS to a load balancer that can route
traffic from outside to inside the foundation. The load balancer opens a range of ports that are reserved for
MySQL database traffic. When an app developer creates a service instance with service-gateway access
enabled, a port from the range is provisioned for that service instance. The load balancer then forwards the
requests for this MySQL service instance to the TCP router. The TCP router internally load balances
between the MySQL service instance nodes.

The following diagram shows how the traffic is routed from apps to the MySQL service instance nodes in
this case. Apps that are hosted outside the foundation and apps that are hosted on different foundations
use this route to access the service instance. Apps running on the same foundation connect directly to the
service instance without going through the load balancer or router.

VMware Tanzu for MySQL recommended usage and


limitations
This topic describes recommended use cases of VMware Tanzu for MySQL on Cloud Foundry and
limitations you might run into.

MySQL is a powerful open-source relational database that has been used by apps since the mid-90s.
Developers have relied on MySQL as a first step to storing, processing, and sharing data. As its user
community has grown, MySQL has become a robust system capable of handling a wide variety of use
cases and very significant workloads. Unlike other traditional databases that centralize and consolidate
data, MySQL lends itself to dedicated deployment supporting the “shared nothing” concept of building apps
in the cloud.

About on-demand plans


Each on-demand plan instance is deployed to its own VM and is suitable for production workloads.

The maximum number of on-demand plan instances available to developers is set by the operator
and enforced through both a global quota and per-plan quotas.


Operators can update plan settings, including the VM size and disk size, after the plans have
been created. Operators cannot downsize the VMs or disks; downsizing can cause data loss in
pre-existing instances.

All plans deploy a single VM of the specified size with the specified disk type.

You cannot scale down the size of VMs on the plan after they are deployed. This protects against
data loss.

The default maximum number of connections is set to 750.

Resource usage planning for on-demand plans


For information about setting limits and calculating costs for on-demand service instances, see Setting
limits for on-demand service instances.

Availability using multiple AZs


Tanzu for MySQL supports configuring multiple availability zones (AZs). However, assigning multiple AZs to
MySQL jobs does not guarantee high availability.

You can assign on-demand plans to any of the configured AZs.

BOSH randomly deploys service instances across configured AZs. This minimizes the impact of an
AZ outage and removes single points of failure.

For all Tanzu for MySQL plans, select three AZs. Choosing three AZs does not increase the
number of AZs assigned to service instances to three.

Downtime during redeploys


By default, Tanzu for MySQL does a rolling deploy during upgrades or when you apply configuration
changes. For single node and leader-follower plans, this results in Tanzu for MySQL being inaccessible to
apps for a brief period of time.

If you are using an HA cluster plan, Tanzu for MySQL uses rolling redeploys and your service does not incur
downtime. You can mitigate downtime by using HA cluster plans.

For more information about downtime in Tanzu for MySQL, see RPOs and RTOs.

Persistent disk usage


Persistent disks store your application data, InnoDB redo logs, and binary logs (binlogs). If your app is
write-heavy, increase your persistent disk to accommodate the binlogs. InnoDB redo logs use a fixed amount
of disk space. HA cluster jumpboxes require enough persistent disk to hold backup artifacts during certain
database restore operations.

For the amount of persistent disk required for your service plan, see the following sections:

Single node and leader-follower

Multi-site replication

HA cluster jumpbox


HA cluster node

In the following discussion, N represents your expected application data volume in gigabytes.

Single node and leader-follower


You can configure the persistent disk size for single node or leader-follower VMs. For instructions, see
Configure service plans.

Use the following table to determine the amount of persistent disk space you need:

InnoDB Redo Logs | Total Disk Size | Minimum Total Disk Size
512 MB | N GB + 512 MB | 3 GB

The following diagram shows how persistent disk space is used:

Multi-Site Replication


You can configure the persistent disk size for Multi-Site Replication VMs. For instructions, see Configure
service plans.

Use the following table to determine the amount of persistent disk space you need:

InnoDB Redo Logs | Total Disk Size | Minimum Total Disk Size
2 GB | N GB + 2 GB | 5 GB

The following diagram shows how persistent disk space is used:

HA Cluster Jumpbox
You can configure the persistent disk size for HA cluster jumpbox VMs. For instructions, see Configure
service plans.

Use the following table to determine the amount of persistent disk space you need:


InnoDB Redo Logs | Total Disk Size | Minimum Total Disk Size
2 GB | N GB + 2 GB | 5 GB

The following diagram shows how persistent disk space is used:

HA cluster node
You can configure the persistent disk size for HA cluster node VMs. For instructions, see Configure service
plans. Remember to include binlogs in your size estimate for app data.

Use the following table to determine the amount of persistent disk space you need:

InnoDB Redo Logs | Total Disk Size | Minimum Total Disk Size
2 GB | N GB + 2 GB | 5 GB

The following diagram shows how persistent disk space is used:
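Taken together, the tables in this section reduce to one sizing rule: total disk is app data plus a fixed redo-log overhead, subject to a per-plan minimum. The following sketch assumes N (app data) is expressed in GB; the plan labels are informal shorthand, not official plan names.

```python
def required_disk_gb(app_data_gb: float, plan: str) -> float:
    """Recommended persistent disk size in GB for the given plan type."""
    if plan in ("single-node", "leader-follower"):
        redo_gb, minimum_gb = 0.5, 3.0   # 512 MB redo logs, 3 GB minimum
    else:                                 # multi-site, HA jumpbox, HA node
        redo_gb, minimum_gb = 2.0, 5.0   # 2 GB redo logs, 5 GB minimum
    return max(app_data_gb + redo_gb, minimum_gb)

print(required_disk_gb(10, "single-node"))  # 10.5
print(required_disk_gb(1, "ha-node"))       # 5.0
```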


VMware Tanzu for MySQL - Operator


Guide

This section covers the following areas:

Getting Started
Installing and configuring

Preparing for Multi‑Site Replication

Preparing for TLS

Setting limits for on-demand instances

Enabling Service-Gateway access

Controlling access to service plans by org

Managing VMware Tanzu for MySQL on Cloud Foundry


Upgrading

Backup and Restore


About Backups

Setting up Incremental Backups

Configuring Automated Backups

Manually restoring from backup

Accessing a database as an admin user

Rotating certificates

Resolving service interruptions

Running errands

Troubleshooting

Leader-Follower procedures
Triggering a Leader-Follower failover

Highly available clusters procedures


Bootstrapping

Running mysql-diag

About the replication canary

Getting Started with VMware Tanzu for MySQL - Operator


Installing and configuring


Preparing for Multi‑Site Replication

Preparing for TLS

Setting limits for on-demand instances

Enabling Service-Gateway access

Controlling access to service plans by org

Installing and configuring VMware Tanzu for MySQL


You can install, configure, and deploy the VMware Tanzu for MySQL on Cloud Foundry tile. The Tanzu for
MySQL service enables you to create and use MySQL service instances on demand.

Tanzu Operations Manager admins can use Role-Based Access Control (RBAC) to manage
which operators can make deployment changes, view credentials, and manage user roles
in Tanzu Operations Manager.
Your role permissions might not permit you to do every procedure in this topic.
For more information about roles in Tanzu Operations Manager, see Roles in Tanzu
Operations Manager.

Prerequisites
Before you install the Tanzu for MySQL tile, you must:

Create an App Security Group for Tanzu for MySQL.

Enable the BOSH Resurrector.

Prepare for TLS. This is required to enable TLS and required for Multi‑Site Replication and HA
cluster plans.

Prepare for Multi‑Site Replication. This is required only if you want to replicate data across multiple
foundations or data centers.

Create an App Security Group for Tanzu for MySQL


To enable apps running on Tanzu Platform for Cloud Foundry to communicate with the MySQL service
network, you must create an App Security Group (ASG). The ASG enables smoke tests to run when you
first install the Tanzu for MySQL service and apps to access the service after it is installed.

To create an ASG for Tanzu for MySQL:

1. Go to Tanzu Ops Manager Installation Dashboard > BOSH Director.

2. Click Create Networks.

3. Click your services network and record the CIDR.


4. Create a JSON file named mysql-asg.json using the following template:

[
{
"protocol": "tcp",
"destination": "CIDR",
"ports": "3306"
}
]

Where CIDR is the CIDR that you recorded in the previous step.

5. Create an ASG by running:

cf create-security-group p.mysql ./mysql-asg.json

6. Bind the ASG to all running apps by running:

cf bind-running-security-group p.mysql

For more information about ASGs, see App Security Groups.
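If you automate this setup, the rule file from step 4 can be generated instead of hand-edited. The sketch below emits the same JSON structure shown above; the CIDR value is a placeholder for your recorded services-network CIDR.

```python
import json

def mysql_asg(cidr: str) -> str:
    """Render the single-rule ASG that allows MySQL traffic on port 3306."""
    return json.dumps(
        [{"protocol": "tcp", "destination": cidr, "ports": "3306"}],
        indent=2,
    )

# Write this output to mysql-asg.json, then pass it to cf create-security-group.
print(mysql_asg("10.0.8.0/24"))
```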

Enable the BOSH Resurrector


VMware recommends activating the BOSH Resurrector when installing Tanzu for MySQL. The BOSH
Resurrector increases the availability of Tanzu for MySQL by restarting and resuming the MySQL service.

The BOSH Resurrector does the following:

Reacts to hardware failures and network disruptions by restarting VMs on active, stable hosts


Detects operating system failures by continuously monitoring VMs and restarts them as required

Continuously monitors the BOSH Agent running on each service instance VM and restarts the VM
as required

For more information about the BOSH Resurrector, see BOSH Resurrector.

To enable the BOSH Resurrector:

1. Go to Tanzu Ops Manager Installation Dashboard > BOSH Director.

2. Click Director Config.

3. Select the Enable VM Resurrector Plugin checkbox.

4. Click Save.

Download and import the tile


To download and import the Tanzu for MySQL tile:

1. Download the product file from Broadcom Support portal.

2. Go to the Tanzu Ops Manager Installation Dashboard and click Import a Product to upload the
product file.

3. Under Import a Product, click + next to the version number of Tanzu for MySQL. This adds the
tile to your staging area.

4. Click the newly-added Tanzu for MySQL tile to open its configuration panes.

Configuring the tile


To configure the Tanzu for MySQL tile, do the following procedures.

Configure AZs and Networks


To configure an availability zone (AZ) to run the service broker and networks for the broker and MySQL
service instances:

1. Click Assign AZs and Networks.

2. Configure the fields as follows:

Field | Instructions

Place singleton jobs in | Select the AZ that you want the MySQL broker VM to run in. The broker runs as a singleton job.

Balance other jobs in | Ignore; not used.

Network | Select a subnet for the MySQL broker. This is typically the same subnet that includes the Tanzu Platform for Cloud Foundry component VMs. This network is represented by the Default Network in the diagram in Default Network and Service Network.

Service Network | Select the subnet for the on-demand service instances. This network is represented by the Service Network in the diagram in Default Network and Service Network.

If you are adding IPsec to encrypt MySQL communication, VMware recommends that you deploy
MySQL to its own network to avoid conflicts with services that are not IPsec compatible.

You cannot change the regions or networks after you Apply Changes.

3. Click Save.

Configuring service plans


Tanzu for MySQL enables you to configure as many as 18 service plans. Each service plan has a
corresponding section in the tile configuration; for example, Plan 1, Plan 2, and so on.

By default, plans 1 through 3 are active and plans 4 through 9 are inactive. The following procedures
describe how to change these defaults.

Do not set Plan 1 to Inactive. If you deactivate Plan 1, your installation fails when you
apply changes.

Review the following information about creating plans for restoring multi-node service instances:

If you offer leader-follower or highly available (HA) cluster plans, you must configure single-node or
Multi‑Site Replication plans that can be used to restore a multi-node plan from backup.

If you offer service plans of type… | Then configure a service plan of type…

leader-follower | Single node, with the persistent disk as large as the largest leader-follower plan offered.

HA cluster | Multi‑Site Replication, with the persistent disk as large as the largest HA cluster plan offered.

For information about how multi-node service instances are restored, see Restore a Service Instance in
Backing up and Restoring VMware Tanzu for MySQL on Cloud Foundry.

Procedure for configuring service plans


For each plan that you want to use in your deployment:

1. Click the section for the plan. For example, Plan 1.

2. Select the plan for your desired topology.

Service Plan | Plan Name | Description

Multi-Site Replication | db-multi-site-small | This plan provides a small dedicated MySQL single node instance that can be configured into a Multi-Site Leader-Follower (cpu: 2, ram: 4 GB, ephemeral disk: 8 GB, persistent disk: 20 GB).

Single Node | db-small | This plan provides a small dedicated MySQL instance (cpu: 1, ram: 1 GB, ephemeral disk: 8 GB, persistent disk: 10 GB).

Leader-Follower | db-leader-follower | This plan provides a small leader-follower pair.

HA Cluster | db-ha-small | This plan provides a small dedicated MySQL HA Cluster instance (cpu: 2, ram: 2 GB, ephemeral disk: 8 GB, persistent disk: 20 GB).

3. Configure the fields as follows:

If you want to replicate data across multiple foundations or data centers, you must
configure a Multi‑Site Replication plan in both foundations using the same
configurations.

Field | Description

MySQL Default Version | MySQL 8.0 is selected as the default and only option.

Service Plan Access | Select one of the following options:
Enable (Default): Gives access to all orgs and displays the service plan to all developers in the Marketplace.
Disable: Deactivates access to all orgs and hides the service plan from all developers in the Marketplace. This deactivates creating new service instances of this plan.
Manual: Lets you manually control service access with the cf CLI. For more information, see Controlling Access to Service Plans by Org.

Plan Name | Accept the default or enter a name. This is the name that appears in the Marketplace for developers.

Plan Description | Accept the default or enter a description to help developers understand plan features. VMware recommends adding VM type details and disk size to this field.

Plan Quota | Enter the maximum number of service instances that can exist at one time. If the plan quota field is blank, the plan quota is set to the global quota by default. For information about the global quota, see Setting Limits for On-Demand Service Instances.

Paid Plan | Select this checkbox to indicate that this service plan is paid.

MySQL VM Type | Select a VM type for the MySQL nodes.

Jumpbox VM Type | Only for highly available cluster plans. Select a VM type for the MySQL jumpbox node. This VM is also called mysql-monitor.

MySQL Persistent Disk | Select a disk size. This disk stores the MySQL data. For sizing recommendations, see Persistent Disk Usage.

Jumpbox Persistent Disk | Only for highly available cluster plans. Select a disk size. This disk stores backups. For sizing recommendations, see Persistent Disk Usage.

MySQL Availability Zone(s) | BOSH deploys your service instances to the selected AZs. If more than one AZ is selected, BOSH randomizes which AZ each VM is placed in.

Plan VM Extensions | Specify a comma-separated list of supported VM extensions that you want to apply to service instances created under this plan. You can manage VM extensions in Tanzu Operations Manager or through the om CLI. For more information, see Create or Update a VM Extension or om create-vm-extension in GitHub. If you specify an extension that is not supported by Tanzu Operations Manager (not present in the BOSH cloud config), then instance creation fails.

Transitioning between two different plans requires them to be in the same AZs.
If you create Plan 1 in AZ1 and Plan 2 in AZ2, developers receive an error
and cannot continue if they try to upgrade from Plan 1 to Plan 2. This
prevents them from losing their data by orphaning their disk in AZ1.

If you anticipate configuring an HA cluster as a multi-site leader, then
assign the same set of Availability Zones (AZs) to both that HA Cluster
plan and your Multi‑Site Replication plan. This lets multi-site switchover
and failover procedures transition instances between the plans as needed.
This guidance applies both to your primary foundation (with your multi-site
leader instance) and to your secondary foundation (with your multi-site
follower instance). The foundations may have different AZs, but within
each foundation, any plans used for multi-site configuration should share a
common set of AZs.

If you need to manually migrate the data from one AZ to another, see About Data
Migration in Tanzu for MySQL.

4. Click Save.

Deactivate Service Plan (Optional)


To deactivate a service plan:

1. If the service plan has existing service instances:


1. Click the section for the plan. For example, Plan 2.

2. Under Service Plan Access, select Disable.

3. Click Save.

4. Return to the Tanzu Ops Manager Installation Dashboard and click Apply Changes to
redeploy.

5. When the deployment has completed, use the cf CLI or Apps Manager to delete all existing
service instances on the service plan.

6. Return to the Tanzu for MySQL tile configuration.

2. Click the section for the plan. For example, Plan 2.

3. Click Inactive.

4. Click Save.

Configure Global Settings


To configure global settings for all service instances:

1. Click Settings.


2. Configure the fields as follows:

Field | Instructions

Provide public IP addresses to all Service VMs | Select Yes if either of the following apply:
Your service instances need an external backup, blobstore, or syslog storage.
You have configured BOSH to use an external blobstore.

Maximum service instances | Enter the global quota for all on-demand instances summed across every on-demand plan. For information about determining global quotas, see Service Plan Recommended Usage and Limitations.

Enable off-platform access of MySQL service instances | Select Enabled and fill in the External TCP Domain and the External TCP Port Range. If required, select Enforce External Access for All Multi-Site Instances. For information about configuring off-platform access, see Enabling Service-Gateway Access.

Tags | Enter a comma-separated list of key-value pairs for tagging service instance VMs and disks. The accepted format depends on the underlying cloud provider. For example, Google Cloud Platform does not allow uppercase characters.

3. Click Save.

Configuring MySQL
To set MySQL defaults and enable developers to customize their instances:

1. Click Mysql Configuration.

2. Configure the fields as follows:

Field | Instructions

Enable Lower Case Table Names | Select this checkbox to store all table names in lowercase. This sets the MySQL server system variable lower_case_table_names to 1 on all Tanzu for MySQL instances by default. To permit developers to override this default, see the following checkbox. For more information about lower_case_table_names, see the MySQL documentation.
Before you activate this feature, ensure that all tables have lowercase names. Tables with uppercase names are inaccessible after enabling lowercase table names.

Allow Developers To Override Lower Case Table Names | Select this checkbox to permit developers to override the configured default Enable Lower Case Table Names value. For more information, see Optional Parameters for the Tanzu for MySQL Service Instances.

Enable Local Infile | Select this checkbox to activate data loading from the local file system of the client. VMware discourages selecting this checkbox. Before you activate local infile, review the security issues associated with LOAD DATA LOCAL. See the MySQL documentation.

Limit binary log disk use to 33% of disk capacity | Select this checkbox to limit binary log disk usage and prevent disk space exhaustion. Limiting the size of the MySQL binary logs reduces the risk of the persistent disk reaching capacity and causing service interruption.

Wait Timeout | Enter the amount of time in seconds that MySQL waits to close inactive connections. For more information about wait_timeout, see the MySQL documentation.

Configuring backups
To learn how backups work, see About Backups.

You must configure backups. You cannot deactivate this feature.

To configure backups:

1. Click Backups.


2. Select a Backup configuration and follow the procedure for your storage solution in the
Configuring Automated Backups topic:

Ceph or Amazon S3: Tanzu for MySQL runs an Amazon S3 client that saves backups to
an S3 bucket, a Ceph storage cluster, or another S3-compatible endpoint certified by
VMware.
For information about using Ceph or Amazon S3 for backups, see Back up to Ceph or S3.

SCP: Tanzu for MySQL runs a Secure Copy Protocol (SCP) command that secure-copies
backups to a VM or physical machine operating outside of your deployment. This is the
fastest option.
SCP enables you to securely transfer files between two hosts. You can provision the
backup machine separately from your installation.
For information about using SCP for backups, see Back up with SCP.

GCS: Tanzu for MySQL runs a Google Cloud Storage (GCS) SDK that saves backups to
a GCS bucket.
For information about using GCS for backups, see Back up to GCS.

Azure: Tanzu for MySQL runs an Azure SDK that saves backups to an Azure storage
account.
For information about using Azure for backups, see Back up to Azure Storage.

Configuring security
To configure the security settings for the MySQL service, do one or both of the following:

Enable TLS for the MySQL service

Store your service instance credentials in runtime CredHub


To enable TLS for the MySQL service:

1. Do the procedures in Preparing for TLS.

2. Click Security.

3. For Enforce application TLS, select one of the following:

State | Instructions

Unchecked | Enables developers to configure their MySQL service VMs to use TLS.
Accept only TLS v1.2 connections: Selecting this checkbox enforces TLS v1.2 as the minimum TLS version for client connections.

Checked | Enables developers to configure their MySQL service VMs to use TLS and requires all MySQL service VMs to accept only secure connections.
Accept only TLS v1.2 connections: Selecting this checkbox enforces TLS v1.2 as the minimum TLS version for client connections.

Requiring TLS breaks any apps that are not currently connecting over TLS.

Enforcing TLS v1.2 has the following effects:

Apps using MySQL Connector/J 5.1.44 and earlier might break.

For apps using MySQL Connector/J 5.1.44 to 5.1.48, developers must rebind the apps.

The TLS v1.2 restriction is a global setting, so it applies to every service instance. Any apps in the
foundation using an old version of MySQL Connector/J fail to connect to MySQL after this new
setting is applied.

4. Click Save.

5. After deploying the tile, notify your developers that they must enable TLS for their service
instances and for their apps. See Using TLS.

To store your service instance credentials in runtime CredHub:

You can store your service instance credentials in runtime CredHub instead of the Cloud Controller
database (CCDB).

For more information about runtime CredHub, see CredHub.


1. Ensure that you have configured the Tanzu Platform for CF tile to support securing service instance
credentials in runtime CredHub. For instructions, see Step 1: Configure the Tanzu Platform for CF
Tile.

2. Click Security.

3. Select the Enable Secure Service Instance Credentials checkbox.

4. Click Save.

5. After deploying the tile, notify the developers that they must unbind and rebind any existing service
instances bindings if they want to use secure service instance credentials. Instructions for the
developers follow:

1. Unbind the service instance from the app by running:

cf unbind-service APP SERVICE-INSTANCE

2. Rebind the service instance to the app by running:

cf bind-service APP SERVICE-INSTANCE

3. Restart the app to apply the new binding by running:

cf restart APP

4. Verify that the binding includes CredHub pointers in the VCAP_SERVICES environment
variable by running:

cf env APP

For example:

$ cf env my-app
Getting env variables for app my-app in org system / space example as admin...
OK
System-Provided:
{
  "VCAP_SERVICES": {
    "p.mysql": [
      {
        "credentials": {
          "credhub-ref": "/c/548966e5-e333-4d65-8773-7b4e3bb6ca97/4a246b0b-83bb-46d0-b8ac-35a93374ae67/caf6e32e-7361-4869-9a57-54ab8ae67b3f/credentials"
        },
        [...]

If a developer rebinds an app to the TCF-MySQL service after unbinding, they must also
rebind any existing custom schemas to the app. When you rebind an app, stored code,
programs, and triggers break. For more information about binding custom schemas, see
Use custom schemas.

Configuring monitoring
To activate monitoring and logging in the MySQL service:

1. Click Monitoring.

2. Configure the fields as follows and then Click Save:

Field | Instructions

Metrics Polling Interval | Enter the amount of time in seconds between the monitor polling for metrics. All service instances emit metrics about the health and status of the MySQL server.

Enable User Statistics Logging | Select this checkbox to collect user statistics. You can use these statistics to better understand server activity and identify load sources. For more information about user statistics, see the Percona documentation.

Enable Server Activity Logging | Select this checkbox to record what queries are processed using the Percona Audit Log Plugin. For more information, see the Percona documentation.
MySQL audit logs are not forwarded to the syslog server because they can contain personally identifying information (PII) and secrets.

Enable Read Only Admin User | Select this checkbox to create a read-only admin user named roadmin on each service instance. This user can be used for auditing and monitoring without risking changing any data, because roadmin cannot make changes to data.
For information about retrieving the credentials for roadmin, see Retrieve Admin and Read-Only Admin Credentials for a Service Instance. The read-only admin user is always named roadmin; however, the password varies by service instance.


Configuring system logging


To activate RFC 5424 system logging for the MySQL broker and service instance VMs:

1. In the Settings tab, click Syslog.

2. Click Yes.

3. Configure the fields as follows:


Address: Enter the IP address or hostname of the syslog server for sending logs. For
example: logmanager.example.com.

Port: Enter the port of the syslog server for sending logs. For example: 29279.

Transport Protocol: Select the protocol you want to use to send system logs. VMware
recommends using TCP.

Enable TLS: If you select TCP, you can also select to send logs encrypted over TLS.

Permitted Peer: Enter either the accepted fingerprint in SHA1 format or the name of the
remote peer. For example: *.example.com.

SSL Certificate: Enter the SSL certificates for the syslog server. This ensures that the
logs are transported securely.

If your syslog server is external to your deployment, you might need to select Provide public IP
addresses to all Service VMs on the Settings page. See Configure Global Settings.

4. Click Save.
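As context, RFC 5424 defines the line format the syslog drain uses: "<PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID STRUCTURED-DATA MSG". The Python sketch below only illustrates that layout with invented host and app names; it is not how the tile itself emits logs.

```python
from datetime import datetime, timezone

def rfc5424_line(pri, hostname, app, msg):
    """Format one RFC 5424 syslog line (NILVALUE '-' for procid, msgid, and sd)."""
    ts = datetime(2025, 1, 1, tzinfo=timezone.utc).isoformat()
    return f"<{pri}>1 {ts} {hostname} {app} - - - {msg}"

line = rfc5424_line(14, "mysql-broker-0", "bosh-syslog", "broker started")
print(line)  # <14>1 2025-01-01T00:00:00+00:00 mysql-broker-0 bosh-syslog - - - broker started
```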

Configuring service instance upgrades


This section configures the upgrade-all-service-instances errand. Tanzu for MySQL uses this errand
to upgrade service instances. For more information about the upgrade-all-service-instances errand,
see upgrade-all-service-instances.

To configure service instance upgrades:

1. Click Service Instance Upgrades.

2. Configure the fields as follows:

Number of simultaneous upgrades: Enter the maximum number of service instances that can
upgrade at the same time. The minimum value is 0 and the maximum is 1 less than the
number of BOSH workers. Increasing this value reduces the runtime of service instance
upgrades. To determine the number of BOSH workers, go to BOSH Director > Director Config
and locate the value of Director Workers.

Number of upgrade canary instances: Enter the number of service instances to upgrade
first before upgrading the rest of the instances. Increasing this value enables service
instance upgrades to fail faster.

BOSH Upgrade Timeout: Enter the amount of time in seconds to wait for BOSH to respond
before timing out when upgrading service instances. Decreasing this value causes service
instance upgrades to fail faster.

3. Click Save.
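As a worked example of the simultaneous-upgrades limit: if BOSH Director Workers is 5, the field accepts values from 0 through 4. A trivial sketch of that rule:

```python
def max_simultaneous_upgrades(bosh_workers):
    """The allowed maximum is one less than the number of BOSH workers (never below 0)."""
    return max(bosh_workers - 1, 0)

print(max_simultaneous_upgrades(5))  # 4
```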

Review errands (Optional)


Errands are scripts that run at specific times to do various tasks. Tanzu for MySQL can run errands to
manage the broker and service instances. You do not need to change the default configurations for errands.

The Delete All Service Instances and Deregister Broker errand does necessary cleanup
tasks when you delete the Tanzu for MySQL tile or redefine plans.

VMware recommends that you do not set this errand to Off. Setting this errand to Off can
cause problems when attempting to install the tile again or redefine plans.

Tanzu for MySQL uses the following types of errands:

Post-Deploy Errands: These errands run when you click Apply Changes.

Pre-Delete Errands: These errands run before you delete the Tanzu for MySQL tile.

Other uses for errands:

Tanzu for MySQL uses errands to configure leader-follower service instances. For information about
leader-follower errands, see Errands.

You can use errands when troubleshooting the broker or service instances. For information about
using errands for troubleshooting, see Run Service Broker Errands to Manage Brokers and
Instances.

To review errands:

1. Click Errands.


2. Review the settings for the following errands:

Post-Deploy Errands

Register On-demand MySQL Broker: Registers the broker with the Cloud Controller and
lists it in the Marketplace.

Smoke Tests: Validates basic MySQL operations.

Validate no IP-based bindings in use before upgrade-all-service-instances: Checks if
service instances have app bindings or service keys using IP addresses, or a TLS
certificate that is signed with an IP address. If either is true, the installation fails.

Upgrade all On-demand MySQL Service Instances: Upgrades existing instances of a service
to its latest installed version. If you want developers to be able to individually
upgrade service instances, set this errand to Off. For more information about individual
service instance upgrades, see About Individual Service Instance Upgrades.

Pre-Delete Errands

Delete All Service Instances and Deregister Broker: Deletes all service instances and
deregisters the broker.

Verifying stemcell version and applying all changes


To verify your stemcell version and apply all changes:

1. Click Stemcell Library. For more information about using the Stemcell Library, see Importing and
Managing Stemcells.

2. Verify and, if necessary, import a new stemcell version.

3. Go to Tanzu Ops Manager Dashboard > Review Pending Changes.

4. Click Apply Changes.

For information about the stemcells that are compatible with Tanzu for MySQL, see Release Notes or
Broadcom Support portal.

Preparing for multi-site replication


This topic tells you how to prepare foundations for multi-site replication using VMware Tanzu for MySQL on
Cloud Foundry.

For information about multi-site architecture, see About multi-site replication.

Prerequisites
Before you configure multi-site replication for Tanzu for MySQL, you must confirm that you have done the
following:

1. Created two Tanzu Platform for Cloud Foundry foundations that support a current version of Tanzu
for MySQL.

2. Deployed both foundations.

1. Routable Foundations: The leader and follower service instances must be able to connect
to each other on their respective IP addresses. You must verify that the CIDR ranges for
the leader and follower nodes do not overlap.

2. Non-routable Foundations: Follow the steps provided in Enabling Service-Gateway
Access. For more information about this architecture, see Enable External Access.


3. Determined which of the following disaster recovery strategies you want to use for your
foundations. For more information, see:

About active-passive topology

About App-Layer Active-Active topology

4. Configured a global DNS load balancer to point to the two Tanzu Platform for CF foundations and
their local load balancers. For more information, see Configure your GLB.

Enable Multi-Site replication


After two foundations are created, you can enable multi-site replication by doing the following:

1. Configure networking rules. For more information, see Required networking rules for multi-site
replication.

2. Enable TLS for both foundations. When you paste the contents of your TLS CA certificate in
Preparing for TLS, paste both TLS CA certificates.

3. Configure a multi‑site replication plan on both foundations. See Configure service plans.

4. If you want to allow developers to use high-availability instances as multi-site leaders, configure a
high-availability plan on both foundations. Note that you must assign the same Availability Zones
(AZs) to both your high-availability plan and any multi-site replication plan. For more information,
see Configuring Service Plans.

Preparing for TLS


This topic gives you an overview of how to prepare for using Transport Layer Security (TLS) with VMware
Tanzu for MySQL on Cloud Foundry to secure communication between apps and service instances.

This procedure involves restarting all of the VMs in your deployment to apply a CA
certificate. The operation can take a long time to complete.

TLS-enabled client connections to MySQL require TLS v1.2.

When you use TLS, a TCF-MySQL server with a certificate is provisioned. With this certificate, apps and
clients can establish an encrypted connection with the service.

Using BOSH CredHub, Tanzu Operations Manager generates a server certificate using a Certificate
Authority (CA) certificate.

If you do not want to use the CA certificate generated, you can provide your own CA certificate and add it
through the CredHub CLI. For an overview of the purpose and capabilities of the CredHub component, see
CredHub.

Apps and clients use this CA certificate to verify that the server certificate is trustworthy. A trustworthy
server certificate allows apps and clients to securely communicate with the TCF-MySQL server.

Tanzu Platform for Cloud Foundry shares the CA certificate public component:


Tanzu Platform for CF provisions a copy of the CA certificate in the trusted store of each
container’s operating system.
Apps written in Java and Spring automatically discover the CA certificate in the trusted store.
Apps not written in Java and Spring can retrieve the public component of the CA certificate from
VCAP_SERVICES and use it to establish an encrypted connection with the data service.
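For apps that build their own TLS connection, the key requirement above is TLS v1.2 or later. The Python sketch below shows one standard-library way to enforce that; it is a minimal illustration, not the complete connection logic for any particular MySQL driver.

```python
import ssl

# Client-side context that refuses anything older than TLS v1.2,
# matching the requirement for TLS-enabled MySQL connections.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_2

# A real app would also trust the CA retrieved from VCAP_SERVICES, for example:
# context.load_verify_locations(cadata=ca_certificate_pem)

print(context.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
```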

Generated or Provided CA Certificate


Tanzu Operations Manager can generate a CA certificate for TLS to use.

Alternatively, you can choose to provide your own CA certificate for TLS to use.

Workflow
The workflow you follow to prepare for TLS depends on whether you use the CA certificate generated by
Tanzu Operations Manager or if you bring your own CA certificate.

Using the Generated CA Certificate


To use the CA certificate that Tanzu Operations Manager generates through CredHub, follow this workflow
to enable TLS for TCF-MySQL:

1. An operator adds the CredHub-generated certificate to Tanzu Operations Manager by performing the
procedures:

1. Find the CredHub Credentials in Tanzu Operations Manager

2. Add the CA Certificate

2. An operator enables TLS in the tile configuration while installing TCF-MySQL. See Configure TLS in
TCF-MySQL.

3. A developer edits their app to communicate securely with the TCF-MySQL server:

For Java and Spring apps: See Activate TLS for Java and Spring Apps.

For all other apps: See Activate TLS for Non-Spring Apps.

Providing Your Own CA Certificate


To provide your own CA certificate instead of using the one that Tanzu Operations Manager generates,
follow this workflow to enable TLS for VMware Tanzu for MySQL on Cloud Foundry:

1. An operator provides a CA certificate to CredHub by performing the procedures:

1. Find the CredHub Credentials in Tanzu Operations Manager.

2. Set a Custom CA Certificate.

3. Add the CA Certificate.

2. An operator enables TLS in the tile configuration while installing TCF-MySQL. See Configure TLS in
TCF-MySQL.

3. A developer edits their app to communicate securely with the TCF-MySQL server:

For Java and Spring apps: See Activate TLS for Java and Spring Apps.


For all other apps: See Activate TLS for Non-Spring Apps.

Find the CredHub Credentials in Tanzu Operations Manager

To find the BOSH CredHub client name and client secret:

1. In the Tanzu Ops Manager Installation Dashboard, click the BOSH Director tile.

2. Click the Credentials tab.

3. In the BOSH Director section, click the link to the BOSH Commandline Credentials.


4. Record the values for BOSH_CLIENT and BOSH_CLIENT_SECRET.

Here is an example of the credentials page:

{"credential":"BOSH_CLIENT=ops_manager
BOSH_CLIENT_SECRET=abCdE1FgHIjkL2m3n-3PqrsT4EUVwXy5
BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate
BOSH_ENVIRONMENT=10.0.0.5 bosh "}


The BOSH_CLIENT is the BOSH CredHub client name and the BOSH_CLIENT_SECRET is the BOSH
CredHub client secret.

Set a Custom CA Certificate


Do this procedure if you are providing your own custom CA certificate instead of using the one generated by
Tanzu Operations Manager or CredHub.

Prerequisite: To complete this procedure, you must have the CredHub CLI. For installation instructions,
see credhub-cli on GitHub.

To add a custom CA Certificate to CredHub:

1. Record the information needed to log in to the BOSH Director VM by following the procedure in
Gather Credential and IP Address Information.

2. Log in to the Tanzu Operations Manager VM by following the procedure in Log in to the Tanzu
Operations Manager VM with SSH.

3. Set the API target of the CredHub CLI as your CredHub server by running:

credhub api \
https://BOSH-DIRECTOR-IP:8844 \
--ca-cert=/var/tempest/workspaces/default/root_ca_certificate

Where BOSH-DIRECTOR-IP is the IP address of the BOSH Director VM.

For example:

$ credhub api \
  https://10.0.0.5:8844 \
  --ca-cert=/var/tempest/workspaces/default/root_ca_certificate

4. Log in to CredHub by running:

credhub login \
--client-name=CREDHUB-CLIENT-NAME \
--client-secret=CREDHUB-CLIENT-SECRET

Where:

CREDHUB-CLIENT-NAME is the value you recorded for BOSH_CLIENT in Find the CredHub
Credentials in Tanzu Operations Manager.

CREDHUB-CLIENT-SECRET is the value you recorded for BOSH_CLIENT_SECRET in Find the


CredHub Credentials in Tanzu Operations Manager.

For example:

$ credhub login \
  --client-name=credhub \
  --client-secret=abcdefghijklm123456789

5. Use the CredHub CLI to provide a CA certificate. Your deployment can have multiple CA
certificates. VMware recommends a dedicated CA certificate for services. Create a new file called
root.pem with the contents of the certificate. Then, run the following command, specifying the path
to root.pem and the private key for the certificate. For example:

$ credhub set \
  --name="/services/tls_ca" \
  --type="certificate" \
  --certificate=./root.pem \
  --private=ERKSOSMFF...

Add the CA Certificate


Prerequisite: To complete this procedure, you must have the CredHub CLI. For installation instructions,
see credhub-cli on GitHub.

To add the CA Certificate to Tanzu Operations Manager:

1. Record the CA certificate by running:

credhub get \
--name=/services/tls_ca \
-k ca

2. Go to Tanzu Ops Manager Installation Dashboard > BOSH Director > Security.

3. Append the contents of the CA certificate you recorded in an earlier step into Trusted Certificates.

4. Click Save.

Configure TLS in TCF-MySQL


To configure TLS in the TCF-MySQL tile:

1. Follow the procedure in Configure Security.

2. Navigate to Tanzu Ops Manager Installation Dashboard > Review Pending Changes.

3. Ensure that the CA certificate is deployed to all VMs by selecting:

Tanzu Platform for Cloud Foundry

VMware Tanzu for MySQL on Cloud Foundry

The Upgrade All On-Demand Service Instances errand

4. Click Apply Changes. This restarts all the VMs in your deployment and applies your CA certificate.


Setting limits for on-demand service instances


On-demand provisioning is intended to accelerate app development by eliminating the need for development
teams to request and wait for operators to create a service instance. However, to control costs, operations
teams and administrators must ensure responsible use of resources.

There are several ways to control the provisioning of on-demand service instances by setting various
quotas at these levels:

Global

Plan

Org

Space

After you set quotas, you can:

View current org and space-level quotas

Monitor quota use and service instance count

Calculate resource costs for on-demand plans

Create global-level quotas


Each on-demand service has a separate service broker. A global quota at the service level sets the
maximum number of service instances that can be created by a given service broker. If a service has more
than one plan, then the number of service instances for all plans combined cannot exceed the global quota
for the service.

The operator sets a global quota for each service tile independently. For example, if you have two service
tiles, you must set a separate global service quota for each one.

When the global quota is reached for a service, no more instances of that service can be created unless the
quota is increased, or if some instances of that service are deleted.
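The effect of a global quota can be pictured as a simple admission check: creation succeeds only while the instance count across all plans stays below the quota. This Python sketch is purely illustrative, not the broker's actual code:

```python
def can_create_instance(instances_per_plan, global_quota):
    """Allow a new instance only if all plans combined are under the global quota."""
    return sum(instances_per_plan.values()) < global_quota

# Two plans sharing one global quota of 10: the combined count is what matters.
counts = {"db-small": 6, "db-large": 4}
print(can_create_instance(counts, 10))  # False: the quota is already reached
```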

Create plan-level quotas


A service might offer one or more plans. You can set a separate quota per plan so that instances of that
plan cannot exceed the plan quota. For a service with multiple plans, the total number of instances created
for all plans combined cannot exceed the global quota for the service. If the plan quota field is blank, the
plan quota is set to the global quota by default.

When the plan quota is reached, no more instances of that plan can be created unless the plan quota is
increased or some instances of that plan are deleted.

Create and set org-level quotas


An org-level quota applies to all on-demand services and sets the maximum number of service instances
that can be created in an organization, within the foundation. For example, if you set your org-level quota to
100, developers can create up to 100 service instances in that org using any combination of on-demand
services.


When this quota is reached, no more service instances of any kind can be created in the org unless the
quota is increased or some service instances are deleted.

To create and set an org-level quota, do the following:

1. Run the following command to create a quota for service instances at the org level:

cf create-quota QUOTA-NAME -m TOTAL-MEMORY -i INSTANCE-MEMORY -r ROUTES -s SERVICE-INSTANCES --allow-paid-service-plans

Where:

QUOTA-NAME - A name for this quota

TOTAL-MEMORY - Maximum memory used by all service instances combined

INSTANCE-MEMORY - Maximum memory used by any single service instance

ROUTES - Maximum number of routes allowed for all service instances combined

SERVICE-INSTANCES - Maximum number of service instances allowed for the org

For example:

cf create-quota myquota -m 1024mb -i 16gb -r 30 -s 50 --allow-paid-service-plans

2. Associate the quota you just created with a specific org by running the following command:

cf set-quota ORG-NAME QUOTA-NAME

For example:

cf set-quota dev_org myquota

For more information about managing org-level quotas, see Creating and modifying quota plans.

Create and set space-level quotas


A space-level service quota applies to all on-demand services and sets the maximum number of service
instances that can be created within a given space in a foundation. For example, if you set your space-level
quota to 100, developers can create up to 100 service instances in that space using any combination of on-
demand services.

When this quota is reached, no more service instances of any kind can be created in the space unless the
quota is updated or some service instances are deleted.

To create and set a space-level quota, do the following:

1. Run the following command to create the quota:

cf create-space-quota QUOTA-NAME -m TOTAL-MEMORY -i INSTANCE-MEMORY -r ROUTES -s SERVICE-INSTANCES --allow-paid-service-plans

Where:


QUOTA-NAME - A name for this quota

TOTAL-MEMORY - Maximum memory used by all service instances combined

INSTANCE-MEMORY - Maximum memory used by any single service instance

ROUTES - Maximum number of routes allowed for all service instances combined

SERVICE-INSTANCES - Maximum number of service instances allowed for the space

For example:

cf create-space-quota myspacequota -m 1024mb -i 16gb -r 30 -s 50 --allow-paid-service-plans

2. Associate the quota you previously created with a specific space by running the following
command:

cf set-space-quota SPACE-NAME QUOTA-NAME

For example:

cf set-space-quota myspace myspacequota

For more information on managing space-level quotas, see Creating and modifying quota plans.

View current org and space-level quotas


To view org quotas, run the following command:

cf org ORG-NAME

To view space quotas, run the following command:

cf space SPACE-NAME

For more information about managing org and space-level quotas, see Creating and modifying quota plans.

Monitor quota use and service instance count


Service-level and plan-level quota use and total number of service instances are available through the on-
demand broker metrics emitted to Loggregator.

on-demand-broker/SERVICE-NAME/quota_remaining: Quota remaining for all instances across all plans

on-demand-broker/SERVICE-NAME/PLAN-NAME/quota_remaining: Quota remaining for a specific plan

on-demand-broker/SERVICE-NAME/total_instances: Total instances created across all plans

on-demand-broker/SERVICE-NAME/PLAN-NAME/total_instances: Total instances created for a specific plan


Quota metrics are not emitted if no quota has been set.

You can also view service instance usage information in Apps Manager. For more information, see
Reporting instance usage with Apps Manager.
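Given the metric naming scheme above, per-plan readings can be picked out by splitting the metric path. A small Python sketch with invented sample values:

```python
def per_plan_totals(metrics):
    """Collect total_instances readings per plan from on-demand-broker metric names."""
    totals = {}
    for name, value in metrics.items():
        parts = name.split("/")
        # Per-plan metrics look like: on-demand-broker/SERVICE-NAME/PLAN-NAME/total_instances
        if len(parts) == 4 and parts[3] == "total_instances":
            totals[parts[2]] = value
    return totals

sample = {
    "on-demand-broker/p.mysql/db-small/total_instances": 12,
    "on-demand-broker/p.mysql/db-large/total_instances": 3,
    "on-demand-broker/p.mysql/total_instances": 15,
}
print(per_plan_totals(sample))  # {'db-small': 12, 'db-large': 3}
```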

Calculate resource costs for on-demand plans


On-demand plans use dedicated VMs, disks, and various other resources from an IaaS such as AWS. To
calculate maximum resource cost for plans individually or combined, multiply the quota by the cost of the
resources selected in the plan configurations. The specific costs depend on your IaaS.

To view configurations for your Tanzu for MySQL on-demand plan, do the following:

1. Go to Tanzu Ops Manager Installation Dashboard > Tanzu for MySQL > Settings.

2. Click the section for the plan you want to view; for example, Plan 1.

The following images show examples that include the VM type and persistent disk selected for the server
VMs, and the quota for the plan.

Example: Single Node and Leader Follower

Example: Galera


Although you can limit on-demand instances with plan quotas and a global quota, as
described in the preceding topics, IaaS resource usage still varies based on the number of
on-demand instances provisioned.

Calculate maximum resource cost per on-demand plan


To calculate the maximum cost of VMs and persistent disk for each plan, do the following calculation:

plan quota x cost of selected resources

For example, if you selected the options in the previous image, you have selected a VM type micro and a
persistent disk type 20 GB, and the plan quota is 15. The VM and persistent disk types have an associated
cost for the IaaS you are using. Therefore, to calculate the maximum cost of resources for this plan,
multiply the cost of the resources selected by the plan quota:

(15 x cost of micro VM type) + (15 x cost of 20 GB persistent disk) = max cost per plan

Calculate maximum resource cost for all on-demand plans


To calculate the maximum cost for all plans combined, add together the maximum costs for each plan. This
assumes that the sum of your individual plan quotas is less than the global quota.


For example:

(plan1 quota x plan1 resource cost) + (plan2 quota x plan2 resource cost) = max cost for all plans

Calculate actual resource cost of all on-demand plans


To calculate the current actual resource cost across all your on-demand plans:

1. Find the number of instances currently provisioned for each active plan by looking at the
total_instances metric for that plan.

2. Multiply the total_instances count for each plan by that plan's resource costs. Record the costs
for each plan.

3. Add up the costs to get your total current resource costs.

For example:

(plan1 total_instances x plan1 resource cost) + (plan2 total_instances x plan2 resource cost) =
current cost for all plans
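The three calculations above can be combined into one short sketch. The VM and disk rates below are placeholders, not real IaaS prices; substitute your own rates.

```python
def plan_cost(instance_count, vm_cost, disk_cost):
    """Cost for one plan: instance count x (VM cost + persistent disk cost)."""
    return instance_count * (vm_cost + disk_cost)

# Placeholder monthly rates: micro VM $10, 20 GB persistent disk $2.
plan1_max = plan_cost(15, 10, 2)   # max cost: quota of 15 -> 180
plan2_max = plan_cost(5, 40, 8)    # second plan, quota of 5 -> 240
print(plan1_max + plan2_max)       # max cost for all plans: 420

# Actual cost uses total_instances instead of the quota, e.g. 7 provisioned:
print(plan_cost(7, 10, 2))         # 84
```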

Enabling Service-Gateway access


This topic tells you how to enable service-gateway access in Tanzu for MySQL.

Service-gateway access enables external clients to connect to a MySQL service. The clients are typically
apps running external to the foundation, apps on a different foundation, and management tools such as
MySQL Workbench.

For a more detailed overview, see About Service-Gateway access.

To enable service-gateway access for an on-demand offering:

1. Activate TCP routing using the Tanzu Platform for Cloud Foundry tile.

2. Configure the firewall to allow incoming traffic to the TCP router.

3. Configure the load balancer in the IaaS to redirect traffic to the TCP router.

4. Create a DNS record that maps to the load balancer.

5. Activate service-gateway access.

VMware recommends that you configure Transport Layer Security (TLS) alongside service-
gateway access to prevent man-in-the-middle attacks. For instructions for configuring TLS,
see Configure security.

Enable TCP Routing using the tile


TCP routing is turned off by default. To activate TCP routing in Tanzu Platform for Cloud Foundry:

1. Go to the Networking pane of the Tanzu Platform for CF tile.

2. Under Enable TCP requests to apps through specific ports on the TCP router, select Enable
TCP routing.


3. For TCP routing ports, enter one or more ports to which the load balancer forwards requests. For
example, 1024 for a single port or 1024–1123 for a range of ports.

4. Go to Tanzu Ops Manager Installation Dashboard > Review Pending Changes.

5. Click Apply Changes for the Tanzu Platform for CF tile to create the TCP router.

6. From the Status tab of the Tanzu Platform for CF tile, record the cloud identity (CID) of the TCP
router.

Configure the firewall to allow incoming traffic to the TCP Router

The steps to allow traffic to the TCP router depend on your IaaS:

Allow incoming traffic to the TCP router VM created in Activate TCP Routing using the Tanzu
Platform for Cloud Foundry tile.

For more detailed information, see the documentation for your IaaS.

Configure the Load Balancer in the IaaS to redirect traffic to the TCP Router

To configure the load balancer:

1. Use the IaaS console and the CID you recorded earlier to find the VM that runs the TCP router.

2. Create an external TCP load balancer that points to the VM running the TCP router.

3. Configure a distinct external port range that does not overlap with any of the following:

The TCP networking port or port range that you configured in Activate TCP Routing using
the Tanzu Platform for CF tile.

The port range configured for service-gateway access for other service tiles, such as
VMware Tanzu RabbitMQ.

For example, if your TCP routing port range is 1024–1123, and ports 1124–1223 are reserved for
Tanzu RabbitMQ service instances, then your load balancer port range for service gateway must
not overlap 1024-1223.

Each Tanzu for MySQL service instance using service-gateway access requires a
unique port. Ensure that the port range configured has enough capacity to
accommodate all the service instances you need. The start port and the end port
are both inclusive.

4. Record this port range.
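Because each service instance needs its own port and both ends of the range are inclusive, the capacity of a range is end - start + 1. For example:

```python
def port_range_capacity(start, end):
    """Number of service instances a service-gateway port range can serve (ends inclusive)."""
    return end - start + 1

# A load balancer range of 1124-1223 serves at most 100 service instances.
print(port_range_capacity(1124, 1223))  # 100
```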


Create a DNS record that maps to the Load Balancer


To create a DNS record and prepare to map it:

1. Following the documentation for your IaaS, create a new DNS record of type A that maps to the
external IP address of the load balancer created in Configure the Load Balancer in the IaaS to
redirect traffic to the TCP Router.

2. Record the domain used for this DNS record.

Enable Service-Gateway access


When service-gateway access is enabled, all developers have the ability to create a service instance that is
available to apps outside the foundation.

For Tanzu for MySQL, service-gateway access is enabled globally. Access is not tied to certain service
plans, as in Tanzu RabbitMQ.

To configure service-gateway access for the foundation:

1. Go to the Settings tab in the Tanzu for MySQL tile.

2. Scroll down and click Settings on the left.

3. Under Enable off-platform access of MySQL service instances, click Enabled.


This activates the feature and makes the External TCP Domain, External TCP Port Range, and
Enable External Access for All Multi-Site Instances fields visible.

4. Configure the fields as follows:

External TCP Domain: Set to the DNS entry for the external load balancer that you
recorded in Create a DNS Record That Maps to the Load Balancer.

External TCP Port Range: Set to the range of ports you configured for the external load
balancer for MySQL service instances in Configure the Load Balancer in the IaaS to
Redirect Traffic to the TCP Router.

If service-gateway access is deactivated and then activated again, app developers must
create new service keys to obtain a new set of credentials for service-gateway access.

5. Go back to Tanzu Ops Manager Installation Dashboard > Review Pending Changes.

6. Click Apply Changes to apply the changes to the Tanzu for MySQL tile.

Turn off Service-Gateway access

If service-gateway access is turned off and then enabled again, app developers must
create new service keys to obtain a new set of credentials for service-gateway access.

To turn off service-gateway access:

1. Go to the Settings pane in the Tanzu for MySQL tile.

2. For Enable off-platform access of MySQL service instances, click Disabled.

3. Go back to Tanzu Ops Manager Installation Dashboard > Review Pending Changes.

4. Click Apply Changes to apply the changes to the Tanzu for MySQL tile.

Developer workflow


For instructions for app developers, see Create a service instance with Service-Gateway access.

Controlling access to service plans by org


You can control access to service plans by org in VMware Tanzu for MySQL on Cloud Foundry. You can
also set limits for the number of service instances globally and per plan.

For more information, see Setting limits for on-demand instances.

Control access to service plans by org


You can control which Cloud Foundry orgs are able to access specific service plans in Tanzu for MySQL.
By default, active service plans are visible to all orgs. Controlling which orgs have access to a specific
service plan enables you to ensure that the resource-intensive service plans are available only to the orgs
that explicitly need them.

To configure Tanzu for MySQL to control service-plan access:

1. Set the Service Plan Access field to Manual on any active service plan.

For more information, see Configure active service plans.

2. Click Save.

3. Return to the Tanzu Ops Manager Installation Dashboard and click Review Pending Changes.

4. Click Apply Changes.

5. For each org that you want to use the service plan, do the following:

1. Log in to the Cloud Foundry Command Line Interface (cf CLI) as an admin user:

cf login

2. Activate service access to the org:

cf enable-service-access p.mysql -p PLAN -o ORGANIZATION

Where:

PLAN: The name of the specific plan to enable. This is a plan for which you set Service Plan Access to Manual in step 1.

ORGANIZATION: The name of the org that needs access to PLAN

For example,

$ cf enable-service-access p.mysql -o prodteam -p db-large
Enabling access to plan db-large of service p.mysql for org prodteam as admin...
OK

The org can now use the plan.

For information about modifying and viewing service-plan access, see Managing access to service plans.

Operator Guide - Managing VMware Tanzu for MySQL


The following topics cover subjects related to managing VMware Tanzu for MySQL on Cloud Foundry:

Upgrading VMware Tanzu for MySQL

Backup and Restore


About Backups

Setting up Incremental Backups

Configuring Automated Backups

Manually restoring from backup

Accessing a database as an admin user

Rotating certificates

Resolving service interruptions

Running service broker errands

Troubleshooting

Leader-Follower procedures
Triggering a Leader-Follower failover

Highly available cluster procedures


Bootstrapping

Running mysql-diag

About the replication canary

Data at rest full-disk encryption

Upgrading VMware Tanzu for MySQL


This topic explains how to upgrade the VMware Tanzu for MySQL on Cloud Foundry service and existing
service instances. It also explains the service interruptions that can result from service changes and
upgrades and from failures at the process, VM, and IaaS levels.

For product versions and upgrade paths, see Upgrade Planner.

Upgrade Tanzu for MySQL


To upgrade the Tanzu for MySQL service, follow the Tanzu Operations Manager process that you use to
install the service for the first time. Your configuration settings migrate to the new version automatically.

To upgrade Tanzu for MySQL:

1. Review the Release Notes for the version you are upgrading to.

2. Download the Ubuntu Jammy stemcell from Broadcom Support portal, and import it into the Tanzu
Operations Manager Stemcell Library. For instructions, see Verify stemcell version and apply all
changes.

3. Download the desired version of Tanzu for MySQL from Broadcom Support portal.


4. Go to the Tanzu Ops Manager Installation Dashboard and click Import a Product to upload the
product file.

5. Under the Import a Product button, click + next to Tanzu for MySQL. This adds the tile to your
staging area.

6. Click the newly-added Tanzu for MySQL tile to review its configuration panes. Click Save on any
panes where you make changes.

To decrease the runtime for service instance upgrades, configure the upgrade-all-service-instances errand in the tile. For instructions about configuring this errand, see Configure service instance upgrades.

7. (Optional) If you want developers to be able to upgrade service instances individually, go to the Errands pane. For Upgrade all On-demand MySQL Service Instances, select Off.

By default, the upgrade-all-service-instances errand runs after each upgrade. For more
information, see About individual service instance upgrades.

As of Tanzu for MySQL v3.2.0, only MySQL 8.0 is supported. When upgrading,
you must update any plans previously configured with a MySQL Default Version
of 5.7, and specify 8.0.

On upgrade, all previously created MySQL v5.7 service instances are updated to MySQL v8.0, either when the platform operator runs the upgrade-all-service-instances errand or when the developer upgrades an individual service instance using the cf CLI.

8. Go to Tanzu Ops Manager Dashboard > Review Pending Changes. For more information about
this Tanzu Operations Manager page, see Reviewing pending product changes.

9. For the Tanzu for MySQL tile, enable the Register On-demand MySQL Broker errand if it is not
already enabled.

10. Click Apply Changes.

Upgrading the Tanzu for MySQL service and service instances can temporarily interrupt the service. For
more information, see Service interruptions.

About individual service instance upgrades


After you upgrade the Tanzu for MySQL tile, existing service instances must be upgraded to use the latest version of the tile. Developers cannot create new bindings to service instances that have not been upgraded.

To decrease the runtime for service instance upgrades, developers can upgrade individual on-demand
service instances using the Cloud Foundry Command Line Interface (cf CLI).

Individual service instance upgrade is possible only if:

You deactivate the upgrade-all-service-instances errand when upgrading the tile.


By default, Tanzu for MySQL runs this errand when you upgrade the tile, but this operation can take a long time.


You ensure that the register-broker errand is run during upgrades.


For more information about the register broker errand, see register-broker.

Developers can upgrade individual service instances by following the procedure in Upgrade an Individual
Service Instance.
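For reference, a developer performs this upgrade with a single cf CLI command. This is a sketch; the instance name my-mysql-db is a placeholder:

```shell
# Upgrade one on-demand instance to the latest installed tile version
cf update-service my-mysql-db --upgrade

# Monitor progress until the update is reported as succeeded
cf service my-mysql-db
```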

Upgrading from MySQL 5.7 to 8.0


After upgrading to Tanzu for MySQL v3.x, you can upgrade your service instances from Percona 5.7 to
Percona 8.0. Percona 8.0 includes significant changes. Tanzu for MySQL v3.x addresses the major
upgrade incompatibilities, but in some cases, client applications that are bound to a service instance that
uses Percona v5.7 might have compatibility issues when the service instance is upgraded to use Percona
8.0.

To better understand the possible compatibility issues, consider the following options:

Read the compatibility documentation from Percona. This provides details about where application
SQL might fail to execute correctly.

Consider making a backup of the current (Percona 5.7) database and restoring from the backup to a
new 8.0 service instance in a non-production environment. See Backup and Restore. Then connect
the relevant application and execute the appropriate tests for functionality and performance, if
available.

Percona has a set of compatibility testing tools that can be used to run existing queries against an
upgraded schema. This process provides a mechanism to thoroughly test the application against
the upgraded database. It can highlight potential incompatibility errors and performance changes.

MySQL also offers a comprehensive upgrade guide with a special section highlighting best
practices.

The tools and documentation listed here are optional sources of feedback that can give you additional confidence when evaluating an upgrade from MySQL 5.7 to 8.0. No individual tool can guarantee a successful upgrade. It is your responsibility to fully review your unique database configuration.

About MySQL 5.7 to 8.0 upgrades


Tanzu for MySQL tile versions 3.2 and later support only MySQL 8.0. The version drop-down menu only
lists 8.0, but may show a blank version on plans previously configured for MySQL 5.7. You must select 8.0
from the drop-down menu on the Tanzu Operations Manager configuration pages before you can update
service instances for these plans. This forced manual selection prevents you from accidentally upgrading
your users from 5.7 to 8.0.


If you change a plan's MySQL version from 5.7 to 8.0, any subsequent update to a service instance created with that plan upgrades that instance's MySQL from 5.7 to 8.0. This includes both batch updates to all service instances through the upgrade-all-service-instances errand, and individual service instance updates using the cf update-service --upgrade command.
Service instances can be upgraded; downgrades are not supported.

When upgrading a highly available (HA) cluster from MySQL 5.7 to MySQL 8.0, VMware
recommends that you first validate the health of the HA cluster by running the mysql-diag
tool. To run mysql-diag, see Running mysql-diag.
A "highly available (HA) cluster" refers to any service instance created from a tile plan
configured with the "HA cluster" topology.

If mysql-diag reports that the HA cluster is unhealthy, follow the mysql-diag recommendations to bring the cluster to a healthy state before upgrading the cluster to MySQL 8.0.

Service interruptions
Service changes, upgrades, and failures at the process, VM, and IaaS level can cause outages in the
Tanzu for MySQL service.

Read this section if:

You are planning an upgrade.

You are experiencing a service interruption and are wondering why.

You are planning to update or change a service instance and want to know if it might cause a
service interruption.


Stemcell or service update


An operator updates a stemcell version or their version of Tanzu for MySQL.

Impact: Apps lose access to the MySQL service while Tanzu Operations Manager updates the
service instance they are bound to. The service resumes within 10–15 minutes.

Required Actions: None. If the update deploys successfully, apps reconnect automatically.

Plan change
A developer changes their service instance to provide a different service plan, using cf update-service or
Apps Manager.

For example:

cf update-service SERVICE-INSTANCE -p NEW-PLAN

Impact: Apps lose access to the MySQL service while Tanzu Operations Manager updates the
service instance they are bound to. The service resumes within 10–15 minutes.

Required Actions: None. If the plan change deploys successfully, the apps reconnect
automatically.

Service broker deployments


Automated backups are not taken during service broker deployments.

When the service broker is unavailable, such as during upgrades and re-deployments, automated backups
fail. Automated backups resume according to schedule when the service broker is online again.

For general information about backups, see Backing up and restoring VMware Tanzu for MySQL on Cloud
Foundry and Configuring automated backups.

Backing up and Restoring


These topics cover the various types of backups you can perform for Tanzu for MySQL.

About Backups

Configuring Full Backups

Configuring Incremental Backups

Manually Restoring a Full Backup

About Backups for Tanzu for MySQL


This topic covers the different types of backups operators can configure and when to use each one for
Tanzu for MySQL.

Full backups: see Configuring Full Backups

Incremental backups: see Configuring Incremental Backups

You might want to back up or restore a service instance in the following use cases:


Disaster recovery

Troubleshooting

Testing

The backup and restore capability described in this topic restores a running service instance’s backup to a
new instance. It is not intended to list or restore backups created by a deleted service instance. For more
information about restoring a backup from a deleted service instance, see Manually Restoring From
Backup.

The backup procedures assume that you are using the adbr plug-in v0.3.0 or later. See Prerequisite: adbr
plug-in for instructions.

About full backups


Full backups use the Percona Xtrabackup tool, which does not lock schemas that use the default InnoDB
transactional storage engine. Full backups do briefly lock non-transactional operations, specifically DDL and
MyISAM (non-InnoDB) storage engine operations. Developers should be aware of these backup DDL and
MyISAM locks, and consider any impact on their running applications. For more information about locking
behavior during full backups, see the Percona documentation.

Full backups fail if the service broker is unavailable (for example, during an upgrade).

About full backup files


When Tanzu for MySQL runs a full backup, it uploads one file: the encrypted data backup file, named artifact.

Backup artifacts are organized under subfolders in external storage. They are stored under p.mysql >
service-instance_GUID > yyyy > mm > dd.

Old backup artifacts (from pre-2.9.3 instances) stored in the root directory can still be
accessed through the cf CLI using the adbr plug-in.

About incremental backups


Incremental backups supplement full backups. They automatically back up a service instance’s new
transactions as they occur. The operator configures how frequently new transactions get backed up (the
default is every 15 minutes). These incremental backups occur continuously, in tandem with any full
backups of the service instance.

When you restore a full backup from a source instance, you can optionally restore up to that source’s
“latest” transaction. The resulting “incremental restore” includes the full backup contents, plus all of the
source’s more recent transactions (that is, transactions after the full backup, and up to the most recent
incremental backup).

Incremental restores always contain a full restore in their execution: a single adbr command restores a full backup, and then applies incremental transactions to the restored database. You trigger an incremental restore by issuing a restore to the "latest" restore point, using the --restore-point option in the adbr CLI:


cf adbr restore ${TARGET_INSTANCE_ID} ${FULL_BACKUP_ID} --restore-point=latest

Incremental backups do not replace full backups. Incremental backups always restore on top of, and relative to, a full backup. Continue making full backups to take advantage of incremental backups and restores.

For an incremental restore to succeed, the source instance's incremental backups must be continuously activated from the time the source instance's full backup was taken. Incremental backups must also be enabled on the destination instance being restored.
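Putting these pieces together, an incremental restore session might look like the following sketch. It assumes the adbr plug-in v0.3.0 or later; the instance names and BACKUP-ID are placeholders:

```shell
# Find the ID of a full backup taken from the source instance
cf adbr list-backups source-mysql

# Restore that full backup onto the target instance, then replay the
# source's binlog transactions up to the most recent incremental backup
cf adbr restore target-mysql BACKUP-ID --restore-point=latest
```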

About incremental backup files


Incremental backups work by copying mysql binary log files from a running service instance to the same
external storage that is configured for full backups. This copying does not lock any database operations
(unlike the Percona Xtrabackup tool used for full backups.)

Incremental backups store their files under folder p.mysql > service-instance_GUID > binlogs.

When they are enabled, incremental backups run continuously on the service instance, copying binlog files to the configured external storage. These files accumulate in external storage over time, so operators might want to set up retention policies to manage their external storage.

When setting up retention policies, be aware that incremental restores require the external
storage to contain all copied binary log files from the time the full backup was created.

Prerequisite: adbr plug-in


Before you can manually back up or restore a service instance, you must have installed the
ApplicationDataBackupRestore (adbr) plug-in for the Cloud Foundry Command Line Interface (cf CLI) tool.

For the procedures on this page, you need the adbr plug-in v0.3.0 or later.

To install the adbr plug-in, run:

cf install-plugin -r CF-Community "ApplicationDataBackupRestore"

Configuring full backups


You can configure physical full backups for VMware Tanzu for MySQL on Cloud Foundry.

For App Developer backup procedures, see Backing up and Restoring.

Configuring automatic full backups


You can configure Tanzu for MySQL to automatically back up databases to external storage. Tanzu for
MySQL backs up the entire data directory for each service instance.


Tanzu for MySQL takes full backups of your database on a schedule. You configure this schedule with a
cron expression.

Configuring a cron expression overrides the default schedule for your service instance.
Developers can override the default for their service instance. For more information, see
Backup Schedule.
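For example, this standard five-field cron expression (minute, hour, day of month, month, day of week) takes a full backup every day at 02:00:

```
0 2 * * *
```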

To configure full backups, follow the procedure for your external storage solution:

Back up using SCP

Back up to Amazon S3 or Ceph

Back up to Amazon S3 with Instance Profile

Back up to GCS

Back up to Azure Storage

You can use Healthwatch to confirm automated full backups. See:

Use the adbr plug-in to list backups

Use Healthwatch to confirm automated full backups

Monitoring and KPIs for VMware Tanzu for MySQL

Back up using SCP


Secure copy protocol (SCP) enables operators to use any storage solution on the destination VM. This is
the fastest method for backing up your database.

When you configure backups with SCP, Tanzu for MySQL runs an SCP command that uses SFTP to
securely copy backups to a VM or a physical machine operating outside of your deployment. You provision
the backup machine separately from your installation.

To back up your database using SCP:

Create a Public and Private Key Pair

Configure Backups in Tanzu Operations Manager

Create a public and private key pair


Tanzu for MySQL accesses a remote host as a user with a private key for authentication. VMware recommends that this user and key pair be used only for Tanzu for MySQL.

1. Determine the remote host that you use to store backups for Tanzu for MySQL. Ensure that the
MySQL service instances can access the remote host.

VMware recommends using a VM outside your deployment for the destination of SCP backups. To do this, you might need to enable public IPs for the MySQL VMs.

2. (Recommended) Create a new user for Tanzu for MySQL on the destination VM.


3. (Recommended) Create a new public and private key-pair for authenticating as the above user on
the destination VM.
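As a sketch, you can generate such a key pair with ssh-keygen. The file name and comment below are assumptions, and the authorized_keys steps run on the destination VM as the dedicated backup user:

```shell
# Generate a dedicated ed25519 key pair with no passphrase.
# "mysql-backups-key" is a hypothetical file name.
ssh-keygen -t ed25519 -N "" -f mysql-backups-key -C "tanzu-mysql-backups"

# On the destination VM, authorize the public key for the backup user:
#   mkdir -p ~/.ssh
#   cat mysql-backups-key.pub >> ~/.ssh/authorized_keys
#   chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
```

Paste the contents of the private key file into the tile's Backups pane in the next section.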

Configure backups in Tanzu Operations Manager


Use Tanzu Operations Manager to configure Tanzu for MySQL backups to use SCP.

1. In Tanzu Operations Manager, open the Tanzu for MySQL tile Backups pane.

2. Select SCP.

3. Configure the fields as follows:

Field Instructions

Username Enter the user you created in Create a public and private key pair.

Private Key Enter the private key you created in Create a public and private key pair.
Store the public key that is used for SSH and SCP access on the destination VM.

Hostname Enter the IP address or DNS entry that is used to access the destination VM.

Destination Directory Enter the directory to which Tanzu for MySQL uploads backups.


SCP Port Enter the SCP port number for SSH. This is usually port 22.

Cron Schedule Enter a cron expression using standard syntax. The cron expression sets the
schedule for taking backups for each service instance. This overrides the
default schedule for your service instance.
Test your cron expression using a website such as Crontab Guru.
Developers can override the default for their service instance; see Backup
Schedule.

Fingerprint (Optional) Enter the fingerprint for the destination VM public key. The fingerprint
allows detection of any changes to the destination VM.
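If you have a copy of the destination VM's public host key, you can compute the value for the Fingerprint field with ssh-keygen. The path below is a common OpenSSH default and is an assumption about your destination VM:

```shell
# Print the bit size, SHA256 fingerprint, comment, and key type
ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub
```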

Back up to Amazon S3 or Ceph


When you configure backups for Amazon S3 or Ceph, Tanzu for MySQL runs an Amazon S3 client that
saves the backups to one of the following:

an Amazon S3 bucket

a Ceph storage cluster

another S3-compatible endpoint certified by VMware

For information about:

Amazon S3 buckets, see the Amazon documentation.

Ceph storage clusters, see the Ceph documentation.

To back up your database to Amazon S3 or Ceph:

Create a custom policy and access key

Configure backups in Tanzu Operations Manager

Create a custom policy and access key


Tanzu for MySQL accesses your S3 bucket through a user account. VMware recommends that this
account be used only by Tanzu for MySQL. You must apply a minimal policy that enables the user account
to upload backups to your S3 bucket. The policy must grant permission to list objects in, and upload objects to, the bucket.

The procedure in this section assumes that you are using an Amazon S3 bucket. If you are using a Ceph or
another S3-compatible bucket to create the policy and access key, follow the documentation for your
storage solution. For more information about Ceph S3 bucket policies, see the Ceph documentation.

To create a policy and access key in Amazon Web Services (AWS):

1. Create a policy for your Tanzu for MySQL user account.

In AWS, create a new custom policy by following this procedure in the AWS documentation.

Paste in the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "MySQLBackupPolicy",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:ListBucketMultipartUploads",
                "s3:ListMultipartUploadParts",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::MY_BUCKET_NAME/*",
                "arn:aws:s3:::MY_BUCKET_NAME"
            ]
        }
    ]
}

2. Record the Access Key ID and Secret Access Key user credentials for a new user account by
following this procedure in the AWS documentation. Ensure that you select Programmatic access
and Attach existing policies to user directly. You must attach the policy you created in the
previous step.
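If you script this with the AWS CLI instead of the console, the equivalent steps look roughly like the following. The user and policy names, and the account ID, are placeholders:

```shell
# Create a dedicated backup user, attach the custom policy, and
# generate programmatic credentials (Access Key ID / Secret Access Key)
aws iam create-user --user-name tanzu-mysql-backups
aws iam attach-user-policy \
    --user-name tanzu-mysql-backups \
    --policy-arn arn:aws:iam::ACCOUNT-ID:policy/MySQLBackupPolicy
aws iam create-access-key --user-name tanzu-mysql-backups
```

Record the AccessKeyId and SecretAccessKey values from the last command's output.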

Configure backups in Tanzu Operations Manager


Use Tanzu Operations Manager to connect Tanzu for MySQL to your S3 account.

Prerequisite: Before beginning this procedure, you must have an S3 bucket in which to store the backups.

1. In Tanzu Operations Manager, open the Tanzu for MySQL tile Backups pane.

2. Select Ceph or Amazon S3.


3. Configure the fields as follows:

Field Instructions

Access Key ID and Secret Access Key Enter the S3 Access Key ID and Secret Access Key that you created in Create a custom policy and access key.

Endpoint URL Enter the S3-compatible endpoint URL for uploading backups.
The URL must start with http:// or https://.
The default is https://s3.amazonaws.com.
If you are using a public S3 endpoint, see the S3 Endpoint procedure in Step 3:
Director Config Page.

Region Enter the region where your bucket is located.

Bucket Name Enter the name of your bucket.


Do not include an s3:// prefix, a trailing /, or underscores. VMware
recommends using the naming convention DEPLOYMENT-backups. For example,
sandbox-backups.

Force path style access to bucket The default behavior in Tanzu for MySQL 2.9 and later uses a
virtual hosted-style URL. Select this checkbox if you use:
Amazon S3, and your bucket name is not compatible with virtual hosted-style URLs.
An S3-compatible endpoint, such as Minio, that might require path-style URLs.
If you are using a blobstore that uses a specific set of domains in its server
certificate, add a new wildcard domain, or use path-style URLs if supported by
the blobstore. For general information about the deprecation of S3 path-style
URLs, see the AWS blog posts Amazon S3 Path Deprecation Plan – The Rest of the
Story and the subsequent Update to Amazon S3 Path Deprecation Plan.

Bucket Path (Optional) Enter the path in the bucket to which to store backups.
You can use this to keep the backups from this foundation separate from those
of other foundations that might also backup to this bucket. For example,
Foundation-1.

Cron Schedule Enter a cron expression using standard syntax. The cron expression sets the
schedule for taking backups for each service instance. This overrides the
default schedule for your service instance.
Test your cron expression using a website such as Crontab Guru.
Developers can override the default for their service instance; see Backup
Schedule.

Back up to Amazon S3 with instance profile


When you configure backups for Amazon S3 with Instance Profile, Tanzu for MySQL allows the Identity and
Access Management (IAM) user or role used by BOSH to pass the new backups IAM role to a new EC2
instance.

You can use the procedure in this section to allow Tanzu for MySQL to upload backups to Amazon S3
without static credentials (an Access Key ID and Secret Access Key).

Configuring this backup method requires operators to run the upgrade-all-service-instances errand during Apply Changes. Backups fail until the service instance is upgraded.

Prerequisite: You must be running Tanzu Platform for Cloud Foundry on AWS.

The steps for configuring backups for an Amazon S3 with Instance Profile are:

1. Create an IAM Role with a custom policy

2. Add a policy to the existing Tanzu Operations Manager user or role

3. Configure a VM Extension

1. Create a VM Extension in Tanzu Operations Manager

2. Apply the VM Extension to the dedicated-mysql-broker job


3. Set the VM Extension Name

4. Apply changes and upgrade all service instances

5. (Optional) Verify that the IAM Role is associated with MySQL service instances

Create an IAM role with a custom policy


First, you must create a policy for your Tanzu for MySQL user account.

For more information about AWS identity and access management, see the AWS documentation.

For more information about users, groups, and roles in AWS, see the AWS documentation.

To create an IAM Role with a custom policy:

1. In AWS, create an IAM role with a new custom policy by following this procedure in the AWS
documentation.
Paste in the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket",
                "s3:ListBucketVersions",
                "s3:ListObjects"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET-NAME",
                "arn:aws:s3:::BUCKET-NAME/*"
            ]
        }
    ]
}

Where BUCKET-NAME is the name of the bucket.

2. Record the Amazon Resource Name (ARN) of this new IAM role. This is used in Add a Policy to the
Existing Tanzu Operations Manager User or Role.

3. Record the name of the Instance Profile associated with this new IAM role. This is used in Create a
VM Extension in Tanzu Operations Manager.
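If you prefer to script this step, a rough AWS CLI equivalent is sketched below. The role, profile, and file names are placeholders, policy.json is assumed to contain the S3 policy shown above, and the trust policy lets EC2 assume the role:

```shell
# Trust policy that lets EC2 instances assume the role
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role and attach the S3 backup policy (policy.json)
aws iam create-role --role-name mysql-backups-role \
    --assume-role-policy-document file://trust.json
aws iam put-role-policy --role-name mysql-backups-role \
    --policy-name mysql-backups-s3 --policy-document file://policy.json

# Create an instance profile and add the role to it
aws iam create-instance-profile --instance-profile-name mysql-backups-profile
aws iam add-role-to-instance-profile \
    --instance-profile-name mysql-backups-profile --role-name mysql-backups-role
```

Record the role ARN and instance profile name that these commands output; the procedures that follow use both.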

Add a policy to the existing Tanzu Operations Manager user or role


You must add a new policy to the existing Tanzu Operations Manager IAM user or role that is configured in
the AWS Config pane of the BOSH Director for AWS tile. This policy allows the IAM user or role used by
BOSH to pass the new backups IAM role to a new EC2 instance.

Depending on your configuration, this is either a user or a role. To find the existing user or role and add a
policy:


1. Log into Tanzu Operations Manager. To log in, see Log in to Tanzu Operations Manager for the First
Time.

2. Click the BOSH Director for AWS tile.

3. Select AWS Config.

The following screenshots show the instructions, depending on the type of AWS Config that is
already configured:

Use AWS Keys: You must find the existing IAM user associated with the static credentials that are
used here. The name of the IAM user is not listed here in the BOSH Director for AWS tile UI.

To find and retrieve your AWS Key information and find the existing IAM user, use the AWS Identity
and Access Management (IAM) credentials that you generated in Step 3: Create an IAM User for
Tanzu Operations Manager in Preparing to Deploy Tanzu Operations Manager on AWS Manually.


Use AWS Instance Profile: Find the name of the existing IAM role in the AWS IAM Instance
Profile field.

For more information about this role, see Create an IAM role or user for Tanzu Operations Manager
in Preparing to Deploy Tanzu Operations Manager on AWS Manually.


4. On the AWS Management Console, add a new policy to that IAM User or Role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowToCreateInstanceWithMySQLBackupsInstanceProfile",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": [
                "arn:aws:iam::ACCOUNT-ID:role/MYSQL-BACKUPS-ROLE"
            ]
        }
    ]
}

Where ACCOUNT-ID is your AWS account ID and MYSQL-BACKUPS-ROLE is the name of the IAM role created in the previous section. Together they form the role ARN that you recorded in Create an IAM role with a custom policy.

Configure a VM Extension
Follow these three steps to configure a VM extension:

Step 1: Create a VM Extension in Tanzu Operations Manager

Step 2: Apply the VM Extension to the dedicated-mysql-broker job

Step 3: Set the VM Extension name

Step 1: Create a VM Extension in Tanzu Operations Manager

There are two methods that you can use to create a VM Extension in Tanzu Operations Manager:

Using the Tanzu Operations Manager API directly. For more information, see Create or Update a
VM Extension in the Tanzu Operations Manager documentation.

Using the Tanzu Operations Manager CLI (om) to create the VM extension. For information
about create-vm-extension, see om create-vm-extension in GitHub.

JSON example to specify the instance profile name:

{
"name": "VM-EXTENSION-NAME",
"cloud_properties": {
"iam_instance_profile": "INSTANCE-PROFILE-NAME"
}
}

Where:

VM-EXTENSION-NAME is the unique VM extension name that Tanzu Operations Manager manages

INSTANCE-PROFILE-NAME is the name of the instance profile created in Create an IAM Role with a
Custom Policy.
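As a sketch, the same extension can be created with the om CLI. The flag names follow the om create-vm-extension documentation; the environment file, extension name, and profile name below are placeholders:

```shell
# Create a VM extension that attaches the backups instance profile
om --env env.yml create-vm-extension \
    --name mysql-backups-vm-extension \
    --cloud-properties '{"iam_instance_profile": "mysql-backups-profile"}'
```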

Step 2: Apply the VM Extension to the dedicated-mysql-broker job

You can use one of the following methods to apply the VM extension to the dedicated-mysql-broker job
in the Tanzu for MySQL tile:


Use the Tanzu Operations Manager API directly. See Apply VM Extensions to a Job in the Tanzu
Operations Manager documentation.

Use the om CLI to configure the tile: add the additional_vm_extensions key in the resource-config section of the product configuration.

For information about configuring using a YAML configuration file, see om configure-product in GitHub.

Step 3: Set the VM Extension name

Now that you have created and applied the VM extension, you must set it in the Tanzu for MySQL tile.

To set the VM extension name:

1. Log into Tanzu Operations Manager. To log in, see Log in to Tanzu Operations Manager for the First
Time.

2. Click the Tanzu for MySQL tile.

3. Select Backups.

4. Select Amazon S3 (with Instance Profiles).


5. Configure the fields as follows:

Field Instructions

Instance Profile VM Extension Name Enter the VM-EXTENSION-NAME that you created in Create a VM Extension in Tanzu Operations Manager.

Endpoint URL Enter the S3-compatible endpoint URL for uploading backups.
The URL must start with http:// or https://.
The default is https://s3.amazonaws.com.
If you are using a public S3 endpoint, see the S3 Endpoint procedure in Step 3:
Director Config Page in Configuring BOSH Director on AWS.

Region Enter the region where your bucket is located.

Bucket Name Enter the name of your bucket.


Do not include an s3:// prefix, a trailing /, or underscores. VMware
recommends using the naming convention DEPLOYMENT-backups. For example,
sandbox-backups.

Force path style access to bucket The default behavior in Tanzu for MySQL 2.9 and later uses a
virtual hosted-style URL. Select this checkbox if you use:
Amazon S3, and your bucket name is not compatible with virtual hosted-style URLs.
An S3-compatible endpoint, such as Minio, that might require path-style URLs.
If you are using a blobstore that uses a specific set of domains in its server
certificate, add a new wildcard domain, or use path-style URLs if supported by
the blobstore. For general information about the deprecation of S3 path-style
URLs, see the AWS blog posts Amazon S3 Path Deprecation Plan – The Rest of the
Story and the subsequent Update to Amazon S3 Path Deprecation Plan.

Bucket Path (Optional) Enter the path in the bucket to store backups.
You can use this to keep the backups from this foundation separate from those
of other foundations that might also backup to this bucket. For example,
Foundation-1.

Cron Schedule Enter a cron expression using standard syntax. The cron expression sets the
schedule for taking backups for each service instance. This overrides the
default schedule for your service instance. Test your cron expression using a
utility such as Crontab Guru.
Developers can override the default for their service instance; see Backup
Schedule.

6. Click Save.

Apply Changes and upgrade all service instances


The changes to your service instances are not complete until you apply your configuration changes.

This allows the service instances to begin using the instance profile instead of static credentials for backup
and restore. Static credentials are not provided to existing service instances and backups fail until you
upgrade the service instances.

To apply changes and upgrade all service instances for Tanzu for MySQL:

1. Return to the Tanzu Ops Manager Installation Dashboard.

2. Click Review Pending Changes.

3. Deselect the checkboxes for all products except BOSH Director and Tanzu for MySQL.

4. Verify that the checkbox for the Upgrade all On-demand MySQL Service Instances errand is
selected.

5. Click Apply Changes.

(Optional) Verify that the IAM role is associated with MySQL service
instances
To verify that the IAM role is associated with the MySQL service instances:

1. On the AWS Management Console, find any EC2 instance that begins with mysql/GUID.

2. Verify that the IAM Role is present in the details for the instance.
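Assuming the AWS CLI, the same verification can be scripted. The Name-tag wildcard below mirrors the mysql/GUID instance naming convention described above:

```shell
# List the IAM instance profile ARNs attached to MySQL service-instance VMs
aws ec2 describe-instances \
    --filters "Name=tag:Name,Values=mysql/*" \
    --query "Reservations[].Instances[].IamInstanceProfile.Arn"
```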


Back up to GCS
When you configure backups for a Google Cloud Storage (GCS) bucket, Tanzu for MySQL runs a GCS SDK
that saves backups to a GCS bucket.

For information about GCS buckets, see the GCS documentation.

To back up your database to Google Cloud Storage (GCS), see:

Create a service account and private key

Configure backups in Tanzu Operations Manager

Create a service account and private key


Tanzu for MySQL accesses your GCS bucket through a service account. VMware recommends that this
account be used only by Tanzu for MySQL. You must apply a minimal policy that enables the service
account to upload backups to your GCS bucket.

The service account needs the following permissions:

List and upload to buckets

You can also enable the following optional permissions:

(Optional) Create buckets if they do not already exist

To create a service account and private key in GCS:

1. Create a new service account by following this procedure in the GCS documentation.
When you create the service account:

1. Enter a unique name for the service account.

2. Add the Storage Admin role.

3. Create and download a private key JSON file.

Configure backups in Tanzu Operations Manager


Use Tanzu Operations Manager to connect Tanzu for MySQL to your GCS account.

1. In Tanzu Operations Manager, open the Backups pane.

2. Select GCS.

3. Configure the fields as follows:

Field Instructions

Project ID Enter the Project ID for the Google Cloud project that you are using.

Bucket name Enter the bucket name that Tanzu for MySQL uploads backups to.

Bucket Path (Optional) Enter the path in the bucket to store backups.
You can use this to keep the backups from this foundation separate
from those of other foundations that might also back up to this
bucket. For example, Foundation-1.

Service Account JSON Enter the contents of the service account JSON file that you
downloaded when creating a service account in Create a Service
Account and Private Key.

Cron Schedule Enter a cron expression using standard syntax. The cron
expression sets the schedule for taking backups for each service
instance. This overrides the default schedule for your service
instance. Test your cron expression using a utility such as Crontab
Guru. Developers can override the default for their service instance.
For more information, see Backup Schedule.

Back up to Azure storage


When you configure backups for Azure Storage, Tanzu for MySQL runs an Azure SDK that saves backups
to an Azure storage account.

For information about Azure Storage, see the Azure documentation.

To back up your database to Azure Storage:

Create a storage account and access key

Configure backups in Tanzu Operations Manager

Create a storage account and access key


Tanzu for MySQL accesses your Azure Storage account through a storage access key. VMware
recommends that this account be used only by Tanzu for MySQL. You must apply a minimal policy that
enables the storage account to upload backups to your Azure Storage.

The storage account needs the following permissions:

List and upload to buckets

(Optional) Create buckets if they do not already exist

To create a storage account and access key:

1. Create a new storage account. Follow this procedure in the Azure documentation.

2. View your access key. Follow this procedure in the Azure documentation.

Configure backups in Tanzu Operations Manager


To back up your database to your Azure Storage account:

1. In Tanzu Operations Manager, open the Backups pane.

2. Select Azure.

3. Configure the fields as follows:

Field Instructions

Account Enter the Azure Storage account name that you created in Create a Storage
Account and Access Key.

Azure Storage Access Key Enter one of the storage access keys that you viewed in Create a Storage
Account and Access Key.

Container Name Enter the container name that Tanzu for MySQL uploads backups to.

Blob Store Base URL To use on-premises blob storage, enter the hostname of the blob storage. By
default, backups are sent to the public Azure blob storage. The Blob Store Base
URL must follow the format: my-storage-account.my-custom.domain/MY-CONTAINER-NAME.

Bucket Path (Optional) Enter the path in the bucket to store backups.
You can use this to keep the backups from this foundation separate from those
of other foundations that might also back up to this bucket. For example,
Foundation-1.

Cron Schedule Enter a cron expression using standard syntax. The cron expression sets the
schedule for taking backups for each service instance. This overrides the
default schedule for your service instance. Test your cron expression using a
utility such as Crontab Guru. Developers can override the default for their
service instance. For more information, see Backup Schedule.
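The Blob Store Base URL value above is a hostname followed by a container name. A small sketch of how that format splits (the values are the placeholders from the table, not real endpoints):

```shell
# Placeholder values from the format description above, not a real endpoint.
blob_store_base_url='my-storage-account.my-custom.domain/MY-CONTAINER-NAME'

blob_host="${blob_store_base_url%%/*}"      # portion before the first '/'
container_name="${blob_store_base_url#*/}"  # portion after the first '/'

echo "hostname:  ${blob_host}"
echo "container: ${container_name}"
```

The container portion must match the Container Name field configured above.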

Use Healthwatch to confirm full backups


You can use Healthwatch to confirm that full backups are being taken on the schedule you configured. For
information about configuring the full backup schedule, see Configuring automatic full backups.

For each full backup of every service instance, Tanzu for MySQL emits a metric called
last_successful_backup.

1. Monitor the last_successful_backup metric through Healthwatch.

For details about the metric, see Hours since last successful backup in Monitoring and KPIs for
VMware Tanzu for MySQL on Cloud Foundry.

A healthy backup metric shows a saw-tooth plot. The blue stepped line is the time elapsed since
the last backup. The dotted red line represents the scheduled time when an automated full backup
is expected to occur. When a full backup is taken, the time elapsed resets to zero.
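The metric itself is elapsed time since the last full backup. As a sketch of how the plotted value is derived (both epoch timestamps below are made-up illustration values):

```shell
# Both timestamps are made-up values for illustration only.
last_successful_backup_epoch=1598049440   # when the last full backup completed
current_epoch=1598085440                  # "now": 36000 seconds later

# The plotted value climbs until the next full backup resets it to zero.
elapsed_hours=$(( (current_epoch - last_successful_backup_epoch) / 3600 ))
echo "hours since last successful backup: ${elapsed_hours}"
```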

Configuring incremental backups


This topic describes how to set up and run incremental backups for Tanzu for MySQL. The incremental
backup features were introduced as part of Release 10.0.

To allow application developers to run incremental backup and restore:

1. If you have not already done this, configure full backups as described in Configuring full backups.
Full backups are required to use incremental backups. The Backup Configuration storage is used
for both incremental and full backups.

2. Go to the Ops Manager Installation Dashboard and click Settings.

3. On the left panel, click Backups.

4. Under Configure blob store for MySQL backups, set Incremental Backups to Activated.

5. Configure the backup interval. Enter the number of seconds that the incremental backup
process waits between backing up new transactions to your configured storage. The default is
900 seconds (15 minutes).

Note that Backup Configuration > Cron Schedule applies only to automated full
backups, and has no effect on incremental backups.
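As a quick arithmetic check on the default interval:

```shell
# Default incremental backup interval from the tile configuration.
backup_interval_seconds=900
echo "interval: $(( backup_interval_seconds / 60 )) minutes"   # prints "interval: 15 minutes"
```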

Manually restoring from backup


The manual procedures in this restore topic for VMware Tanzu for MySQL on Cloud Foundry are primarily
intended for disaster recovery or for migrating data to a different foundation, and are intended to be run by
the system operator. Restoring a Tanzu for MySQL service instance replaces all of its data and running
state.

If you are restoring a service instance to the same foundation and you still have the original service
instance, follow the instructions in Backing up and restoring VMware Tanzu for MySQL on Cloud Foundry.

This topic tells you how to manually restore a VMware Tanzu for MySQL on Cloud Foundry service instance
from a full backup in the following cases:

You have lost or deleted the service instance that backup came from.

You are restoring to a different foundation.

VMware recommends that you always configure a single instance plan to streamline the
restore process for leader-follower and HA cluster plans.

Identify and download the backup artifact

This section describes how to retrieve a backup artifact created from Tanzu for MySQL
v3.3 or later. To retrieve a backup artifact from Tanzu for MySQL v3.2 or earlier, consult
this same section in that earlier version's documentation. After retrieving an earlier artifact,
continue following the instructions on this page to Restore and Restage your new service
instance from that earlier artifact.

The procedure you use to find and download the backup artifact depends on whether the service instance
was deleted.
Follow the procedure for your situation:

If restoring a backup from a deleted service instance

If restoring to a different foundation

If restoring a backup from a deleted service instance


If you are restoring a backup from a lost or deleted instance, you cannot follow the instructions in Backing
up and restoring Tanzu for MySQL because you do not have:

the GUID for the service instance

the timestamp for the backup

The instructions in this section describe how to find these items and then download the backup artifact.

1. Find the GUID of the service instance by searching the broker logs for entries about deleted
deployments, such as the following:

[on-demand-service-broker] 2020/08/21 23:48:04.821405 Request DELETE /v2/service_instances/6c1db434-29ef-47c4-9f22-59fe53676b07 Completed 202 in 545.989368ms | Start Time: 2020/08/21 23:48:04.275382

[on-demand-service-broker] [8db9f496-e83f-4aa6-82f0-528ddbff4c0a] 2020/08/21 23:53:09.978376 BOSH task ID 348 status: done delete deployment for instance 6c1db434-29ef-47c4-9f22-59fe53676b07: Description: delete deployment service-instance_6c1db434-29ef-47c4-9f22-59fe53676b07 Result: /deployments/service-instance_6c1db434-29ef-47c4-9f22-59fe53676b07

In the example log entries, the GUID of the service instance begins with 6c1db.

For information about broker logs, see Access broker and instance logs and VMs.

2. Log in to your backup storage system. Your backup storage system is whatever you configured in
Configuring automated backups. For example, it might be an S3 bucket or a file system on a
backup host (SCP).

3. Identify and download the backup artifact. It is a single file named artifact in a directory named
with the service instance GUID and an epoch timestamp. For example, ../6c1db434-29ef-47c4-
9f22-59fe53676b07_1598049440/artifact.

4. Record the backup ID, which is the name of the directory. In the example, the backup ID is
6c1db434-29ef-47c4-9f22-59fe53676b07_1598049440.
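Because the directory name is the service instance GUID, an underscore, and an epoch timestamp, you can take a backup ID apart with shell parameter expansion. A sketch using the example ID above (the `date -d @EPOCH` form assumes GNU date):

```shell
# Example backup ID from above: SERVICE-INSTANCE-GUID_EPOCH-TIMESTAMP.
backup_id='6c1db434-29ef-47c4-9f22-59fe53676b07_1598049440'

service_instance_guid="${backup_id%_*}"   # drop the trailing _TIMESTAMP
backup_epoch="${backup_id##*_}"           # drop everything up to the last '_'

echo "service instance GUID: ${service_instance_guid}"
date -u -d "@${backup_epoch}" '+backup taken: %a %b %d %H:%M:%S UTC %Y'   # GNU date
```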

If restoring to a different foundation


If you have a backup of a service instance taken from one foundation, and you want to restore the backup
to a different foundation, you cannot follow the instructions in Backing up and restoring VMware Tanzu for
MySQL on Cloud Foundry.

Instead, follow these instructions to identify and download the backup artifact.

1. Find the GUID of the service instance by running:

cf service SERVICE-INSTANCE-NAME --guid

Where SERVICE-INSTANCE-NAME is the service instance from which the backup was taken.

For example:

$ cf service my-instance --guid
6c1db434-29ef-47c4-9f22-59fe53676b07

2. List the backups available and determine the one you want to download:

cf adbr list-backups SERVICE-INSTANCE-NAME

Where SERVICE-INSTANCE-NAME is the service instance from which the backup was taken.

For example:

$ cf adbr list-backups my-instance


Getting backups of service instance my-instance in org…
Backup ID                                        Time of Backup
6c1db434-29ef-47c4-9f22-59fe53676b07_1598049440  Fri Aug 21 22:37:20 UTC 2020

3. Download the backup artifact. Your backup storage system is whatever you configured in
Configuring automated backups. For example, it might be an S3 bucket or a file system on a
backup host (SCP).

Retrieve backup encryption key


Each backup artifact has its own encryption key, and these are stored in runtime CredHub.

To retrieve the backup encryption key:

1. To find the GUID for the MySQL service broker VM, run:

bosh deployments

2. To SSH onto the broker VM, run:

bosh -d pivotal-mysql-GUID ssh

Where GUID is the GUID you recorded in step 1.

For example:

$ bosh -d pivotal-mysql-f7f3ce3767943537c36a ssh

3. Fetch the credentials to authenticate with the CredHub:

1. Go to the Tanzu Ops Manager Installation Dashboard.

2. In the Tanzu Platform for Cloud Foundry tile, click the Credentials tab.

3. Record the credentials for CredHub Admin Client Client Credentials.

4. Set the path for the CredHub CLI and authenticate with runtime CredHub:

export PATH=/var/vcap/packages/credhub-cli/bin:$PATH
credhub api https://credhub.service.cf.internal:8844 --ca-cert /var/vcap/jobs/adbr-api/config/credhub_ca.pem
credhub login --client-name CREDHUB-CLIENT --client-secret CREDHUB-CLIENT-SECRET

Where:

CREDHUB-CLIENT is the identity of the credential

CREDHUB-CLIENT-SECRET is the password of the credential

For example:

$ export PATH=/var/vcap/packages/credhub-cli/bin:$PATH
$ credhub api https://credhub.service.cf.internal:8844 --ca-cert /var/vcap/jobs/adbr-api/config/credhub_ca.pem
$ credhub login --client-name credhub_admin_client --client-secret o2i30fj2fjvjoi3j

5. To obtain the encryption key for the backup, run:

credhub get -n /tanzu-mysql/backups/BACKUP-ID

Where BACKUP-ID is the name of the backup artifact you identified.

For example:

$ credhub get -n /tanzu-mysql/backups/6c1db434-29ef-47c4-9f22-59fe53676b07_1598049440
id: b918cda9-0c8b-4011-bbba-f78bdb5ceea4
name: /tanzu-mysql/backups/6c1db434-29ef-47c4-9f22-59fe53676b07_1598049440
type: password
value: NWjbvbB3pjOC
version_created_at: "2020-08-21T22:37:20Z"

6. Record the value of the password. This is the backup encryption key that you need when you
restore the backup.

In this example, it is NWjbvbB3pjOC.
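If you are scripting this step, the encryption key is the value: field of the credhub output. A hedged sketch that extracts it from a saved copy of the example output above:

```shell
# Example credhub output from above, captured as a string for illustration.
credhub_output='id: b918cda9-0c8b-4011-bbba-f78bdb5ceea4
name: /tanzu-mysql/backups/6c1db434-29ef-47c4-9f22-59fe53676b07_1598049440
type: password
value: NWjbvbB3pjOC
version_created_at: "2020-08-21T22:37:20Z"'

# The backup encryption key is the second column of the "value:" line.
encryption_key=$(printf '%s\n' "$credhub_output" | awk '$1 == "value:" { print $2 }')
echo "encryption key: ${encryption_key}"
```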

Create and prepare a new service instance for restore


You can only restore single node and leader-follower backup artifacts to a single node service instance.
Ensure that the persistent disk in the single node plan is at least as large as the persistent disk of your
largest leader-follower.

For information about persistent disk sizing recommendations, see Persistent disk usage.

To prepare a new service instance for restore:

1. Create a new MySQL service instance by running:

cf create-service p.mysql NEW-PLAN NEW-INSTANCE-NAME

Where:

NEW-PLAN is the name of the service plan for your new service instance. The plan you
choose depends on the service instance topology that you are restoring. If the topology
that you are restoring is:

Single node or leader-follower: Select a single node plan.

Multi‑Site Replication or HA cluster: Select a Multi‑Site Replication plan.

NEW-INSTANCE-NAME is the name of the new service instance you want to create.

For more information, see Create a service instance.

2. Monitor the status of the service instance creation by running:

watch cf service NEW-INSTANCE-NAME

Where NEW-INSTANCE-NAME is the name of the new service instance.

3. Locate and record the GUID associated with your new service instance by running:

cf service NEW-INSTANCE-NAME --guid

4. From the Tanzu Operations Manager VM, find and record the BOSH instance GUID for your service
instance by running:

bosh -e BOSH-ENVIRONMENT -d service-instance_GUID instances

Where GUID is the service instance GUID you previously recorded.

For example:

$ bosh -e my-env -d service-instance_12345678-90ab-cdef-1234-567890abcdef instances

Deployment 'service-instance_12345678-90ab-cdef-1234-567890abcdef'

Instance                                    Process State  AZ             IPs
mysql/d7ff8d46-c3e8-449f-aea7-5a05b0a1929c  running        us-central1-a  10.0.8.10

1 instances

The BOSH instance GUID is the value after mysql/.
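If you are scripting the copy step, you can strip the mysql/ prefix from the Instance column to get the BOSH instance GUID (the instance value is taken from the example output above):

```shell
# Instance name as shown in the example `bosh instances` output above.
instance='mysql/d7ff8d46-c3e8-449f-aea7-5a05b0a1929c'

bosh_instance_guid="${instance#mysql/}"   # remove the leading 'mysql/'
echo "BOSH instance GUID: ${bosh_instance_guid}"
```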

5. Copy the downloaded backup to the new service instance by running:

bosh -e BOSH-ENVIRONMENT -d service-instance_GUID \
  scp mysql-backup-artifact \
  mysql/BOSH-INSTANCE-GUID:DESTINATION-PATH

Where:

GUID is the service instance GUID.

mysql-backup-artifact is the backup artifact file you downloaded in Identify and
download the backup artifact.

BOSH-INSTANCE-GUID is the BOSH instance GUID you recorded in the previous step.

DESTINATION-PATH is where the backup file saves on the BOSH VM.

For example:

$ bosh -e my-env -d service-instance_12345678-90ab-cdef-1234-567890abcdef \
  scp mysql-backup-artifact \
  mysql/d7ff8d46-c3e8-449f-aea7-5a05b0a1929c:/tmp/

Restore the service instance

Restoring a service instance is destructive. VMware recommends that you only restore to
a new and unused service instance.

You can restore a single node, leader-follower, HA cluster service, or Multi‑Site Replication instance using
the restore utility. The restore utility is packaged with the VMware Tanzu for MySQL on Cloud Foundry tile.

The restore utility does the following:

Deletes any existing data

Decrypts the backup artifact

Restores the backup artifact into the MySQL data directory

To restore a service instance:

1. Use the BOSH CLI to SSH into the newly created MySQL service instance by following the
procedure in SSH into the BOSH Director VM.

2. After securely logging in to MySQL, become root by running:

sudo su

3. Restore the backup artifact into the data directory by running:

mysql-restore --encryption-key ENCRYPTION-KEY --restore-file RESTORE-FILE-PATH

Where:

ENCRYPTION-KEY is the backup encryption key you recorded in Retrieve backup encryption
key.

RESTORE-FILE-PATH is the full path on the BOSH VM where the backup artifact exists.

For example:

$ mysql-restore --encryption-key NWjbvbB3pjOC --restore-file /tmp/mysql-backup-artifact

Restage the service instance


After you restore your single node, leader-follower, HA cluster service instance, or Multi‑Site Replication,
you must restage your new service instance. For Multi‑Site Replication plans, you must also re-establish
replication between the leader and follower service instances.

To restage your service instance:

1. If you are restoring a leader-follower service instance, update the plan by running:

cf update-service NEW-INSTANCE-NAME -p LEADER-FOLLOWER-PLAN

2. If you are restoring a HA cluster service instance, update the plan by running:

cf update-service NEW-INSTANCE-NAME -p HA-CLUSTER-PLAN

3. If you are restoring a Multi‑Site Replication service instance, you must re-establish replication:

1. Create a follower Multi‑Site Replication service instance. Follow the procedure in Create
Multi‑Site Replication Service Instances.
Ensure that you only create a follower service instance in the secondary foundation.

2. Configure replication between the leader service instance you restored and the follower
instance. Follow the procedure in Configure Multi‑Site Replication.

4. Find out if the app is currently bound to a MySQL service instance by running:

cf services

5. If the previous step shows that the app is currently bound to a MySQL instance, unbind it by
running:

cf unbind-service APP-NAME OLD-INSTANCE-NAME

6. Update your app to bind to the new service instance:

cf bind-service APP-NAME NEW-INSTANCE-NAME

7. Restage your app to make the changes take effect:

cf restage APP-NAME

Your app should now be running and able to access the restored data.

If a developer rebinds an app to the TCF-MySQL service after unbinding, they must also
rebind any existing custom schemas to the app. When you rebind an app, stored code,
programs, and triggers break. For more information about binding custom schemas, see
Use custom schemas.

Accessing a database as an admin user


In VMware Tanzu for MySQL on Cloud Foundry, you can access a database as an admin user with
CredHub credentials or BOSH SSH.

When you access a database as an admin user, you can perform actions that cannot be done as a normal
binding user.

You can do the following as an admin user:

Add users

Create new schemas

View system schemas

You can choose to access your database service instance as an admin in one of the following ways:

Using BOSH SSH: If your BOSH agent is healthy, you can BOSH SSH into the MySQL VM. This
option can be faster. See Connect to MySQL with BOSH SSH.

Using CredHub Credentials: If your BOSH agent is unhealthy, you can use this option. See
Connect to MySQL with CredHub Credentials.

Connect to MySQL with BOSH SSH


To connect to MySQL using BOSH SSH:

1. BOSH SSH into your node by following the procedure in BOSH SSH in the Tanzu Operations
Manager documentation.

2. Connect to your MySQL VM by running:

mysql --defaults-file=/var/vcap/jobs/pxc-mysql/config/mylogin.cnf

Connect to MySQL with CredHub credentials

To retrieve the admin credentials for a service instance from BOSH CredHub:

1. Use the cf CLI to find the GUID associated with the service instance for which you want to retrieve
credentials by running:

cf service SERVICE-INSTANCE-NAME --guid

For example:
$ cf service my-service-instance --guid
12345678-90ab-cdef-1234-567890abcdef

If you do not know the name of the service instance, you can list service instances in the space
with cf services.

2. Follow the steps in Gather Credential and IP Address information and Log in to the Tanzu
Operations Manager VM with SSH of Advanced Troubleshooting with the BOSH CLI to SSH into
the Tanzu Operations Manager VM.

3. From the Tanzu Operations Manager VM, log in to your BOSH Director with the BOSH CLI. See
Authenticate with the BOSH Director VM in Advanced Troubleshooting with the BOSH CLI.

4. Find the values for BOSH_CLIENT and BOSH_CLIENT_SECRET:

1. In the Tanzu Ops Manager Installation Dashboard, click the BOSH Director tile.

2. Click the Credentials tab.

3. In the BOSH Director section, click the link to the BOSH Commandline Credentials.

4. Record the values for BOSH_CLIENT and BOSH_CLIENT_SECRET.

5. Set the API target of the CredHub CLI to your BOSH CredHub server by running:

credhub api https://BOSH-DIRECTOR-IP:8844 \
  --ca-cert=/var/tempest/workspaces/default/root_ca_certificate

Where BOSH-DIRECTOR-IP is the IP address of the BOSH Director VM.

For example:
$ credhub api https://10.0.0.5:8844 \
--ca-cert=/var/tempest/workspaces/default/root_ca_certificate

6. Log in to CredHub by running:

credhub login \
--client-name=BOSH-CLIENT \
--client-secret=BOSH-CLIENT-SECRET

For example:

$ credhub login \
--client-name=credhub \
--client-secret=abcdefghijklm123456789

7. Use the CredHub CLI to retrieve the credentials by doing one of the following:

Retrieve the password for the admin user by running:

credhub get -n /p-bosh/service-instance_GUID/admin_password

In the output, the password appears under value. Record the password.
For example:
$ credhub get -n /p-bosh/service-instance_70d30bb6-7f30-441a-a87c-05a5e4afff26/admin_password
id: d6e5bd10-3b60-4a1a-9e01-c76da688b847
name: /p-bosh/service-instance_70d30bb6-7f30-441a-a87c-05a5e4afff26/admin_password
type: password
value: UMF2DXsqNPPlCNWMdVMcNv7RC3Wi10
version_created_at: 2018-04-02T23:16:09Z

Retrieve the password for the read-only admin user by running:

credhub get -n /p-bosh/service-instance_GUID/read_only_admin_password

In the output, the password appears under value. Record the password.

8. Record the IP of your service instance. See Connect Using an IP Address.

9. Connect to your database by doing one of the following:


Connect using a management tool. See Using management tools for TCF-MySQL.

Connect directly from your workstation using the MySQL client by running:

mysql -h IP-ADDRESS -u admin -P 3306 -p

When prompted for a password, enter the password you recorded.

Rotating certificates
You can check expiration dates and rotate certificates used by VMware Tanzu for MySQL on Cloud
Foundry.

Rotate services TLS certificate authority


To rotate the Services TLS CA and its leaf certificates, use one of the following procedures:

Tanzu Operations Manager v3.0: See Rotate the Services TLS CA and its leaf certificates.

Tanzu Operations Manager v2.10: See Rotate the Services TLS CA and its leaf certificates.

Tanzu Operations Manager v2.9 and later are compatible with CredHub Maestro. TCF-MySQL v2.8 and later
are compatible with CredHub Maestro.

Certificates used by Tanzu for MySQL


If you are using Tanzu Operations Manager v2.9 or later, you can rotate all MySQL certificates in the
following table using CredHub Maestro. For Tanzu Operations Manager v2.9 and earlier, you can rotate the
Services TLS CA using a manual procedure.

For more information about procedures to use to rotate certificates, see Rotate services TLS certificate
authority.

The following table lists the certificates used by Tanzu for MySQL:

Certificate                                                     Rotated by Tanzu for MySQL?

/services/tls_ca No

/opsmgr/pivotal-mysql-GUID/adbr_api_cert No

/p-bosh/pivotal-mysql-GUID/agent_ca_2_9_x No

/p-bosh/pivotal-mysql-GUID/agent_client_ssl_2_9_x No

/p-bosh/pivotal-mysql-GUID/agent_server_ssl_2_9_x No

/p-bosh/pivotal-mysql-GUID/services_tls_accessor_cert No

/p-bosh/service-instance_GUID/adbr_agent_cert No

/p-bosh/service-instance_GUID/agent_ca No

/p-bosh/service-instance_GUID/agent_client_tls No

/p-bosh/service-instance_GUID/agent_server_tls No

/p-bosh/service-instance_GUID/mysql_server_tls No

/p-bosh/service-instance_GUID/pxc_internal_ca No

/p-bosh/service-instance_GUID/pxc_tls_ca No

/p-bosh/service-instance_GUID/pxc_tls_server No

/p-bosh/service-instance_GUID/restore_ca No

/p-bosh/service-instance_GUID/restore_client_tls No

/p-bosh/service-instance_GUID/restore_server_tls No

/p-bosh/service-instance_GUID/streaming_backup_ca Yes

/p-bosh/service-instance_GUID/streaming_backup_server_cert Yes

In the preceding table, GUID is the GUID for the service instance. To find the GUID for your service
instance, follow the procedure in Find information about your service instance.

If you are using a PXC-type database, Tanzu for MySQL rotates the Galera certificate by renaming it.

Resolving service interruptions

Learn about events in the lifecycle of a VMware Tanzu for MySQL on Cloud Foundry service instance that
might cause temporary service interruptions and what you can do about them.

Stemcell or service update


An operator can update a stemcell version or their version of Tanzu for MySQL.

Impact: Apps lose access to the MySQL service while Tanzu Operations Manager updates the
service instance the apps are bound to. The service resumes within 10-15 minutes.

Required Actions: None. If the update deploys successfully, the apps automatically reconnect.

Plan change
A developer can change their service instance to provide a different service plan using cf update-service
or Apps Manager.

Impact: Apps lose access to the MySQL service while Tanzu Operations Manager updates the
service instance the apps are bound to. The service resumes within 10-15 minutes.

Required Actions: None. If the plan change deploys successfully, apps reconnect automatically.

VM Process failure
A process, such as the MySQL server, fails on the service instance VM.

Impact:
BOSH (monit) brings the process back automatically.

Depending on the process and what it was doing, the service can experience 60-120
seconds of downtime.

Until the process resumes, apps might be unable to use MySQL, metrics or logging can
stop, and other features might be interrupted.

Required Actions: None. If the process resumes cleanly and without manual intervention, apps
reconnect automatically.

VM Failure
A Tanzu for MySQL VM fails and goes offline due to a virtualization problem or a host hardware problem.

Impact:
If the BOSH Resurrector is enabled (recommended), BOSH can detect the failure, recreate
the VM, and reattach the same persistent disk and IP address.

Downtime largely depends on how quickly the Resurrector notices, usually 1-2 minutes,
and how long it takes the IaaS to create a replacement VM.

If the Resurrector is not enabled, some IaaSes (vSphere, for example) have similar
resurrection or HA features.

Apps cannot connect to MySQL until the VM is recreated and the MySQL server process
has resumed.

Based on prior experience with BOSH Resurrector, typical downtime is 8-10 minutes.

Required Actions:
If the VM is part of a leader-follower pair, when the VM comes back, it is read-only. You
must run the configure-leader-follower errand to ensure that the leader VM is writable.
For more information, see configure-leader-follower in Running Errands.

If the VM is not part of a leader-follower pair, when the VM comes back, no further action is
required for the app developer to continue operations.

AZ Failure
An availability zone (AZ) goes offline entirely or loses connectivity to other AZs (net split). This causes
service interruption in multi-AZ deployments where Diego has placed multiple instances of an app that uses
MySQL in different AZs.

Impact:
Some app instances may still be able to connect and continue operating.

App instances in the other AZs are not able to connect.

Downtime: Unknown

Required Actions: Recovery of the app's database connection must be automatic. Depending on
the app, manual intervention might be required to check data consistency.

Region failure
An entire region fails, bringing components offline.

Impact:
The entire installation must be brought back up manually.

Downtime: Unknown

Required Actions: Each service instance might need to be restored individually depending on the
restored state of the platform.

Running service broker errands


This topic describes each of the service broker errands that apply to VMware Tanzu for MySQL on Cloud
Foundry, and how you can run an errand using the BOSH CLI.

Errands can manage service brokers and run mass operations on service instances created by brokers. To
run an errand, see Run an Errand.

Run an errand
To run an errand:

1. View the BOSH deployment name for your MySQL service broker by running:

bosh deployments

2. Run:

bosh -d pivotal-mysql-GUID run-errand ERRAND-NAME

Where:

pivotal-mysql-GUID is the BOSH deployment name for your MySQL service broker.

ERRAND-NAME is the name of the errand you want to run.

For example:

$ bosh -d pivotal-mysql-e3ddd36247fe5b923caf run-errand find-deprecated-bindings

Available errands
The following sections describe each service broker errand that you can run. To run an errand, see Run an
Errand.

find-deprecated-bindings

smoke-tests

configure-leader-follower

make-leader

make-read-only

upgrade-all-service-instances

register-broker

delete-all-service-instances-and-deregister-broker

recreate-all-service-instances

orphan-deployments

inspect

bootstrap

find-deprecated-bindings
The find-deprecated-bindings errand does the following:

Lists app bindings and service keys that are deprecated in Tanzu for MySQL v2.10. The bindings
are deprecated because they do not require TLS.

Exits successfully whether or not a deprecated binding is found.

VMware recommends that operators configure bindings to require TLS. For more information, see Configure
TLS.

The find-deprecated-bindings errand has the following example output:

Stdout
+---------------------------+--------------------------------------+------------------------+--------------------------+--------------------+-------------------+-----------------------------+
| SERVICE                   | SERVICE GUID                         | ORG                    | SPACE                    | APP OR SERVICE KEY | TYPE              | REASON                      |
+---------------------------+--------------------------------------+------------------------+--------------------------+--------------------+-------------------+-----------------------------+
| tlsDB                     | a999db0b-176e-4ac8-8342-d72b338d1f0c | MYSQL-ORG-upgrade-test | MYSQL-SPACE-upgrade-test | user-cli           | ServiceKeyBinding | no tls                      |
| tlsDB                     | a999db0b-176e-4ac8-8342-d72b338d1f0c | MYSQL-ORG-upgrade-test | MYSQL-SPACE-upgrade-test | user-cli           | ServiceKeyBinding | no dns: hostname="10.0.8.6" |
| upgrade-outdated-instance | 34f26746-fb46-4f14-87bc-e1ddce26f340 | MYSQL-ORG-upgrade-test | MYSQL-SPACE-upgrade-test | cs-accept          | AppBinding        | no dns: hostname="10.0.8.5" |
| tlsDB                     | a999db0b-176e-4ac8-8342-d72b338d1f0c | MYSQL-ORG-upgrade-test | MYSQL-SPACE-upgrade-test | cs-accept-tls      | AppBinding        | no dns: hostname="10.0.8.6" |
+---------------------------+--------------------------------------+------------------------+--------------------------+--------------------+-------------------+-----------------------------+

smoke-tests
The smoke-tests errand does the following:

Validates that the service broker has been installed and configured correctly.

Creates and deletes a new space and service instance that conducts tests.

If this errand runs successfully, Tanzu for MySQL has installed successfully.

configure-leader-follower
The configure-leader-follower errand does the following:

Configures replication on the follower and ensures the leader is writable

Runs after every create or update of a leader-follower instance

Fails and alerts operators with BOSH errand output if the service instance is in an unhealthy state

This errand is used to trigger a leader-follower failover. You can use this errand to create custom failover
scripts. For more information, see Triggering a Leader-Follower Failover.

make-leader
The make-leader errand does the following:

Promotes a follower VM to a leader

Removes replication configuration from the VM, waits for all transactions to be applied to the VM,
and sets the VM as writable

Fails if the original leader is still writeable. This protects against data divergence.


This errand is used to trigger a leader-follower failover. You can use this errand to create custom failover
scripts. For more information, see Triggering a Leader-Follower Failover.

make-read-only
The make-read-only errand does the following:

Demotes a leader VM to a follower

Sets the VM to read-only and guarantees that apps can no longer write to this VM

Relays all in-flight transactions on the former leader VM to the follower VM if the follower VM is
accessible

This errand is used to trigger a leader-follower failover. You can use this errand to create custom failover
scripts. For more information, see Triggering a Leader-Follower Failover.

upgrade-all-service-instances
The upgrade-all-service-instances errand:

Collects all the service instances that the on-demand broker has registered

Issues an upgrade command and deploys a new manifest to the on-demand broker for each
service instance

Adds to a retry list any instances that have ongoing BOSH tasks at the time of upgrade

Retries any instances in the retry list until all instances are upgraded

When you make changes to the plan configuration, the errand upgrades all the Tanzu for MySQL service
instances to the latest version of the plan.

If any instance fails to upgrade, the errand fails immediately. This prevents systemic problems from
spreading to the rest of your service instances.

register-broker
The register-broker errand:

Registers the service broker with Cloud Controller

Activates service access for any plans that are enabled on the tile

Deactivates service access for any plans that are deactivated on the tile

Does nothing for any plans that are set to manual on the tile

Run this errand whenever the broker is re-deployed with new catalog metadata to update the Marketplace.

Plans with deactivated service access are only visible to admin Cloud Foundry users. Non-admin Cloud
Foundry users, including Org Managers and Space Managers, cannot see these plans.

delete-all-service-instances-and-deregister-broker

This errand destroys all on-demand service instances and deregisters the broker from the
Cloud Controller. Use it with extreme caution.


The delete-all-service-instances-and-deregister-broker errand does the following:

Deactivates service access to the service offering for all orgs and spaces. The errand deactivates
service access to ensure that new instances cannot be provisioned during the lifetime of the
errand.

Unbinds all apps from the service instances

Runs any pre-delete errands for each instance

Deletes the BOSH deployment of each service instance

Checks for deletion failure of each instance, which results in the errand failing immediately

Determines whether any instances have been created while the errand was running. If new
instances are detected, the errand returns an error. In this case, VMware recommends running the
errand again.

Deregisters the broker from the Cloud Controller

Tanzu Operations Manager runs this errand only when deleting Tanzu for MySQL. Running this errand
removes all service instances and their data.

recreate-all-service-instances
The recreate-all-service-instances errand recreates all service instance VMs managed by an on-
demand broker.

You might want to use this errand in the following cases:

Rotating the Tanzu Operations Manager root certificate authority (CA). For more information about
rotating CAs, see Rotating CAs and Leaf Certificates.

Fully restoring the platform during disaster recovery or migration.

orphan-deployments
A service instance is defined as “orphaned” when the BOSH deployment for the instance is still running, but
the service is no longer registered in Cloud Foundry.

The orphan-deployments errand collates a list of service deployments that have no matching service
instances in Cloud Foundry and returns the list to the operator. It is then up to the operator to remove the
orphaned BOSH deployments.

To run the errand, run the following command:

bosh -d DEPLOYMENT-NAME run-errand orphan-deployments

If orphan deployments exist: The errand script does the following:

Exits with exit code 10

Outputs a list of deployment names under a [stdout] header

Provides a detailed error message under a [stderr] header

For example:


[stdout]
[{"deployment_name":"service-instance_80e3c5a7-80be-49f0-8512-44840f3c4d1b"}]

[stderr]
Orphan BOSH deployments detected with no corresponding service instance
in Cloud Foundry. Before deleting any deployment it is recommended to v
erify the service instance no longer exists in Cloud Foundry and any da
ta is safe to delete.

Errand 'orphan-deployments' completed with error (exit code 10)

These details are also available through the BOSH /tasks/ API endpoint for use in scripting:

$ curl 'https://bosh-user:bosh-password@bosh-url:25555/tasks/task-id/output?type=result' | jq .
{
  "exit_code": 10,
  "stdout": "[{\"deployment_name\":\"service-instance_80e3c5a7-80be-49f0-8512-44840f3c4d1b\"}]\n",
  "stderr": "Orphan BOSH deployments detected with no corresponding service instance in Cloud Foundry. Before deleting any deployment it is recommended to verify the service instance no longer exists in Cloud Foundry and any data is safe to delete.\n",
  "logs": {
    "blobstore_id": "d830c4bf-8086-4bc2-8c1d-54d3a3c6d88d"
  }
}
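For scripting, the JSON returned by the tasks endpoint can be post-processed to build the corresponding cleanup commands. The sketch below is illustrative only: the `orphan_cleanup_commands` helper is not part of the product, and the sample payload simply mirrors the example response shown above.

```python
import json

def orphan_cleanup_commands(task_result: str) -> list:
    """Given the JSON body from the BOSH /tasks/TASK-ID/output?type=result
    endpoint, return one 'bosh delete-deployment' command per orphan."""
    result = json.loads(task_result)
    if result.get("exit_code") != 10:
        return []  # exit code 10 signals that orphan deployments were found
    orphans = json.loads(result["stdout"])
    return ["bosh -d {} delete-deployment".format(o["deployment_name"])
            for o in orphans]

# Sample payload mirroring the example response above
sample = json.dumps({
    "exit_code": 10,
    "stdout": "[{\"deployment_name\":\"service-instance_80e3c5a7-80be-49f0-8512-44840f3c4d1b\"}]\n",
    "stderr": "Orphan BOSH deployments detected ...",
})
for cmd in orphan_cleanup_commands(sample):
    print(cmd)
```

Before running any generated command, verify in Cloud Foundry that the service instance no longer exists and that its data is safe to delete.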

If no orphan deployments exist: The errand script does the following:

Exits with exit code 0

Stdout is an empty list of deployments

Stderr is None

[stdout]
[]

[stderr]
None

Errand 'orphan-deployments' completed successfully (exit code 0)

If the errand encounters an error during running: The errand script does the following:

Exits with exit code 1

Stdout is empty

Any error messages appear under stderr

To clean up orphaned instances, run the following command on each instance:


bosh delete-deployment service-instance_SERVICE-INSTANCE-GUID

Running this command might leave IaaS resources in an unusable state.

inspect
When performing a failover or debugging a leader-follower deployment, the inspect errand gives you quick
feedback before and after running other errands or making configuration changes.

Running the inspect errand results in the following output:

Configuration: leader
IP Address: 10.244.16.7
Has Data: true
Read Only: false
GTID Executed: some-gtid
Replication Configured: false

Possible responses include:

| Output field           | Possible responses             | Notes                                                                            |
|------------------------|--------------------------------|----------------------------------------------------------------------------------|
| Configuration          | leader, follower, or unknown   | unknown is the initial assessment before configure-leader-follower has been run. |
| IP Address             | IP address                     |                                                                                  |
| Has Data               | true or false                  |                                                                                  |
| Read Only              | true or false                  |                                                                                  |
| GTID Executed          | GTID for your service instance |                                                                                  |
| Replication Configured | true or false                  |                                                                                  |
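When scripting around these errands, the Field: value lines of the inspect output can be parsed into a lookup table. The following is a minimal sketch assuming the output layout shown above; `parse_inspect` is a hypothetical helper, not part of the product.

```python
def parse_inspect(output: str) -> dict:
    """Parse the 'Field: value' lines of inspect errand output into a dict."""
    fields = {}
    for line in output.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

# Sample output matching the example above
sample = """Configuration: leader
IP Address: 10.244.16.7
Has Data: true
Read Only: false
GTID Executed: some-gtid
Replication Configured: false"""

info = parse_inspect(sample)
print(info["Configuration"], info["Read Only"])  # → leader false
```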

bootstrap
The bootstrap errand evaluates whether quorum has been lost on a cluster and, if so, bootstraps the
cluster. Before running the errand, ensure that there are no network partitions. After network partitions
have been resolved, the cluster is in a state where the errand can be run.

There are three possible responses:

Command succeeds: All jobs report as running.

Error: bootstrap is not required - The cluster is already healthy.

Error: could not reach node - One or more nodes are not reachable (that is, the VM exists but
is in an unknown state).

If you see an error message, see Run the bootstrap errand for detailed instructions.


Troubleshooting VMware Tanzu for MySQL


This topic provides you with basic instructions for troubleshooting on-demand VMware Tanzu for MySQL on
Cloud Foundry.

For information about temporary Tanzu for MySQL service interruptions, see Service interruptions.

Troubleshoot errors
This section provides information about how to troubleshoot specific errors or error messages.

Failed Installation

Symptom Tanzu for MySQL fails to install.

Cause Reasons for a failed installation include:


Certificate issues: The on-demand broker (ODB) requires valid certificates.

Deploy fails. This could be due to a variety of reasons.

Networking problems:

Cloud Foundry cannot reach the Tanzu for MySQL broker

Cloud Foundry cannot reach the service instances

The service network cannot access the BOSH director

The Register broker errand fails.

The smoke test errand fails.

Resource sizing issues: These occur when the resource sizes selected for a
given plan are lower than Tanzu for MySQL requires to function.

Other service-specific issues.

Solution To troubleshoot:
Certificate issues. Ensure that your certificates are valid and generate new
ones if necessary. To generate new certificates, contact Support.

Deploy fails. View the logs using Tanzu Operations Manager to determine
why the deploy is failing.

Networking problems. See Networking problems.

Register broker errand fails. See Register broker errand.

Resource sizing issues. Check your resource configuration in Tanzu Operations
Manager and ensure that the configuration matches that recommended by the service.

Cannot Create or Delete Service Instances


Symptom Developers report errors such as:

Instance provisioning failed: There was a problem completing your request.
Contact your operations team providing the following information:
service: redis-acceptance,
service-instance-guid: ae9e232c-0bd5-4684-af27-1b08b0c70089,
broker-request-id: 63da3a35-24aa-4183-aec6-db8294506bac,
task-id: 442, operation: create

Cause Reasons include:


Problems with the deployment manifest

Authentication errors

Network errors

Quota errors

Solution To troubleshoot:
1. If the BOSH error shows a problem with the deployment manifest, open the manifest
in a text editor to inspect it.

2. To continue troubleshooting, SSH Into the BOSH Director VM and target the Tanzu
for MySQL instance using the instructions in parsing a Cloud Foundry error
message.

3. Retrieve the BOSH task ID from the error message and run: bosh task TASK-ID

4. If you need more information, access the broker logs and use the broker-
request-id from the previous error message to search the logs for more
information. Search for:

Authentication errors

Network errors

Quota errors

Broker Request Timeouts

Symptom Developers report errors such as:

Server error, status code: 504, error code: 10001, message: The request to
the service broker timed out:
https://BROKER-URL/v2/service_instances/e34046d3-2379-40d0-a318-d54fc7a5b13f/service_bindings/aa635a3b-ef6d-41c3-a23f-55752f3f651b


Cause Cloud Foundry might not be connected to the service broker, or there might be a large number
of queued tasks.

Solution To troubleshoot:
1. Confirm that Cloud Foundry (CF) is connected to the service broker.

2. Check the BOSH queue size:


1. Log in to BOSH as an admin.

2. Run bosh tasks

If there are a large number of queued tasks, the system might be overloaded.
BOSH is configured with two workers and one status worker, which might not be
sufficient for the level of load.

3. If the task queue is long, advise the app developers to try again after the system
load has gone down.
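The queue check in step 2 can be scripted against `bosh tasks --json` output. The sketch below is an illustration only: it assumes the Tables/Rows JSON layout emitted by recent bosh CLI releases (the exact shape can vary by version), and the sample payload is hypothetical.

```python
import json

def queued_task_count(bosh_tasks_json: str) -> int:
    """Count tasks in the 'queued' state from `bosh tasks --json` output."""
    doc = json.loads(bosh_tasks_json)
    tables = doc.get("Tables") or [{}]
    rows = tables[0].get("Rows") or []
    return sum(1 for row in rows if row.get("state") == "queued")

# Hypothetical CLI output for illustration; real output includes more columns
sample = json.dumps({"Tables": [{"Rows": [
    {"id": "442", "state": "queued"},
    {"id": "443", "state": "processing"},
    {"id": "444", "state": "queued"},
]}]})

if queued_task_count(sample) > 1:
    print("task queue is long; advise developers to retry after load drops")
```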

Instance Does Not Exist

Symptom Developers report errors such as:

Server error, status code: 502, error code: 10001, message: Service broker error: instance does not exist

Cause The instance might have been deleted.

Solution To troubleshoot:
1. Confirm that the Tanzu for MySQL instance exists in BOSH and obtain the GUID by
running: cf service MY-INSTANCE --guid

2. Using the GUID that you obtained previously, run: bosh -d service-
instance_GUID vms

If the BOSH deployment is not found, it has been deleted from BOSH. Contact Support for
further assistance.

Cannot Bind to or Unbind from Service Instances


Symptom Developers report errors such as:

Server error, status code: 502, error code: 10001, message: Service broker
error: There was a problem completing your request. Please contact your
operations team providing the following information:
service: example-service,
service-instance-guid: 8d69de6c-88c6-4283-b8bc-1c46103714e2,
broker-request-id: 15f4f87e-200a-4b1a-b76c-1c4b6597c2e1, operation: bind

Cause This might be due to authentication or network errors.

Solution To find out the exact problem with the binding process:
1. Access the service broker logs.

2. Search the logs for the broker-request-id string listed in the error message.

3. Check for:

Authentication errors

Network errors

4. Contact Support for further assistance if you are unable to resolve the problem.
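The log search in step 2 can be automated by filtering broker log lines on the broker-request-id. A minimal sketch; the log excerpt below is hypothetical (real broker log formats vary) and only the request ID comes from the error message above.

```python
def lines_for_request(log_text: str, broker_request_id: str) -> list:
    """Return broker log lines that mention the given broker-request-id."""
    return [line for line in log_text.splitlines()
            if broker_request_id in line]

# Hypothetical log excerpt for illustration
log = (
    "2024-01-01T00:00:00Z broker-request-id=15f4f87e-200a-4b1a-b76c-1c4b6597c2e1 "
    "operation=bind error=network timeout\n"
    "2024-01-01T00:00:01Z broker-request-id=00000000-aaaa-bbbb-cccc-111111111111 "
    "operation=create result=ok\n"
)

matches = lines_for_request(log, "15f4f87e-200a-4b1a-b76c-1c4b6597c2e1")
for line in matches:
    print(line)
```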

Cannot Connect to a Service Instance

Symptom Developers report that their app cannot use service instances that they have successfully
created and bound.

Cause The error might originate from the service or it might be network-related.

Solution To resolve this issue, ask the user to send application logs that show the connection error. If
the error originates from the service, follow Tanzu for MySQL-specific instructions.
If the issue appears to be network-related:
1. Check that the application security groups are configured correctly. Access can be
configured for the service network the tile is deployed to.

2. Ensure that the network the Tanzu Platform for CF tile is deployed to has network
access to the service network. You can find the network definition for this service
network in the BOSH Director tile.

3. In Tanzu Operations Manager, go into the service tile and see the service network
that is configured in the Networks tab.

4. In Tanzu Operations Manager, go into the Tanzu Platform for CF tile and see the
network it is assigned to. Make sure that these networks can access each other.

Service instances can also become temporarily inaccessible during upgrades and VM or
network failures. See Service interruptions for more information.

Upgrade All Service Instances Errand Fails


Symptom The upgrade-all-service-instances errand fails.

Cause There might be a problem with a particular instance.

Solution To troubleshoot:
1. Look at the errand output in the Tanzu Operations Manager log.

2. If an instance has failed to upgrade, debug and fix it before running the errand again
to prevent any failure issues from spreading to other on-demand instances.

3. After the Tanzu Operations Manager log no longer lists the deployment as failing,
re-run the errand to upgrade the rest of the instances.

Missing Logs and Metrics

Symptom No logs are being emitted by the on-demand broker.

Cause Syslog might not be configured correctly, or you might have network access issues.

Solution To troubleshoot:
1. Ensure that you have configured syslog for the tile.

2. Check that your syslog forwarding address is correct in Tanzu Operations Manager.

3. Ensure that you have network connectivity between the networks the tile is using
and the syslog destination. If the destination is external, you need to use the public
ip VM extension feature available in your Tanzu Operations Manager tile
configuration settings.

4. Verify that Loggregator is emitting metrics:


1. Install the cf log-cache plug-in. For instructions, see the Log Cache cf
CLI Plugin GitHub repository.

2. Find logs from your service instance by running: cf tail -f SERVICE_INSTANCE

3. If no metrics appear within five minutes, verify that the broker network has
access to the Loggregator system on all required ports.

5. If you are unable to resolve the issue, contact Support.

MySQL Load is High with Large Number of CredHub Encryption Keys

Symptom MySQL load is high and CredHub queries are slow.

Cause There is a large number of CredHub encryption keys.

Solution For information about resolving this issue, see Cleaning up Tanzu Platform for Cloud Foundry
CredHub in the VMware Tanzu Support Hub.

Leader-Follower Service Instance Errors


This section provides solutions for the following errors:

Unable to determine leader and follower

Both leader and follower instances are writable

Both leader and follower instances are read-only

Unable to Determine Leader and Follower

Symptom This problem happens when the configure-leader-follower errand fails because it cannot
determine the VM roles.

The configure-leader-follower errand exits with 1 and the errand logs contain the
following:

Unable to determine leader and follower based on transaction history.

Cause Something has happened to the instances, such as a failure or manual intervention. As a
result, there is not enough information available to determine the correct state and topology
without operator intervention to resolve the issue.


Solution Use the inspect errand to determine which instance can be the leader. Then, using the
orchestration errands and backup/restore, you can put the service instance into a safe
topology, and then rerun the configure-leader-follower errand. This is shown in the
following example.

This example shows one outcome that the inspect errand can return:
1. Use the inspect errand to retrieve relevant information about the two VMs:

$ bosh -e my-env -d my-dep run-errand inspect
[...]
Instance   mysql/4ecad54b-0704-47eb-8eef-eb228cab9724
Exit Code  0
Stdout     -
Stderr     2017/12/11 18:25:54 Started executing command: inspect
           2017/12/11 18:25:54 Started GET https://127.0.0.1:8443/status
           2017/12/11 18:25:54
           Has Data: false
           Read Only: true
           GTID Executed: 1d774323-de9e-11e7-be01-42010a001014:1-25
           Replication Configured: false

Instance   mysql/e0b94ade-0114-4d49-a929-ce1616d8beda
Exit Code  0
Stdout     -
Stderr     2017/12/11 18:25:54 Started executing command: inspect
           2017/12/11 18:25:54 Started GET https://127.0.0.1:8443/status
           2017/12/11 18:25:54
           Has Data: true
           Read Only: true
           GTID Executed: 1d774323-de9e-11e7-be01-42010a001014:1-25
           Replication Configured: true

2 errand(s)

Succeeded

In the previous scenario, the first instance is missing data and does not have
replication configured. The second instance has data and has replication
configured. The following instructions resolve this by copying data to the first
instance and resuming replication.

2. Take a backup of the second instance using the steps in Create a Tanzu for MySQL
Logical Backup.


3. Restore the backup artifact to the first instance using the steps in Restore from a
Tanzu for MySQL Logical Backup.
At this point, the instances have equivalent data.

4. Run the configure-leader-follower errand to reconfigure replication:

   bosh -e ENVIRONMENT -d DEPLOYMENT \
     run-errand configure-leader-follower \
     --instance=mysql/GUID-OF-LEADER

   For example:

   $ bosh -e my-env -d my-dep \
     run-errand configure-leader-follower \
     --instance=mysql/4ecad54b-0704-47eb-8eef-eb228cab9724

Both Leader and Follower Instances are Writable

Symptom This happens when the configure-leader-follower errand fails because both VMs are
writable and the VMs might hold different data.

The configure-leader-follower errand exits with 1 and the errand logs contain the
following:

Both mysql instances are writable. Please ensure no divergent data and set one instance to read-only mode.

Cause Tanzu for MySQL tries to ensure that there is only one writable instance of the leader-follower
pair at any given time. However, in certain situations, such as network partitions, or manual
intervention outside of the provided BOSH errands, it is possible for both instances to be
writable.

The service instances remain in this state until an operator resolves the issue. The problem
must be resolved to ensure that the correct instance is promoted and to reduce the potential
for data divergence.


Solution 1. Use the inspect errand to retrieve the GTID Executed set for each VM:

$ bosh -e my-env -d my-dep run-errand inspect
[...]
Instance   mysql/4ecad54b-0704-47eb-8eef-eb228cab9724
Exit Code  0
Stdout     -
Stderr     2017/12/11 18:25:54 Started executing command: inspect
           2017/12/11 18:25:54 Started GET https://127.0.0.1:8443/status
           2017/12/11 18:25:54
           Has Data: true
           Read Only: false
           GTID Executed: 1d774323-de9e-11e7-be01-42010a001014:1-23
           Replication Configured: false

Instance   mysql/e0b94ade-0114-4d49-a929-ce1616d8beda
Exit Code  0
Stdout     -
Stderr     2017/12/11 18:25:54 Started executing command: inspect
           2017/12/11 18:25:54 Started GET https://127.0.0.1:8443/status
           2017/12/11 18:25:54
           Has Data: true
           Read Only: false
           GTID Executed: 1d774323-de9e-11e7-be01-42010a001014:1-25
           Replication Configured: false

2 errand(s)

Succeeded

If the GTID Executed sets for both instances are the same, continue to Step 2. If
they are different, continue to Step 4.

2. Look at the value of GTID Executed for both instances.


If the range after the GUID is equivalent, either instance can be made
read-only, as described in Step 3.

If one instance has a range that is a subset of the other, the instance with
the subset must be made read-only, as described in Step 3.


3. Based on the information you gathered in the previous step, run the
   make-read-only errand to make the appropriate instance read-only:

   bosh -e ENVIRONMENT -d DEPLOYMENT \
     run-errand make-read-only \
     --instance=mysql/MYSQL-SUBSET-INSTANCE

   For example:

   $ bosh -e my-env -d my-dep \
     run-errand make-read-only \
     --instance=mysql/e0b94ade-0114-4d49-a929-ce1616d8beda
   [...]
   succeeded

4. If the GTID Executed sets are neither equivalent nor subsets, data has diverged
   and you must determine what data has diverged as part of the following procedure:
   1. Use the make-read-only errand to set both instances to read-only to
      prevent further data divergence:

      bosh -e ENVIRONMENT -d DEPLOYMENT \
        run-errand make-read-only \
        --instance=mysql/MYSQL-INSTANCE

      For example:

      $ bosh -e my-env -d my-dep \
        run-errand make-read-only \
        --instance=mysql/e0b94ade-0114-4d49-a929-ce1616d8beda
      [...]
      succeeded

2. Take a backup of both instances using the steps in Create a Tanzu for
MySQL Logical Backup.

3. Manually inspect the data on each instance to determine the discrepancies,
   and put the data on the instance that is further ahead; this instance has
   the higher GTID Executed set and is the new leader.

4. Migrate all appropriate data to the new leader instance.

5. After putting all data on the leader, SSH onto the follower:

   bosh -e ENVIRONMENT -d DEPLOYMENT ssh mysql/GUID-OF-FOLLOWER

   For example:

   $ bosh -e my-env -d my-dep ssh mysql/e0b94ade-0114-4d49-a929-ce1616d8beda

6. Become root with the command: sudo su.

7. Stop the mysql process with the command: monit stop mysql.

8. Delete the data directory of the follower with the command: rm -rf
/var/vcap/store/mysql.

9. Start the mysql process with the command: monit start mysql.

10. Use the configure-leader-follower errand to copy the leader data to
    the follower and resume replication:

    bosh -e ENVIRONMENT -d DEPLOYMENT \
      run-errand configure-leader-follower \
      --instance=mysql/GUID-OF-LEADER

    For example:

    $ bosh -e my-env -d my-dep \
      run-errand configure-leader-follower \
      --instance=mysql/4ecad54b-0704-47eb-8eef-eb228cab9724
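Step 2 of this procedure compares the GTID Executed sets to decide which instance is a subset of the other. For the simple single-source sets shown in these examples (one UUID with a single interval), the comparison can be sketched as follows. This is an illustration only: real GTID sets can contain multiple source UUIDs and intervals, and MySQL provides the GTID_SUBSET() function for the general case.

```python
def parse_gtid(gtid: str):
    """Split a simple 'uuid:start-end' GTID set into (uuid, start, end)."""
    uuid, _, interval = gtid.partition(":")
    start, _, end = interval.partition("-")
    return uuid, int(start), int(end or start)

def is_subset(a: str, b: str) -> bool:
    """True if GTID set a is contained within GTID set b (same source UUID)."""
    ua, sa, ea = parse_gtid(a)
    ub, sb, eb = parse_gtid(b)
    return ua == ub and sb <= sa and ea <= eb

leader = "1d774323-de9e-11e7-be01-42010a001014:1-25"
subset = "1d774323-de9e-11e7-be01-42010a001014:1-23"
print(is_subset(subset, leader))  # True: the 1-23 instance should be made read-only
print(is_subset(leader, subset))  # False
```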

Both Leader and Follower Instances are Read-Only

Symptom Developers report that apps cannot write to the database. In a leader-follower topology, the
leader VM is writable and the follower VM is read-only. However, if both VMs are read-only,
apps cannot write to the database.

Cause This problem happens if the leader VM fails and the BOSH Resurrector is activated. When the
leader is resurrected, it is set as read-only.


Solution 1. Use the inspect errand to confirm that both VMs are in a read-only state:
bosh -e ENVIRONMENT -d DEPLOYMENT run-errand inspect

2. Examine the output and locate the information about the leader-follower Tanzu for
MySQL VMs:

Instance   mysql/4eexample54b-0704-47eb-8eef-eb2example724
Exit Code  0
Stdout     -
Stderr     2017/12/11 18:25:54 Started executing command: inspect
           2017/12/11 18:25:54 Started GET https://999.0.0.1:8443/status
           2017/12/11 18:25:54
           Has Data: true
           Read Only: true
           GTID Executed: 1d779999-de9e-11e7-be01-42010a009999:1-23
           Replication Configured: true

Instance   mysql/e0exampleade-0114-4d49-a929-cexample8beda
Exit Code  0
Stdout     -
Stderr     2017/12/11 18:25:54 Started executing command: inspect
           2017/12/11 18:25:54 Started GET https://999.0.0.1:8443/status
           2017/12/11 18:25:54
           Has Data: true
           Read Only: true
           GTID Executed: 1d779999-de9e-11e7-be01-42010a009999:1-25
           Replication Configured: false

2 errand(s)

Succeeded

3. If Read Only is set to true for both VMs, make the leader writable using the
   following command:

   bosh -e ENVIRONMENT -d DEPLOYMENT \
     run-errand configure-leader-follower \
     --instance=mysql/GUID-OF-LEADER

   For example, if the second instance is the leader:

   $ bosh -e my-env -d my-dep \
     run-errand configure-leader-follower \
     --instance=mysql/e0exampleade-0114-4d49-a929-cexample8beda

Inoperable app and database errors


This section provides solutions for the following errors:

Persistent Disk is Full

Cannot Access Database Table

Persistent Disk is Full

Symptom Developers report that read, write, and cf CLI operations do not work. Developers cannot
upgrade to a larger Tanzu for MySQL service plan to free up disk space.

If your persistent disk is full, apps become inoperable. In this state, read, write, and Cloud
Foundry CLI (cf CLI) operations do not work.

Cause This problem happens if your persistent disk is full. When you use the BOSH CLI to target
your deployment, you see that instances are at 100% persistent disk usage.

Available disk space can be increased by deleting log files. After deleting logs, you can then
upgrade to a larger Tanzu for MySQL service plan.

You can also turn off binary logging before developers do large data uploads or if their
databases have a high transaction volume.

Solution To resolve this issue, do one of the following:

If your persistent disk is already full, delete binary logs. See Tanzu for MySQL
hangs when server VM persistent disk is full. Deleting binary logs is a
destructive procedure and can result in MySQL data loss. Use this procedure
only with the assistance of Support.

If the majority of your persistent disk is consumed by binary logs but the disk
is not currently full, turn off binary logging. See Binary Logs Filling up the
Persistent Disk.

Cannot Access Database Table

Symptom When you query an existing table, you see an error similar to the following:

ERROR 1146 (42S02): Table 'mysql.foobar' doesn't exist


Cause This error occurs if you created an uppercase table name and then activated lowercase table
names.
You activate lowercase table names either by:
Setting the optional enable_lower_case_table_names parameter to true with the
cf CLI. For more information, see Lowercase table names.

Selecting Enable Lower Case Table Names in the MySQL Configuration pane of
the tile. For more information, see Configure MySQL.

Solution To resolve this issue:


1. Deactivate lowercase table names by doing one of the following:

   Set the optional enable_lower_case_table_names parameter to false
   with the cf CLI. For instructions, see Set Optional Parameters.

   Deactivate lowercase table names in the tile:

   1. Deselect Enable Lower Case Table Names in the MySQL
      Configuration pane of the tile.

   2. Go to the Ops Manager Installation Dashboard, click Review
      Pending Changes, and then click Apply Changes.

2. (Optional) If you want to activate lowercase table names again, rename your table to
lowercase and then activate lowercase table names.

Highly available cluster errors


This section provides solutions for the following errors:

Unresponsive Node in a Highly Available Cluster

Many Replication Errors in Logs for Highly Available Clusters

Unresponsive Node in a Highly Available Cluster

Symptom A client connected to a Tanzu for MySQL cluster node reports the following error:

WSREP has not yet prepared this node for application use

Some clients might instead return the following:

unknown error

Cause If the client is connected to a Tanzu for MySQL cluster node and that node loses connection
to the rest of the cluster, the node stops accepting writes. If the connection to this node is
made through the proxy, the proxy automatically re-routes further connections to a different
node.


Solution A node can become unresponsive for a number of reasons. For solutions, see the following:
Network Latency: If network latency causes a node to become unresponsive, the
node drops but eventually rejoins. The node automatically rejoins only if one node
has left the cluster. Consult your IaaS network settings to reduce your network
latency.

MySQL Process Failure: If the MySQL process fails, monit, and then BOSH, restores
the process. If the process is not restored, use bosh logs to retrieve logs from the
failing database or mysql jobs, and inspect the returned error logs. For more
information, see the Downloading logs section.

Firewall Rule Change: If your firewall rules change, it might prevent a node from
communicating with the rest of the cluster. This causes the node to become
unresponsive. In this case, the logs show the node leaving the cluster but do not
show network latency errors.

To confirm that the node is unresponsive because of a firewall rule change, SSH
from a responsive node to the unresponsive node. If you cannot connect, the node
is unresponsive due to a firewall rule change. Change your firewall rules to allow the
unresponsive node to rejoin the cluster.

VM Failure: If you cannot SSH into a node and you are not detecting either network
latency or firewall issues, your node might be down due to VM failure. To confirm
that the node is unresponsive and to re-create the VM, see Recreate a Corrupted
VM in a Highly Available Cluster.

Node Unable to Rejoin: If a detached existing node fails to join the cluster, its
sequence_number might be higher than those of the nodes with quorum. A higher
sequence_number on the detached node indicates that it has recent changes to the
data that the primary component lacks. You can verify this by looking at the node’s
error log at /var/vcap/sys/log/pxc-mysql/mysql.err.log.

To restore the cluster, complete one of the following:


If the detached node has a higher sequence number than the primary
component, do the procedures in Bootstrapping.

If bootstrapping does not restore the cluster, you can manually force the
node to rejoin the cluster. This removes all of the unsynchronized data
from the detached server node and creates a new copy of the cluster
data on the node. For more information, see Force a node to rejoin a
highly available cluster manually.
Forcing a node to rejoin the cluster is a destructive procedure. Only do
this procedure with the assistance of Support.
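The sequence-number comparison above can be sketched in shell. This is a hypothetical example: it parses a Galera "Recovered position" line of the kind written to /var/vcap/sys/log/pxc-mysql/mysql.err.log, and the UUID and seqno values are invented.

```shell
# Hypothetical sketch: extract the recovered sequence number from a
# Galera "Recovered position" log line so that it can be compared
# across nodes. The UUID and seqno shown are invented examples.
logline='WSREP: Recovered position: abcd1234-abcd-1234-abcd-1234abcd1234:1856'

# The seqno is the text after the final colon.
seqno="${logline##*:}"
echo "sequence_number: $seqno"
```

A node whose recovered sequence number is higher than that of the nodes holding quorum has writes that the primary component lacks.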


Many Replication Errors in Logs for Highly Available Clusters

Symptom You see many replication errors in the MySQL logs, like the following:

160318 9:25:16 [Warning] WSREP: RBR event 1 Query apply warning: 1, 16992456
160318 9:25:16 [Warning] WSREP: Ignoring error for TO isolated action: source: abcd1234-abcd-1234-abcd-1234abcd1234 version: 3 local: 0 state: APPLYING flags: 65 conn_id: 246804 trx_id: -1 seqnos (l: 865022, g: 16992456, s: 16992455, d: 16992455, ts: 2530660989030983)
160318 9:25:16 [ERROR] Slave SQL: Error 'Duplicate column name 'number'' on query. Default database: 'cf_0123456_1234_abcd_1234_abcd1234abcd'. Query: 'ALTER TABLE ...'

Cause This problem happens when there are errors in SQL statements.

Solution For solutions for the replication errors in MySQL log files, see the following table:
Additional Error Solution

ALTER TABLE errors Fix the ALTER TABLE error. This error can occur when an app issues an invalid data definition statement. Other nodes log this problem as a replication error because they fail to replicate the ALTER TABLE.

If you see replication errors, but no ALTER TABLE or persistent disk or memory issues, you
can ignore the replication errors.

Failed backups

Automated Backups or adbr Plug-in Backups Fail

Symptom If an automated backup or a backup initiated from the ApplicationDataBackupRestore (adbr) plug-in fails, you see the following symptoms:

The backup fails.

The adbr-api logs for the broker show:

backup failed with response: 502 Bad Gateway: Registered endpoint failed to handle the request.

The gorouter logs on the Tanzu Platform for CF deployment show:

adbr-api.SYSTEM-DOMAIN - [2021-01-20T19:30:00.911080271Z] "POST /service_instances/acb85c98-151e-4f13-9f0f-de057ef18d67/backup HTTP/1.1" 502 ... x_cf_routererror:"endpoint_failure" ...

Where SYSTEM-DOMAIN is your system domain.
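As a hypothetical sketch, you can confirm the 502/endpoint_failure signature in a saved gorouter access-log line with standard text tools. The log line below mirrors the example above.

```shell
# Hypothetical sketch: confirm the endpoint_failure signature in a saved
# gorouter access-log line. The line mirrors the example above.
line='adbr-api.SYSTEM-DOMAIN - [2021-01-20T19:30:00.911080271Z] "POST /service_instances/acb85c98-151e-4f13-9f0f-de057ef18d67/backup HTTP/1.1" 502 ... x_cf_routererror:"endpoint_failure" ...'

# Pull out the HTTP status code and the x_cf_routererror value.
status="$(printf '%s' "$line" | sed -n 's/.*HTTP\/1.1" \([0-9]*\).*/\1/p')"
routererror="$(printf '%s' "$line" | sed -n 's/.*x_cf_routererror:"\([^"]*\)".*/\1/p')"
echo "$status $routererror"
```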


Cause Port 2345, which allows communication between the Tanzu Platform for CF and ODB
components, is closed.

Solution Open port 2345 from the Tanzu Platform for CF component to the ODB component. See
Required networking rules for Tanzu for MySQL.
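A quick reachability check can confirm whether the port is open. This is a hypothetical sketch using the bash /dev/tcp redirection; BROKER-IP is a placeholder for the broker VM's IP address, and because the placeholder does not resolve here, the sketch reports the port as unreachable.

```shell
# Hypothetical sketch: from a Tanzu Platform for CF component VM, test
# whether TCP port 2345 on the ODB/broker VM is reachable. BROKER-IP is
# a placeholder; with the placeholder unresolved, the check reports the
# port as unreachable.
host="BROKER-IP"   # replace with the broker VM's IP address
if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/2345" 2>/dev/null; then
  reachable=yes
else
  reachable=no
fi
echo "port 2345 reachable: $reachable"
```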

Troubleshoot components
This section provides guidance on checking for and fixing issues in on-demand service components.

BOSH problems
Large BOSH queue: On-demand service brokers add tasks to the BOSH request queue, which can
back up and cause delays under heavy loads. An app developer who requests a new Tanzu for
MySQL instance sees create in progress in the Cloud Foundry Command Line Interface (cf CLI)
until BOSH processes the queued request.

Users of Tanzu Operations Manager can configure the number of BOSH workers. The default
number is 5.

Configuration
Service instances in failing state: The VM or Disk type that you configured in the plan page of the
tile in Tanzu Operations Manager might not be large enough for the Tanzu for MySQL service
instance to start. See tile-specific guidance on resource requirements.

Authentication
UAA changes: If you have rotated any UAA user credentials, you might see authentication issues
in the service broker logs.

To resolve this, redeploy the VMware Tanzu for MySQL tile in Tanzu Operations Manager. This
provides the broker with the latest configuration.

You must ensure that any changes to UAA credentials are reflected in the Tanzu
Operations Manager credentials tab of the VMware Tanzu Platform for Cloud
Foundry tile.

Networking
Common networking issues include:

Networking Issue Solution

Latency when connecting to the Tanzu for MySQL service Try again or improve network performance.
instance to create or delete a binding.


Firewall rules are blocking connections from the Tanzu for Open the Tanzu for MySQL tile in Tanzu Operations Manager
MySQL service broker to the service instance. and check the two networks configured in the Networks
pane. Ensure that these networks allow access to each other.

Firewall rules are blocking connections from the service Ensure that service instances can access the Director so
network to the BOSH director network. that the BOSH agents can report in.

Apps cannot access the service network. Configure Cloud Foundry application security groups (ASG)
to allow runtime access to the service network.

Problems accessing the BOSH UAA or the BOSH director. Follow network troubleshooting and check that the BOSH
director is online.

To validate service broker connectivity to service instances:

1. View the BOSH deployment name for your service broker by running:

bosh deployments

2. SSH into the Tanzu for MySQL service broker by running:

bosh -d DEPLOYMENT-NAME ssh

3. If no BOSH task-id appears in the error message, look in the broker log using the broker-request-id from the task.

To validate app access to a service instance:

Use cf ssh to access the app container, and then try connecting to the Tanzu for MySQL service instance
using the binding included in the VCAP_SERVICES environment variable.
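Inside the app container, the connection details can be read out of VCAP_SERVICES. This is a hypothetical sketch: the JSON is a trimmed, invented example of a p.mysql binding (real bindings carry more fields), and a JSON-aware tool such as jq is preferable when available.

```shell
# Hypothetical sketch: pull the MySQL hostname and port out of
# VCAP_SERVICES inside the app container. The JSON is a trimmed,
# invented example of a binding; real bindings carry more fields.
VCAP_SERVICES='{"p.mysql":[{"credentials":{"hostname":"10.0.8.10","port":3306}}]}'

host="$(printf '%s' "$VCAP_SERVICES" | sed -n 's/.*"hostname":"\([^"]*\)".*/\1/p')"
port="$(printf '%s' "$VCAP_SERVICES" | sed -n 's/.*"port":\([0-9]*\).*/\1/p')"
echo "try connecting to ${host}:${port}"
```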

Quotas
Plan quota issues: Developers report errors such as:

Message: Service broker error: The quota for this service plan has been exceeded.
Please contact your Operator for help.

1. Check your current plan quota.

2. Increase the plan quota.

3. Log in to Tanzu Operations Manager.

4. Reconfigure the quota on the plan page.

5. Deploy the tile.

6. Find who is using the plan quota and take the appropriate action.

Global quota issues: Developers report errors such as:

Message: Service broker error: The quota for this service has been exceeded.
Please contact your Operator for help.

1. Check your current global quota.


2. Increase the global quota.

3. Log in to Tanzu Operations Manager.

4. Reconfigure the quota on the on-demand settings page.

5. Deploy the tile.

6. Find out who is using the quota and take the appropriate action.

Failing jobs and unhealthy instances


To find out if there is a problem with the Tanzu for MySQL deployment:

1. Inspect the VMs by running:

bosh -d service-instance_GUID vms --vitals

2. For additional information, run:

bosh -d service-instance_GUID instances --ps --vitals

If the VM is failing, follow the service-specific information. Any unadvised corrective actions (such as
running BOSH restart on a VM) can cause problems in the service instance.

A failing process or failing VM might come back automatically after a temporary service outage. See VM
process failure and VM failure in Resolving service interruptions.

AZ or region failure
Failures at the IaaS level, such as Availability Zone (AZ) or region failures, can interrupt service and require
manual restoration. See AZ failure and Region failure in Resolving service interruptions.

Techniques for troubleshooting


This section contains instructions for interacting with the on-demand service broker and on-demand service
instance BOSH deployments, and for performing general maintenance and housekeeping tasks.

Parse a Cloud Foundry error message


Failed operations (create, update, bind, unbind, delete) result in an error message. You can retrieve the error
message later by running the cf CLI command cf service INSTANCE-NAME.

$ cf service myservice

Service instance: myservice


Service: super-db
Bound apps:
Tags:
Plan: dedicated-vm
Description: Dedicated Instance
Documentation url:
Dashboard:

Last Operation
Status: create failed

Message: Instance provisioning failed: There was a problem completing your request.
Please contact your operations team providing the following information:
service: redis-acceptance,
service-instance-guid: ae9e232c-0bd5-4684-af27-1b08b0c70089,
broker-request-id: 63da3a35-24aa-4183-aec6-db8294506bac,
task-id: 442,
operation: create
Started: 2017-03-13T10:16:55Z
Updated: 2017-03-13T10:17:58Z

Use the information in the Message field to debug further. Provide this information to Support when filing a
ticket.

The task-id field maps to the BOSH task ID. For more information about a failed BOSH task, use the
bosh task TASK-ID command.

The broker-request-id maps to the portion of the On-Demand Broker log containing the failed step.
Access the broker log through your syslog aggregator, or access BOSH logs for the broker by typing bosh
logs broker 0. If you have more than one broker instance, repeat this process for each instance.
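The identifiers in the Message field can be pulled out with standard text tools before you run the follow-up commands. This is a hypothetical sketch; the message text mirrors the example above.

```shell
# Hypothetical sketch: pull task-id and broker-request-id out of a saved
# Message field so they can be fed to `bosh task` and a broker-log
# search. The message text mirrors the example above.
msg='service: redis-acceptance, service-instance-guid: ae9e232c-0bd5-4684-af27-1b08b0c70089, broker-request-id: 63da3a35-24aa-4183-aec6-db8294506bac, task-id: 442, operation: create'

task_id="$(printf '%s' "$msg" | sed -n 's/.*task-id: \([0-9]*\).*/\1/p')"
request_id="$(printf '%s' "$msg" | sed -n 's/.*broker-request-id: \([^,]*\),.*/\1/p')"

echo "next: bosh task $task_id"
echo "next: grep $request_id broker.stdout.log"
```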

Access broker and instance logs and VMs


Before following these procedures, log in to the cf CLI and the BOSH CLI.

Access Broker Logs and VMs: You can access logs using Tanzu Operations Manager by clicking
on the Logs tab in the tile and downloading the broker logs.

To access logs using the BOSH CLI, do the following:

1. Identify the on-demand broker (ODB) deployment by running:

bosh deployments

2. View VMs in the deployment by running:

bosh -d DEPLOYMENT-NAME instances

3. SSH onto the VM by running:

bosh -d DEPLOYMENT-NAME ssh

4. Download the broker logs by running:

bosh -d DEPLOYMENT-NAME logs

The archive generated by BOSH includes the following logs:

Log Name Description

broker.stdout.log Requests to the on-demand broker and the actions the broker performs while
orchestrating the request (e.g., generating a manifest and calling BOSH). Start here
when troubleshooting.

bpm.log Control script logs for starting and stopping the on-demand broker

post-start.stderr.log Errors that occur during post-start verification


post-start.stdout.log Post-start verification

drain.stderr.log Errors that occur while running the drain script

Access service instance logs and VMs:

1. To target an individual service instance deployment, retrieve the GUID of your service
instance with the following cf CLI command:

cf service MY-SERVICE --guid

2. To view VMs in the deployment, run:

bosh -d service-instance_GUID instances

3. To SSH into a VM, run:

bosh -d service-instance_GUID ssh

4. To download the instance logs, run:

bosh -d service-instance_GUID logs

Run service broker errands to manage brokers and instances


From the BOSH CLI, you can run service broker errands that manage the service brokers and perform
mass operations on the service instances that the brokers created. These service broker errands include:

register-broker: registers a broker with the Cloud Controller and lists it in the Marketplace.

deregister-broker: deregisters a broker with the Cloud Controller and removes it from the
Marketplace.

upgrade-all-service-instances: upgrades existing instances of a service to its latest installed version.

delete-all-service-instances: deletes all instances of the service.

orphan-deployments: detects “orphan” instances that are running on BOSH but not registered with
the Cloud Controller.

To run an errand, run the following command:

bosh -d DEPLOYMENT-NAME run-errand ERRAND-NAME

For example:

bosh -d my-deployment run-errand deregister-broker

For details of available errands and their operation, see Running service broker errands.

Reinstall a tile


To install Tanzu for MySQL, see Installing and configuring Tanzu for MySQL.

Plan 1 must be active in your tile configuration in order for the install to complete
successfully. Do not de-activate Plan 1 within the Tanzu for MySQL tile configuration
before applying changes.

View resource saturation and scaling


To view usage statistics for any service, do the following:

1. Run the following command:

bosh -d DEPLOYMENT-NAME vms --vitals

2. To view process-level information, run:

bosh -d DEPLOYMENT-NAME instances --ps

Identify apps using a service instance


To identify which apps are using a specific service instance from the name of the BOSH deployment:

1. Take the deployment name and strip the service-instance_ prefix. This leaves you with the GUID.

2. Log in to CF as an admin.

3. Get a list of all service bindings by running the following:

cf curl /v2/service_instances/GUID/service_bindings

4. The output from the curl gives you a list of resources, with each item referencing a service
binding, which contains the APP-URL. To find the name, org, and space for the app, run:

1. cf curl APP-URL and record the app name under entity.name.

2. cf curl SPACE-URL to obtain the space, using the entity.space_url from the curl.
Record the space name under entity.name.

3. cf curl ORGANIZATION-URL to obtain the org, using the entity.organization_url from


the curl. Record the organization name under entity.name.
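Step 1 above can be sketched as a shell parameter expansion. The deployment name is an invented example.

```shell
# A minimal sketch of step 1: strip the service-instance_ prefix from a
# BOSH deployment name to recover the service instance GUID. The
# deployment name is an invented example.
deployment="service-instance_82ddc607-710a-404e-b1b8-a7e3ea7ec063"
guid="${deployment#service-instance_}"
echo "$guid"
```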

When you run cf curl, ensure that you query all pages, because the responses are limited to a certain
number of bindings per page. The default is 50. To find the next page, curl the value under next_url.
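The paging described above can be sketched as a loop that follows next_url until it is null. This is a hypothetical example: cf is mocked with two canned pages so the loop is runnable as-is; on a real foundation, delete the mock and the same loop applies (a JSON-aware tool such as jq is preferable when available).

```shell
# Hypothetical sketch of paging through every service-bindings page.
# `cf` is mocked with two canned pages so the loop is runnable as-is;
# on a real foundation, delete the mock and the same loop applies.
cf() {
  case "$2" in
    /v2/page2) printf '{"next_url": null, "resources": ["b3"]}\n' ;;
    *)         printf '{"next_url": "/v2/page2", "resources": ["b1", "b2"]}\n' ;;
  esac
}

url="/v2/service_instances/GUID/service_bindings"
bindings=""
while [ -n "$url" ] && [ "$url" != "null" ]; do
  page="$(cf curl "$url")"
  bindings="$bindings $(printf '%s' "$page" | grep -o '"b[0-9]*"' | tr -d '"')"
  # next_url is a quoted path until the last page, where it is null
  url="$(printf '%s' "$page" | sed -n 's/.*"next_url": *"\([^"]*\)".*/\1/p')"
  if [ -z "$url" ]; then url="null"; fi
done
echo "bindings:$bindings"
```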

Monitor quota saturation and service instance count


Quota saturation and total number of service instances are available through ODB metrics emitted to
Loggregator. The metric names are shown in the following table:

Quota metrics are not emitted if no quota has been set.


Metric Name Description

on-demand-broker/SERVICE-NAME-MARKETPLACE/quota_remaining Global quota remaining for all instances across all plans

on-demand-broker/SERVICE-NAME-MARKETPLACE/PLAN-NAME/quota_remaining Quota remaining for a particular plan

on-demand-broker/SERVICE-NAME-MARKETPLACE/total_instances Total instances created across all plans

on-demand-broker/SERVICE-NAME-MARKETPLACE/PLAN-NAME/total_instances Total instances created for a given plan

Techniques for troubleshooting highly available clusters


If your cluster is experiencing downtime or is in a degraded state, VMware recommends gathering
information to diagnose the type of failure the cluster is experiencing using the following workflow:

1. Consult solutions for common errors. See Highly Available Cluster Troubleshooting Errors.

2. Use mysql-diag to view a summary of the network, disk, and replication state of each cluster
node. Depending on the output from mysql-diag, you might recover your cluster with the following
troubleshooting techniques:

To force a node to rejoin the cluster, see Force a node to rejoin a highly available cluster
manually.

To re-create a corrupted VM, see Recreate a corrupted VM in a highly available cluster.

To check if replication is working, see Check replication in a highly available cluster. For
more information about mysql-diag, see Running mysql-diag.

3. Run bosh logs, targeting each of the VMs in your Tanzu for MySQL cluster, proxies, and jumpbox
to retrieve the VM logs. You must run bosh logs before attempting recovery because any failures
in the recovery procedure can result in logs being lost or made inaccessible.
For more information, see the Downloading logs section.

4. If you are uncertain about the recovery steps to take, submit a ticket through Support. When you
submit a ticket, provide the following information:

mysql-diag output: A summary of the network, disk, and replication state. See Running
mysql-diag for how to run mysql-diag.

downloaded logs: Logs from your Tanzu for MySQL cluster, proxies, and jumpbox VM.
For more information, see Downloading logs.

Deployment environment: This is the environment that Tanzu for MySQL is running in
such as VMware Tanzu Platform for Cloud Foundry or a service tile.

Version numbers: The versions of the installed Tanzu Operations Manager, Tanzu
Platform for Cloud Foundry, and Tanzu for MySQL.

Do not attempt to resolve cluster issues by reconfiguring the cluster, such as changing the
number of nodes or networks. Use only the diagnosis steps in this document. If you are
unsure how to proceed, contact Support.


Force a node to rejoin a highly available cluster manually


If a detached node fails to rejoin the cluster after a configured grace period, you can manually force the
node to rejoin the cluster. This procedure removes all the data on the node, forces the node to join the
cluster, and creates a new copy of the cluster data on the node.

Before following this procedure, try to bootstrap the cluster. For more information, see Bootstrapping.

If you manually force a node to rejoin the cluster, data stored on the local node is lost. Do
not force nodes to rejoin the cluster if you want to preserve unsynchronized data. Only do
this procedure with the assistance of Support.

To manually force a node to rejoin the cluster, do the following:

1. SSH into the node by following the procedure in SSH into the BOSH Director VM.

2. Become root. Run:

sudo su

3. Shut down the mysqld process on the node by running:

monit stop galera-init

4. Remove the unsynchronized data on the node by running:

rm -rf /var/vcap/store/pxc-mysql

5. Prepare the node before restarting by running:

/var/vcap/jobs/pxc-mysql/bin/pre-start

6. Restart the mysqld process by running:

monit start galera-init

Recreate a corrupted VM in a highly available cluster


To re-create a corrupted VM:

1. To log in to the BOSH Director VM, follow these procedures:

1. Gather the information needed to log in to the BOSH Director VM. Follow the procedure in
Gather Credential and IP Address Information.

2. Log in to the Tanzu Operations Manager VM. Follow the procedure in Log in to the Tanzu
Operations Manager VM with SSH.

3. Log in to the BOSH Director VM. Follow the procedure in SSH Into the BOSH Director VM.

2. Identify and re-create the unresponsive node with bosh cloudcheck. Follow the procedure in BOSH
Cloud Check and run Recreate VM using last known apply spec.
Recreating a node clears the logs. Ensure that the node is completely down before recreating it.


Re-create only one node. Do not re-create the entire cluster. If more than one node is down, contact
Support.

Check replication status in a highly available cluster


If you see stale data in your cluster, you can check whether replication is functioning normally.

To check the replication status, do the following:

1. To log in to the BOSH Director VM, follow these procedures:

1. Gather the information needed to log in to the BOSH Director VM. Follow the procedure in
Gather credential and IP Address information.

2. Log in to the Tanzu Operations Manager VM. Follow the procedure in Log in to the Tanzu
Operations Manager VM with SSH.

2. Create a dummy database in the first node by running:

mysql -h FIRST-NODE-IP-ADDRESS \
-u YOUR-IDENTITY \
-p -e "create database verify_healthy;"

Where:

FIRST-NODE-IP-ADDRESS is the IP address of the first node that you recorded in step 1.

YOUR-IDENTITY is the value of identity that you recorded in step 1.

3. Create a dummy table in the dummy database by running:

mysql -h FIRST-NODE-IP-ADDRESS \
-u YOUR-IDENTITY \
-p -D verify_healthy \
-e "create table dummy_table (id int not null primary key auto_increment, info text) engine='innodb';"

4. Insert data into the dummy table by running:

mysql -h FIRST-NODE-IP-ADDRESS \
-u YOUR-IDENTITY \
-p -D verify_healthy \
-e "insert into dummy_table(info) values ('dummy data'),('more dummy data'),('even more dummy data');"

5. Query the table and verify that the three rows of dummy data exist on the first node by running:

mysql -h FIRST-NODE-IP-ADDRESS \
-u YOUR-IDENTITY \
-p -D verify_healthy \
-e "select * from dummy_table;"

When prompted for a password, provide the password value recorded in step 1. The command
returns output similar to the following:

+----+----------------------+
| id | info                 |
+----+----------------------+
| 4  | dummy data           |
| 7  | more dummy data      |
| 10 | even more dummy data |
+----+----------------------+

6. Verify that the other nodes contain the same dummy data by doing the following for each of the
remaining MySQL server IP addresses:

1. Query the dummy table by running:

mysql -h NEXT-NODE-IP-ADDRESS \
-u YOUR-IDENTITY \
-p -D verify_healthy \
-e "select * from dummy_table;"

When prompted for a password, provide the password value recorded in step 1.

2. Verify that the node contains the same three rows of dummy data as the other nodes by
running:

mysql -h NEXT-NODE-IP-ADDRESS \
-u YOUR-IDENTITY \
-p -D verify_healthy \
-e "select * from dummy_table;"

When prompted for a password, provide the password value recorded in step 1.

3. Verify that the previous command returns output similar to the following:

+----+----------------------+
| id | info |
+----+----------------------+
| 4 | dummy data |
| 7 | more dummy data |
| 10 | even more dummy data |
+----+----------------------+

7. The next step is determined by the results:

If each MySQL server instance does not return the same result: Before making any
changes to your deployment, contact Support.

If each MySQL server instance returns the same result: You can safely proceed to
scaling down your cluster to a single node.
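The per-node comparison above can be automated. This is a hypothetical sketch: mysql is mocked with canned output so the logic is runnable as-is; on a real cluster, remove the mock, set NODES to your MySQL server IP addresses, and pass your credentials as in the steps above.

```shell
# Hypothetical sketch: check that every node returns identical
# dummy_table rows. `mysql` is mocked with canned output so the logic
# runs as-is; on a real cluster, remove the mock and supply credentials.
mysql() {  # mock: every node answers with the same three rows
  printf 'dummy data\nmore dummy data\neven more dummy data\n'
}

NODES="10.0.8.10 10.0.8.11 10.0.8.12"   # invented example IPs
reference=""
consistent=yes
for node in $NODES; do
  rows="$(mysql -h "$node" -u YOUR-IDENTITY -D verify_healthy -e 'select info from dummy_table;')"
  if [ -z "$reference" ]; then
    reference="$rows"
  elif [ "$rows" != "$reference" ]; then
    consistent=no
  fi
done
echo "replication consistent: $consistent"
```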

Tools for Troubleshooting


The troubleshooting techniques use these tools.

Downloading logs
The following are steps to gather logs from your MySQL cluster nodes, MySQL proxies, and, with highly
available clusters, the jumpbox VM.

1. From Tanzu Operations Manager, open your BOSH Director tile > Credentials tab.

2. Click Bosh Commandline Credentials Link to Credential. A short plaintext file opens.

3. From the plaintext file, record the values listed:


BOSH_CLIENT

BOSH_CLIENT_SECRET

BOSH_CA_CERT

BOSH_ENVIRONMENT

4. From the BOSH CLI, run bosh deployments and record the name of the BOSH deployment that
deployed Tanzu for MySQL.

5. SSH into your Tanzu Operations Manager VM. For information about how to do this, see Gather
Credential and IP Address Information and SSH into Tanzu Operations Manager.

6. Set local environment variables to the same BOSH variable values that you recorded earlier,
including BOSH_DEPLOYMENT for the deployment name you recorded above. For example:

$ export BOSH_CLIENT=ops_manager \
BOSH_CLIENT_SECRET=a123bc-E_4Ke3fb-gImbl3xw4a7meW0rY \
BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate \
BOSH_ENVIRONMENT=10.0.0.5 \
BOSH_DEPLOYMENT=pivotal-mysql-14c4

If you connect to your BOSH director through a gateway, you also need to set
variables BOSH_GW_HOST, BOSH_GW_USER, and BOSH_GW_PRIVATE_KEY.

7. Use the bosh logs command to retrieve logs for any instances in your deployment that are named
database or prefixed with mysql (such as mysql-jumpbox).

The following lines show one way to perform this. For more information, see the bosh logs documentation.

$ tempdir="$(mktemp -d -t MYSQLLOGS-XXXXXX)"
echo Saving logfiles to "${tempdir}"
for node in $(bosh instances --column="Instance" | grep -E "(database|mysql.*)/"); do
  echo -e "\nDownloading logs for: ${node}"
  bosh logs --dir="${tempdir}" "${node}"
done
tar czf "${tempdir}.tar.gz" -C "${tempdir}" .
echo Bundled logfiles are in "${tempdir}.tar.gz"

8. Download the retrieved logfiles to your local laptop for inspection. Use bosh scp from your local
workstation to retrieve files on a BOSH VM. For more information, see the bosh scp
documentation.

mysql-diag
mysql-diag outputs the current status of a highly available (HA) MySQL cluster in Tanzu for MySQL and
suggests recovery actions if the cluster fails. For more information, see Running mysql-diag.

Knowledge Base (Community)


Find the answer to your question and browse product discussions and solutions by searching the Broadcom
Knowledge Base.

File a support ticket


You can file a ticket with Support. Be sure to provide the error message from cf service YOUR-SERVICE-INSTANCE.

To expedite troubleshooting, provide your service broker logs and your service instance logs. If your cf
service YOUR-SERVICE-INSTANCE output includes a task-id, provide the BOSH task output.

About data migration in VMware Tanzu for MySQL


This topic describes how to trigger a failover of apps from the leader to the follower.

Triggering a Leader-Follower failover


You can trigger a failover of apps from the leader to the follower in a VMware Tanzu for MySQL on Cloud
Foundry installation.

You might want to trigger a failover in the following scenarios:

You want to take the leader VM down to do planned maintenance.

The performance of the leader VM degrades.

The leader VM fails unexpectedly.

The AZ where the leader VM is located goes offline unexpectedly.

You can use the following metrics to determine if you need to trigger a failover:

/p.mysql/available: This metric monitors whether the MySQL server is currently available. For
more information, see Server availability.

/p.mysql/follower/seconds_behind_master: This metric monitors how far behind the follower is
in applying writes from the leader. For more information, see Leader-Follower metrics.

/p.mysql/follower/seconds_since_leader_heartbeat: This metric monitors the number of
seconds that elapse between the leader heartbeat and the replication of the heartbeat in the
follower. For more information, see Leader-Follower metrics.
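As a minimal sketch, a sampled seconds_behind_master value can be compared against a threshold before deciding whether to fail over. Both numbers below are invented examples; choose a threshold that matches your recovery objectives.

```shell
# A minimal sketch: compare a sampled seconds_behind_master value
# against a threshold before deciding to fail over. Both numbers are
# invented examples.
seconds_behind_master=120
threshold=60
if [ "$seconds_behind_master" -gt "$threshold" ]; then
  decision="investigate-lag"
else
  decision="ok"
fi
echo "decision: $decision"
```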

For information about errands used to trigger failover, see configure-leader-follower, make-leader, and make-
read-only.

To trigger a failover:

1. Retrieve information.

2. Promote the Follower.

3. Clean up former Leader VM (Optional).

4. Configure the new Follower.

5. Unbind and rebind the app.


Retrieve information
To retrieve the information necessary for stopping the leader and promoting the follower:

1. Log in to your deployment by running:

cf login API-URL

When prompted, enter your credentials.

2. Target the org and space where the leader-follower service instance is located by running:

cf target -o DESTINATION-ORG -s DESTINATION-SPACE

3. Find and record the GUID of the service instance. If you don’t know the name of the service
instance, you can list the service instances in the space by running cf services first.

cf service SERVICE-INSTANCE-NAME --guid

Where SERVICE-INSTANCE-NAME is the name of the leader-follower service instance.

For example:

$ cf service my-lf-instance --guid


82ddc607-710a-404e-b1b8-a7e3ea7ec063

4. SSH into the Tanzu Operations Manager VM. Follow the procedures in Gather credential and IP
Address information and SSH into Tanzu Operations Manager.

5. From the Tanzu Operations Manager VM, log in to your BOSH Director with the BOSH CLI. For
more information about logging in with the BOSH CLI, see Log in to the BOSH Director.

6. Use the BOSH CLI to run the inspect errand. Run:

bosh -d service-instance_GUID run-errand inspect

Where GUID is the GUID of the leader-follower service instance you recorded.
For example:

$ bosh -d service-instance_82ddc607-710a-404e-b1b8-a7e3ea7ec063 \
run-errand inspect

7. See the output about the leader-follower MySQL VMs and identify the instance marked Role:
leader.

For example output:

Instance mysql/ca0ed8b5-7590-4cde-bba8-7ca2935f2bd0
Exit Code 0
Stdout 2018/04/03 18:08:46 Started executing command: inspect
2018/04/03 18:08:46
IP Address: 10.0.8.11

Role: leader
Read Only: false
Replication Configured: false
Replication Mode: async
Has Data: true
GTID Executed: 82ddc607-710a-404e-b1b8-a7e3ea7ec063:1-18
2018/04/03 18:08:46 Successfully executed command: inspect
Stderr -

Instance mysql/37e4b6bc-2ed6-4bd2-84d1-e59a91f5e7f8
Exit Code 0
Stdout 2018/04/03 18:08:46 Started executing command: inspect
2018/04/03 18:08:46
IP Address: 10.0.8.10
Role: follower
Read Only: true
Replication Configured: true
Replication Mode: async
Has Data: true
GTID Executed: 82ddc607-710a-404e-b1b8-a7e3ea7ec063:1-18
2018/04/03 18:08:46 Successfully executed command: inspect

8. Record the index of the instance marked Role: leader. In this example output, the index of the
leader VM is ca0ed8b5-7590-4cde-bba8-7ca2935f2bd0.

9. Record the index of the other instance, which is the follower VM. In this example output, the index
of the follower VM is 37e4b6bc-2ed6-4bd2-84d1-e59a91f5e7f8.
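Steps 7 through 9 can be sketched as a small text search over saved inspect output. This is a hypothetical example; the text mirrors the example output above, trimmed to the relevant lines.

```shell
# Hypothetical sketch: pick out which instance the inspect errand marked
# "Role: leader". The text mirrors the example output above, trimmed to
# the relevant lines.
inspect_output='Instance mysql/ca0ed8b5-7590-4cde-bba8-7ca2935f2bd0
Role: leader
Instance mysql/37e4b6bc-2ed6-4bd2-84d1-e59a91f5e7f8
Role: follower'

# grep -B1 keeps the Instance line preceding the leader Role line;
# sed then strips the prefix, printing only the leader's index.
leader="$(printf '%s\n' "$inspect_output" | grep -B1 'Role: leader' | sed -n 's/^Instance mysql\///p')"
echo "leader instance: $leader"
```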

10. If you still have access to the AZ where the leader VM is located, find out if the leader VM is in the
AZ you want to take offline by running:

bosh -d service-instance_GUID instances

For example:

$ bosh -d service-instance_82ddc607-710a-404e-b1b8-a7e3ea7ec063 \
instances
Deployment 'service-instance_f378ec82-61a4-4e66-8ed9-889c7cf5342f'

Instance                                    Process State  AZ             IPs
mysql/ca0ed8b5-7590-4cde-bba8-7ca2935f2bd0  failing        us-central1-f  10.0.8.11
mysql/37e4b6bc-2ed6-4bd2-84d1-e59a91f5e7f8  running        us-central1-a  10.0.8.10

2 instances

The leader VM might not display its status as failing if you are performing planned maintenance.

Promote the Follower


To stop the leader VM and promote the follower VM to the new leader:

1. Stop any data from being written to the leader VM by setting it to read-only:

bosh -d service-instance_GUID \
run-errand make-read-only \
--instance=mysql/INDEX

Where:

GUID: This is the GUID of the leader-follower service instance retrieved above.

INDEX: This is the index of the leader VM retrieved above.

For example:

$ bosh -d service-instance_82ddc607-710a-404e-b1b8-a7e3ea7ec063 \
run-errand make-read-only \
--instance=mysql/ca0ed8b5-7590-4cde-bba8-7ca2935f2bd0

2. If you still have access to the AZ where the leader VM is located, stop the leader VM:

bosh -d service-instance_GUID stop mysql/INDEX

Use the index of the leader VM retrieved above.


For example:

$ bosh -d service-instance_82ddc607-710a-404e-b1b8-a7e3ea7ec063 \
stop mysql/ca0ed8b5-7590-4cde-bba8-7ca2935f2bd0

3. Set the follower VM as writable by running:

bosh -d service-instance_GUID run-errand make-leader --instance=mysql/INDEX

Use the index of the follower VM retrieved above.

For example:

$ bosh -d service-instance_82ddc607-710a-404e-b1b8-a7e3ea7ec063 \
run-errand make-leader \
--instance=mysql/37e4b6bc-2ed6-4bd2-84d1-e59a91f5e7f8

4. If the run-errand make-leader command returns an error, re-run it until the follower VM has
finished applying the transactions.

At this point, a single instance is working, but leader-follower replication has not yet been restored.

To fail your app over to a single instance instead of restoring leader-follower, skip to Unbind and
Rebind the App.


If you are triggering a failover in response to the AZ of the leader VM going offline, you can fail your
app over to a single instance by following the procedure in Unbind and Rebind the App.

To restore leader-follower, you must regain access to the AZ where your leader VM is located. Then
follow the procedures in:

Clean up former Leader VM (Optional) and

Configure the New Follower

Clean up former Leader VM (Optional)


If you are triggering a failover in response to a failing leader VM, to clean up the former leader VM:

1. Deactivate BOSH resurrection by running:

bosh update-resurrection off

2. Retrieve the CID of the failing former leader VM by running:

bosh -d service-instance_GUID instances \
--details \
--failing \
--column="VM CID" \
--json

For example:

$ bosh -d service-instance_82ddc607-710a-404e-b1b8-a7e3ea7ec063 instances \
--details \
--failing \
--column="VM CID" \
--json

3. Retrieve the disk CID of the failing former leader VM by running:

bosh -d service-instance_GUID instances \
--details \
--failing \
--column="Disk CIDs" \
--json

For example:

$ bosh -d service-instance_82ddc607-710a-404e-b1b8-a7e3ea7ec063 instances \
--details \
--failing \
--column="Disk CIDs" \
--json

4. Delete the failing former leader VM by running:

bosh -d service-instance_GUID delete-vm VM-CID

Where:

GUID: This is the GUID of the leader-follower service instance retrieved above.

VM-CID: This is the CID of the failing former leader VM retrieved above.

For example:

$ bosh -d service-instance_82ddc607-710a-404e-b1b8-a7e3ea7ec063 \
delete-vm i-1db9ede6

5. Orphan the disk of the failing former leader VM:

bosh -d service-instance_GUID orphan-disk DISK-CID

Where:

GUID: This is the GUID of the leader-follower service instance retrieved above.

DISK-CID: This is the disk CID of the failing former leader VM retrieved above.

For example:

$ bosh -d service-instance_82ddc607-710a-404e-b1b8-a7e3ea7ec063 \
orphan-disk b-1db9ede6

Orphaning a disk rather than deleting it preserves the disk for possible recovery. After performing
recovery operations, you can reattach the disk to a VM. BOSH deletes orphaned disks after five
days by default.

Configure the new Follower


To start the former leader VM again and configure it as the new follower:

1. Create the former leader VM by running:

bosh -d service-instance_GUID \
recreate \
mysql/INDEX

Where:

GUID: This is the GUID of the leader-follower service instance retrieved above.

INDEX: This is the index of the former leader VM that you are re-creating.

For example:

$ bosh -d service-instance_82ddc607-710a-404e-b1b8-a7e3ea7ec063 \
recreate \
mysql/ca0ed8b5-7590-4cde-bba8-7ca2935f2bd0

2. Set the former leader VM as a follower using the same values as previously shown:

bosh -d service-instance_GUID \
run-errand configure-leader-follower \
--instance=mysql/INDEX

For example:

$ bosh -d service-instance_82ddc607-710a-404e-b1b8-a7e3ea7ec063 \
run-errand configure-leader-follower \
--instance=mysql/ca0ed8b5-7590-4cde-bba8-7ca2935f2bd0

3. Use the BOSH CLI to run the inspect errand, using the same value as previously shown.
If the output displays one instance marked Role: leader and another instance marked Role:
follower, then leader-follower replication and high availability are resumed. The deployment should
be in its original, working state. You can turn resurrection back on if you want to.

bosh -d service-instance_GUID \
run-errand inspect

For example:

$ bosh -d service-instance_82ddc607-710a-404e-b1b8-a7e3ea7ec063 \
run-errand inspect

Unbind and rebind the app


To fail their apps over to the new leader VM, your developers must unbind and rebind their apps to the
leader-follower service instance:

If you have BOSH DNS enabled in Tanzu Operations Manager, you do not need to unbind
and rebind your app to a leader-follower service instance to fail over the app. The operator
activates BOSH DNS in BOSH Director > BOSH DNS Config.

If a developer rebinds an app to the TCF-MySQL service after unbinding, they must also
rebind any existing custom schemas to the app. When you rebind an app, stored code,
programs, and triggers break. For more information about binding custom schemas, see
Use custom schemas.

To unbind and rebind your app:

1. Unbind the app from the leader-follower service instance by running:

cf unbind-service APP-NAME SERVICE-INSTANCE-NAME

Where:

APP-NAME: This is the name of the app bound to the leader-follower service instance.

SERVICE-INSTANCE-NAME: This is the name of the leader-follower service instance.

2. Rebind the app to the leader-follower service instance by running:

cf bind-service APP-NAME SERVICE-INSTANCE-NAME

3. Restage the app by running:

cf restage APP-NAME
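The three steps above can be sketched as a small dry-run script. This is a hypothetical helper, not part of the product; the app and service names are placeholders:

```shell
#!/bin/sh
# Hypothetical dry-run of the unbind/rebind/restage failover steps.
# Prints the cf commands instead of executing them, so the plan can be
# reviewed before running it against a real foundation.
print_failover_plan() {
  app=$1
  svc=$2
  echo "cf unbind-service $app $svc"
  echo "cf bind-service $app $svc"
  echo "cf restage $app"
}

# Example invocation with placeholder names:
print_failover_plan my-app my-lf-instance
```

Pipe the output to a shell only after confirming the app and service instance names.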

VMware Tanzu for MySQL Clusters HA Procedures


These topics describe procedures for High Availability (HA) clusters.

Bootstrapping

Running mysql-diag

About the replication canary

Bootstrapping
Learn how to bootstrap your MySQL cluster in the event of a cluster failure.

You can bootstrap your cluster by using one of these methods:

Run the bootstrap errand. See Run the Bootstrap errand.

Bootstrap manually. See Bootstrap manually.

When to bootstrap
You must bootstrap a cluster that loses quorum. A cluster loses quorum when fewer than half the nodes can
communicate with each other for longer than the configured grace period. If a cluster does not lose quorum,
individual unhealthy nodes automatically rejoin the cluster after resolving the error, restarting the node, or
restoring connectivity.
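The quorum rule above amounts to a strict-majority check, sketched here as a hypothetical helper (Galera node weights are ignored for simplicity):

```shell
#!/bin/sh
# Returns success (0) when VISIBLE nodes form a strict majority of TOTAL
# nodes, that is, the component retains quorum; failure otherwise.
has_quorum() {
  visible=$1
  total=$2
  # Strict majority without floating point: 2 * visible > total.
  [ $((2 * visible)) -gt "$total" ]
}

# On a three-node cluster, two communicating nodes keep quorum:
has_quorum 2 3 && echo "2 of 3: quorum retained"
# One node alone does not, so a bootstrap would be required:
has_quorum 1 3 || echo "1 of 3: quorum lost"
```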

Bootstrap only if your cluster has lost quorum, as determined by the following steps. If any
of the following steps show that your cluster has retained quorum, then do not bootstrap.
For example, if the proxy dashboard shows one or two nodes as "Healthy," then your
cluster has retained quorum. In such cases, instead of bootstrapping, consider manually
rejoining a node to a cluster, taking other steps to diagnose and restore your working
cluster, or contacting Broadcom Support for assistance.

To discover if your cluster has lost quorum, look for the following symptoms:

All nodes appear “Unhealthy” on the proxy dashboard. The proxy dashboard is viewable at proxy-
BOSH-JOB-INDEX.p-mysql.YOUR-SYSTEM-DOMAIN.

All responsive nodes report the value of wsrep_cluster_status as non-Primary:

mysql> SHOW STATUS LIKE 'wsrep_cluster_status';
+----------------------+-------------+
| Variable_name        | Value       |
+----------------------+-------------+
| wsrep_cluster_status | non-Primary |
+----------------------+-------------+

All unresponsive nodes respond with ERROR 1047 when using most statement types in the MySQL
client:

mysql> select * from mysql.user;
ERROR 1047 (08S01) at line 1: WSREP has not yet prepared node for application use
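Triaging the symptoms above can be scripted. This sketch (a hypothetical helper) classifies a node from the wsrep_cluster_status value it reports:

```shell
#!/bin/sh
# Classify a node from its reported wsrep_cluster_status value.
# Anything other than Primary/non-Primary (including an empty string for
# a node that refuses connections) is treated as unreachable.
node_state() {
  case "$1" in
    Primary)     echo "healthy" ;;
    non-Primary) echo "quorum-lost" ;;
    *)           echo "unreachable" ;;
  esac
}

node_state Primary       # healthy
node_state non-Primary   # quorum-lost
```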

Run the Bootstrap errand


VMware Tanzu for MySQL on Cloud Foundry includes a BOSH errand that automates the manual
bootstrapping procedure in the Bootstrap manually section below.

It finds the node with the highest transaction sequence number and asks it to start up in bootstrap mode
and then asks the remaining nodes to join the cluster.

In most cases, running the errand recovers your cluster, but certain scenarios require additional steps.

Discover type of cluster failure


To find out which set of instructions to follow:

1. List your MySQL instances by running:

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT instances

Where:

YOUR-ENV is the environment where you deployed the cluster.

YOUR-DEPLOYMENT is the deployment cluster name.

For example:

$ bosh -e prod -d mysql instances

2. Find and record the Process State for your MySQL instances. In the following example output, the
MySQL instances are in the failing process state.

Instance                                                             Process State  AZ             IPs
backup-restore/c635410e-917d-46aa-b054-86d222b6d1c0                  running        us-central1-b  10.0.4.47
bootstrap/a31af4ff-e1df-4ff1-a781-abc3c6320ed4                       -              us-central1-b  -
broker-registrar/1a93e53d-af7c-4308-85d4-3b2b80d504e4                -              us-central1-b  10.0.4.58
cf-mysql-broker/137d52b8-a1b0-41f3-847f-c44f51f87728                 running        us-central1-c  10.0.4.57
cf-mysql-broker/28b463b1-cc12-42bf-b34b-82ca7c417c41                 running        us-central1-b  10.0.4.56
deregister-and-purge-instances/4cb93432-4d90-4f1d-8152-d0c238fa5aab  -              us-central1-b  -
monitoring/f7117dcb-1c22-495e-a99e-cf2add90dea9                      running        us-central1-b  10.0.4.48
mysql/220fe72a-9026-4e2e-9fe3-1f5c0b6bf09b                           failing        us-central1-b  10.0.4.44
mysql/28a210ac-cb98-4ab4-9672-9f4c661c57b8                           failing        us-central1-f  10.0.4.46
mysql/c1639373-26a2-44ce-85db-c9fe5a42964b                           failing        us-central1-c  10.0.4.45
proxy/87c5683d-12f5-426c-b925-62521529f64a                           running        us-central1-b  10.0.4.60
proxy/b0115ccd-7973-42d3-b6de-edb5ae53c63e                           running        us-central1-c  10.0.4.61
rejoin-unsafe/8ce9370a-e86b-4638-bf76-e103f858413f                   -              us-central1-b  -
smoke-tests/e026aaef-efd9-4644-8d14-0811cb1ba733                     -              us-central1-b  10.0.4.59

3. Choose your scenario:

If your MySQL instances are in the failing state, follow the steps in Scenario 1.

If your MySQL instances are in the - state, follow the steps in Scenario 2.

If your MySQL instances are in the stopped state, follow the steps in Scenario 3.
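Picking the scenario can be partly automated by filtering captured bosh instances output. This sketch assumes the default four-column table (Instance, Process State, AZ, IPs), one instance per line:

```shell
#!/bin/sh
# Print the names of mysql instances whose process state is "failing".
failing_mysql_instances() {
  awk '$1 ~ /^mysql\// && $2 == "failing" { print $1 }'
}

# Demo against captured sample output rather than a live BOSH director:
failing_mysql_instances <<'EOF'
mysql/220fe72a-9026-4e2e-9fe3-1f5c0b6bf09b  failing  us-central1-b  10.0.4.44
mysql/28a210ac-cb98-4ab4-9672-9f4c661c57b8  failing  us-central1-f  10.0.4.46
proxy/87c5683d-12f5-426c-b925-62521529f64a  running  us-central1-b  10.0.4.60
EOF
```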

Scenario 1: VMs running, cluster disrupted

In this scenario, the VMs are running, but the cluster has been disrupted.

To bootstrap in this scenario:

1. Run the bootstrap errand on the VM where the bootstrap errand is co-located by running:

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT run-errand bootstrap

Where:

YOUR-ENV is the name of your environment.

YOUR-DEPLOYMENT is the name of your deployment.

The errand runs for a long time, during which no output is returned.

The command returns many lines of output, eventually followed by:

Bootstrap errand completed


[stderr]
+ echo 'Started bootstrap errand ...'
+ JOB_DIR=/var/vcap/jobs/bootstrap
+ CONFIG_PATH=/var/vcap/jobs/bootstrap/config/config.yml
+ /var/vcap/packages/bootstrap/bin/cf-mysql-bootstrap -configPath=/var/vcap/jobs/bootstrap/config/config.yml
+ echo 'Bootstrap errand completed'
+ exit 0
Errand 'bootstrap' completed successfully (exit code 0)

2. If the errand fails, run the bootstrap errand command again after a few minutes. The bootstrap
errand doesn’t always work the first time.

3. If the errand continues to fail after several tries, bootstrap your cluster manually. See Bootstrap
manually below.

Scenario 2: VMs terminated or lost


In severe circumstances, such as a power failure, it is possible to lose all your VMs. You must re-create
them before you can begin to recover the cluster.

When MySQL instances are in the - state, the VMs are lost. The procedures in this scenario bring the
instances from a - state to a failing state. Then you run the bootstrap errand, as described in Scenario 1,
and restore the configuration.

To recover terminated or lost VMs, follow the procedures in the sections below:

1. Re-create the missing VMs: Bring MySQL instances from a - state to a failing state.

2. Run the Bootstrap errand: Because your instances are now in the failing state, continue to
Scenario 1.

3. Restore the BOSH configuration: Go back to unignoring all instances and redeploy. This is a critical
and mandatory step.

If you do not unignore each of your ignored instances, those instances are not
updated in future deploys. You must follow the procedure in the final section of Scenario 2,
Restore the BOSH Configuration.

Re-create the missing VMs

The procedure in this section uses BOSH to re-create the VMs, install software on them, and try to start the
jobs. The following procedure allows you to:

Redeploy your cluster (while expecting the jobs to fail).

Instruct BOSH to ignore the state of each instance in your cluster. This allows BOSH to deploy the
software to each instance even if the instance is failing.

To re-create your missing VMs:

1. If BOSH Resurrector is activated, deactivate it by running:

bosh -e YOUR-ENV update-resurrection off

Where YOUR-ENV is the name of your environment.

2. Download the current manifest by running:

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT manifest > /tmp/manifest.yml

Where:

YOUR-ENV is the name of your environment.

YOUR-DEPLOYMENT is the name of your deployment.

3. Redeploy your deployment by running:

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT deploy /tmp/manifest.yml

Where:

YOUR-ENV is the name of your environment.

YOUR-DEPLOYMENT is the name of your deployment.

Expect one of the MySQL VMs to fail. Deploying causes BOSH to create new
VMs and install the software. Forming a cluster is done in a subsequent step.

4. View the instance GUID of the MySQL VM that attempted to start. Record the instance GUID,
which is the string after mysql/ in the BOSH instances output.

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT instances

Where:

YOUR-ENV is the name of your environment.

YOUR-DEPLOYMENT is the name of your deployment.

5. Instruct BOSH to ignore the MySQL VM that just attempted to start, by running:

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT ignore mysql/INSTANCE-GUID

Where:

YOUR-ENV is the name of your environment.

YOUR-DEPLOYMENT is the name of your deployment.

INSTANCE-GUID is the GUID of the instance you recorded in the previous step.

6. Repeat steps 3 through 5 until all instances have attempted to start.

7. If you deactivated the BOSH Resurrector in step 1, re-enable it by running:

bosh -e YOUR-ENV update-resurrection on

Where YOUR-ENV is the name of your environment.

8. Confirm that your MySQL instances have gone from the - state to the failing state by running:

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT instances

Where:

YOUR-ENV is the name of your environment.

YOUR-DEPLOYMENT is the name of your deployment.
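The deploy-then-ignore loop in steps 3 through 6 can be summarized as a dry-run plan. This is a hypothetical sketch that only prints the commands for review:

```shell
#!/bin/sh
# Print one deploy + ignore round per lost mysql instance GUID.
# ENV and DEPLOYMENT are the usual bosh targeting arguments.
print_recreate_plan() {
  env=$1
  dep=$2
  shift 2
  for guid in "$@"; do
    echo "bosh -e $env -d $dep deploy /tmp/manifest.yml"
    echo "bosh -e $env -d $dep ignore mysql/$guid"
  done
}

# Example with two placeholder instance GUIDs:
print_recreate_plan prod mysql guid-1 guid-2
```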

Run the Bootstrap errand

After you re-create the VMs, all instances have a failing process state and have the MySQL software
installed. You must run the bootstrap errand to recover the cluster.

To bootstrap:

1. Run the bootstrap errand by running:

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT run-errand bootstrap

Where:

YOUR-ENV is the name of your environment.

YOUR-DEPLOYMENT is the name of your deployment.

The errand runs for a long time, during which no output is returned.

The command returns many lines of output, eventually finishing with the following successful
output:

Bootstrap errand completed


[stderr]

+ echo 'Started bootstrap errand ...'
+ JOB_DIR=/var/vcap/jobs/bootstrap
+ CONFIG_PATH=/var/vcap/jobs/bootstrap/config/config.yml
+ /var/vcap/packages/bootstrap/bin/cf-mysql-bootstrap -configPath=/var/vcap/jobs/bootstrap/config/config.yml
+ echo 'Bootstrap errand completed'
+ exit 0
Errand 'bootstrap' completed successfully (exit code 0)

2. If the errand fails, run the bootstrap errand command again after a few minutes. The bootstrap
errand might not work immediately.

3. See that the errand completes successfully in the shell output and continue to Restore the BOSH
configuration below. Note that you might still see instances in the failing state. Continue to the
next section anyway.

Restore the BOSH configuration

To restore your BOSH configuration to its previous state, unignore each instance that was
previously ignored:

If you do not unignore each of your ignored instances, they are never
updated in future deploys.

1. For each ignored instance, run:

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT unignore mysql/INSTANCE-GUID

Where:

YOUR-ENV is the name of your environment.

YOUR-DEPLOYMENT is the name of your deployment.

INSTANCE-GUID is the GUID of the instance to unignore.

2. Redeploy your deployment by running:

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT deploy

Where:

YOUR-ENV is the name of your environment.

YOUR-DEPLOYMENT is the name of your deployment.

3. Verify that all mysql instances are in a running state by running:

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT instances

Where:

YOUR-ENV is the name of your environment.

YOUR-DEPLOYMENT is the name of your deployment.

Bootstrap manually
If the bootstrap errand cannot recover the cluster automatically, you must do the steps manually.

Follow the procedures in the sections below to manually bootstrap your cluster.

The following procedures are prone to user error and can cause lost data if followed
incorrectly. Follow the procedure in Bootstrap with the BOSH Errand first, and resort to the
manual process only if the errand fails to repair the cluster.

Shut down MySQL


Follow these steps to stop the galera-init process for each node in the cluster. For each node, record if
the monit stop command was successful:

1. SSH into the node using the procedure in Advanced troubleshooting with the BOSH CLI.

2. To shut down the mysqld process on the node, run:

monit stop galera-init

3. Record the result of the monit command:

If monit succeeds in stopping galera-init, then you can use monit to restart this node.
Follow all the steps below, including the steps marked Monit Restart but omitting the steps
marked Manual Redeploy.

If monit exits with the following error, then you must manually deploy this node:

Warning: include files not found '/var/vcap/monit/job/*.monitrc'
monit: action failed -- There is no service by that name

Follow all the steps below including the steps marked Manual Redeploy but omitting the
steps marked Monit Restart.

Do not proceed with Manual Redeploy if you have lost quorum while
upgrading your Tanzu Platform for Cloud Foundry or service instance
deployment (using Tanzu Operations Manager Apply Changes, cf
update-service, or bosh deploy operations). Manual Redeploy requires
a stable manifest, which is used to predictably recreate damaged VMs.
Upgrades introduce a second manifest, which can trigger unpredictable
outcomes if you proceed with the following Manual Redeploy steps.
If you have lost both HA quorum and monit capability on your VMs while
upgrading your database, contact Broadcom Support for assistance.

4. Repeat the preceding steps for each node in the cluster.

You cannot bootstrap the cluster unless you have shut down the mysqld process on all
nodes in the cluster.
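The bookkeeping in step 3 reduces to recording one bit per node. As a sketch, the exit status of monit stop galera-init can be mapped to the recovery path to use later (hypothetical helper):

```shell
#!/bin/sh
# Map the exit status of `monit stop galera-init` to the recovery path:
# 0 means monit stopped the job cleanly (Monit Restart path); any other
# status, such as "There is no service by that name", means the node
# needs the Manual Redeploy path.
recovery_path() {
  if [ "$1" -eq 0 ]; then
    echo "monit-restart"
  else
    echo "manual-redeploy"
  fi
}

recovery_path 0   # monit-restart
recovery_path 1   # manual-redeploy
```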

Verify which node to bootstrap


To identify which node to bootstrap, you must find the node with the highest transaction sequence_number.
The node with the highest sequence number is the one most recently updated.

To identify the node to bootstrap:

1. SSH into the node using the procedure in Advanced troubleshooting with the BOSH CLI.

2. View the sequence number for a node by running:

/var/vcap/jobs/pxc-mysql/bin/get-sequence-number

When prompted to confirm that you want to stop MySQL, record the value of
sequence_number. For example:

$ /var/vcap/jobs/pxc-mysql/bin/get-sequence-number
This script stops mysql. Are you sure? (y/n): y

{"sequence_number":421,"instance_id":"012abcde-f34g-567h-ijk8-9123l4567891"}

3. Repeat these steps for each node in the cluster.

4. After you verify the sequence_number for all nodes in your cluster, identify the node with the
highest sequence_number. If all nodes have the same sequence_number, you can choose any
node as the new bootstrap node.
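Extracting the sequence number from the JSON shown above can be done with a one-line filter. This sketch assumes the compact single-line output format:

```shell
#!/bin/sh
# Pull the numeric "sequence_number" field out of get-sequence-number's
# single-line JSON output (handles -1 for crashed or running nodes).
extract_seqno() {
  sed -n 's/.*"sequence_number":\([0-9-]*\).*/\1/p'
}

echo '{"sequence_number":421,"instance_id":"012abcde"}' | extract_seqno   # 421
```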

To avoid losing data, you must bootstrap from a node in the cluster that has the highest transaction
sequence number (seqno).

If /var/vcap/jobs/pxc-mysql/bin/get-sequence-number returns a failure, use the following procedure to
retrieve the seqno value.

$ /var/vcap/jobs/pxc-mysql/bin/get-sequence-number
Failure obtaining sequence number!
Result was: [ ]

1. For each node in the cluster, find the seqno value:

1. Use SSH to log in to the node, following the procedure in the Tanzu Operations Manager
documentation.

2. Find seqno values in the node’s Galera state file, grastate.dat, by running:

cat /var/vcap/store/pxc-mysql/grastate.dat | grep 'seqno:'

3. If the last and highest seqno value in the output is a positive integer, the node shut
down gracefully. Record this number.

4. If the last and highest seqno value in the output is -1, the node crashed or was
stopped, or it is still running.

The sequence number is not an indication of the current state of the
cluster. Running nodes show a sequence number (seqno) of -1. A positive
sequence number is assigned only during a successful shutdown. Use
mysql-diag to diagnose problems on a running cluster.

To recover the seqno value from the database:

1. Temporarily start the database and append the last sequence number to its error
log by running:

$ /var/vcap/jobs/pxc-mysql/bin/get-sequence-number

2. The output of the get-sequence-number utility looks similar to the following:

{ "cluster_uuid": "1f594c30-a709-11ed-a00e-5330bbda96d3", "seqno": 4237, "instance_id": "4213a73e-069f-4ac7-b01b-43068ab312b6" }

The seqno in the output is 4237.

3. If the node never connected to the cluster before crashing, there might not be a
group ID assigned. In this case, there is nothing to recover. Do not choose this
node for bootstrapping unless all of the other nodes also crashed.

2. After retrieving the seqno values for all nodes in the cluster, identify the node with the highest
seqno value. If multiple nodes share the same highest seqno value, and it is not -1, you can
bootstrap from any of those nodes.
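Once you have a seqno per node, choosing the bootstrap candidate is a max scan. The following is a hypothetical sketch; it expects one "NODE SEQNO" pair per line and skips the -1 entries:

```shell
#!/bin/sh
# Pick the node with the highest recorded seqno; nodes reporting -1
# (crashed, killed, or still running) are excluded.
pick_bootstrap_node() {
  awk '$2 != -1 && $2 > max { max = $2; node = $1 } END { if (node) print node }'
}

pick_bootstrap_node <<'EOF'
mysql/0 421
mysql/1 -1
mysql/2 430
EOF
```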

Bootstrap the first node


After discovering the node with the highest sequence_number, complete the following procedure to
bootstrap the node:

Run these bootstrap commands only on the node that you previously identified in Verify
which node to bootstrap. Non-bootstrapped nodes abandon their data during the
bootstrapping process. Therefore, bootstrapping off the wrong node causes data loss. For
information about intentionally abandoning data, see the architecture sections in the
VMware Tanzu for MySQL documentation.

1. SSH into the node using the procedure in Advanced troubleshooting with BOSH CLI.

2. Update the node state to trigger its initialization of the cluster by running:

echo -n "NEEDS_BOOTSTRAP" > /var/vcap/store/pxc-mysql/state.txt

3. Monit Restart: If when doing Shut down MySQL you successfully used monit to shut down your
galera-init process, then re-launch the mysqld process on the new bootstrap node.

1. Start the mysqld process by running:

monit start galera-init

2. It can take up to ten minutes for monit to start the mysqld process. To confirm if the
mysqld process has started, run:

watch monit summary

If monit succeeds in starting the galera-init process, then the output includes the line
Process 'galera-init' running.

4. Manual Redeploy: If when doing Shut down MySQL you encountered monit errors, then redeploy
the mysqld software to your bootstrap node as follows:

1. Leave the MySQL SSH login shell and return to your local environment.

2. Target BOSH on your bootstrap node by instructing it to ignore the other nodes in your
cluster. For all nodes except the bootstrap node you identified earlier, run:

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT ignore mysql/M


bosh -e YOUR-ENV -d YOUR-DEPLOYMENT ignore mysql/N

Where N and M are the numbers of the non-bootstrapped nodes. For example, if you
bootstrap node 0, then M=1 and N=2.

3. Turn off the BOSH Resurrector by running:

bosh update-resurrection off

4. Use the BOSH manifest to bootstrap your bootstrap machine by running:

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT manifest > /tmp/manifest.yml


bosh -e YOUR-ENV -d YOUR-DEPLOYMENT deploy /tmp/manifest.yml

Restart remaining nodes


After the bootstrapped node is running, restart the remaining nodes.

The procedure to follow for restarting a node depends on the output you got for that node when doing Shut
down MySQL. Do one of the following procedures:

Monit restart

If in Shut down MySQL, you successfully used monit to shut down your galera-init process, then restart
the nodes as follows:

1. SSH into the node using the procedure in Advanced troubleshooting with BOSH CLI.

2. Start the mysqld process with monit by running:

monit start galera-init

If the Interruptor (see the MySQL documentation) prevents the node from starting, follow the
manual procedure to force the node to rejoin the cluster. See Manually force a MySQL node to
rejoin if a node cannot rejoin the HA cluster.

Forcing a node to rejoin the cluster is a destructive procedure. Follow the
procedure only with the help of Broadcom Support.

3. If the monit start command fails, it might be because the node with the highest
sequence_number is mysql/0.

In this case:

1. Ensure that BOSH ignores updating mysql/0 by running:

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT ignore mysql/0

Where:

YOUR-ENV is the name of your environment.

YOUR-DEPLOYMENT is the name of your deployment.

2. Navigate to Tanzu Operations Manager in a browser, log in, and click Apply Changes.

3. When the deploy finishes, run the following command from the Tanzu Operations Manager
VM:

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT unignore mysql/0

Where:

YOUR-ENV is the name of your environment.

YOUR-DEPLOYMENT is the name of your deployment.

Manual redeploy

If in doing Shut down MySQL, you encountered monit errors, then restart the nodes as follows:

1. Instruct BOSH to no longer ignore the non-bootstrap nodes in your cluster by running:

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT unignore mysql/M


bosh -e YOUR-ENV -d YOUR-DEPLOYMENT unignore mysql/N

Where N and M are the numbers of the non-bootstrapped nodes. For example, if you bootstrap node
0, then M=1 and N=2.

2. Redeploy software to the other two nodes and have them rejoin the cluster, bootstrapped from the
node above by running:

bosh -e YOUR-ENV -d YOUR-DEPLOYMENT deploy /tmp/manifest.yml

You only need to run this command once to deploy both the nodes that you unignored in the earlier
step.

3. With your redeploys completed, turn the BOSH Resurrector back on by running:

bosh -e YOUR-ENV update-resurrection on

Verify that the nodes have joined the cluster


The final task is to verify that all the nodes have joined the cluster.

1. SSH into the bootstrap node.

2. Run the following command to output the total number of nodes in the cluster:

mysql> SHOW STATUS LIKE 'wsrep_cluster_size';
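The size check can be wrapped so that it fails loudly when a node is missing. A sketch, assuming the tab-separated batch output of the mysql client:

```shell
#!/bin/sh
# Succeed only when the reported wsrep_cluster_size equals the expected
# node count; reads "Variable_name<TAB>Value" rows on stdin.
cluster_size_ok() {
  expected=$1
  size=$(awk '$1 == "wsrep_cluster_size" { print $2 }')
  [ "$size" = "$expected" ]
}

# Demo with canned output instead of a live node:
printf 'wsrep_cluster_size\t3\n' | cluster_size_ok 3 && echo "all 3 nodes joined"
```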

Manually force a MySQL node to rejoin if a node cannot rejoin the HA cluster

When a MySQL node cannot rejoin the HA cluster automatically, a Tanzu Platform for Cloud Foundry
operator must manually force the node to rejoin. The following procedure applies to Tanzu Platform for CF
for MySQL and VMware Tanzu for MySQL tile HA clusters. This section describes the case where a single
node is joined to a cluster. For bootstrapping, see the Tanzu Platform for CF or MySQL tile documentation.

When a node cannot rejoin the HA (Percona XtraDB Cluster - PXC) cluster automatically, an operator needs
to manually force a MySQL node to rejoin.

If your HA cluster is experiencing downtime or is in a degraded state, VMware recommends first running the
mysql-diag tool to gather information about the current state of the cluster. This tool reports either a
healthy cluster with (typically) 3 running nodes, a cluster in quorum with two running nodes and a third node
needing to re-join, or a cluster that has lost quorum and requires a bootstrap. The mysql-diag tool is
available on the mysql_monitor instance for a Tanzu Platform for CF internal cluster or on the
mysql_jumpbox instance for a MySQL tile HA service instance.

This procedure removes all the data from a server node and forces it to join the cluster, receiving a current
copy of the data from one of the other nodes already in the cluster. The steps are slightly different, based
on which MySQL cluster this is for.

Do not do this if there is any data on the local node that you need to preserve.

The other two nodes must be online and healthy. You can validate this by looking
at the mysql-diag output or by checking the MySQL Proxy logs (for example, grep
'Healthcheck failed on backend' proxy.combined.log). mysql-diag
reports a healthy node as "Synced" and "Primary".

For a Tanzu for MySQL cluster:

1. Log into the instance as root.

2. Run monit stop galera-init. Skip to step 3 if the monit job is unavailable.

3. Ensure that mysql is stopped by running ps auxw | grep mysqld. Kill the mysqld process(es) if
they are running.

4. Run mv /var/vcap/store/pxc-mysql /var/vcap/store/pxc-mysql-backup (or, if disk space is
a concern, run rm -rf /var/vcap/store/pxc-mysql). Clean up the backup after successfully
joining the node to the cluster.

5. Run /var/vcap/jobs/pxc-mysql/bin/pre-start. A return code of 0 indicates success.

6. Restart the database on the instance using one of these commands:

If the monit job is available, run monit start galera-init.

If the instance has no galera-init monit job, run bosh -d deploymentName restart
mysql/instanceGUID --no-converge.
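Because this procedure is destructive, it helps to review the exact steps before touching a node. This hypothetical sketch only prints the plan described above:

```shell
#!/bin/sh
# Dry-run of the forced-rejoin steps; printing only, since running them
# discards all local data on the node.
print_rejoin_plan() {
  echo "monit stop galera-init"
  echo "mv /var/vcap/store/pxc-mysql /var/vcap/store/pxc-mysql-backup"
  echo "/var/vcap/jobs/pxc-mysql/bin/pre-start"
  echo "monit start galera-init"
}

print_rejoin_plan
```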

Running mysql-diag
Learn how to use the mysql-diag tool in VMware Tanzu for MySQL on Cloud Foundry.

mysql-diag prints the state of your MySQL highly available (HA) cluster and suggests solutions if your
node fails. VMware recommends running this tool against your HA cluster before all deployments.

mysql-diag checks the following information about the status of your HA cluster:

Membership status of all nodes

Cluster size as it appears to each node

Whether the cluster needs to be bootstrapped

Whether replication is working

Disk space used on each node

Run mysql-diag using the BOSH CLI


To use the BOSH CLI to run mysql-diag, do the following:

1. Obtain the information needed to use the BOSH CLI. See Gather Credential and IP Address
Information.

2. SSH into your Tanzu Operations Manager VM. See Log in to the Tanzu Operations Manager VM
with SSH for your IaaS.

3. Log in to your BOSH Director. See Authenticate with the BOSH Director VM.

4. Identify the VM to log in to with SSH by running the following command:

bosh -e MY-ENV -d MY-DEPLOYMENT vms

Where:

MY-ENV is the name of your environment.

MY-DEPLOYMENT is the name of your deployment.

5. Record the GUID associated with the mysql-monitor VM, also known as the jumpbox VM.

6. SSH into your mysql-monitor VM by running the following command:

bosh -e MY-ENV -d MY-DEPLOYMENT ssh mysql-monitor/GUID

Where:

MY-ENV is the name of your environment.

MY-DEPLOYMENT is the name of your deployment.

GUID is the GUID you recorded in the previous step.

7. View the status of your HA cluster by running the following command:

mysql-diag

Example healthy output


The mysql-diag command returns the following message if your canary status is healthy:

Checking canary status...healthy

Example mysql-diag output when the tool identifies a healthy HA cluster includes:

The date

The canary status, which shows as healthy

The cluster status of the three service instances

A table with the headings Host, Name/UUID, WSREP Local State, WSREP Cluster Status, and
WSREP Cluster Size. The rows contain:

the value for WSREP Local State ('Synced')

the value for WSREP Cluster Status ('Primary')

the value for WSREP Cluster Size ('3')

The disk status of the six service instances

A table with the headings host, name/UUID, persistent disk used, and ephemeral disk used


Example unhealthy output


The mysql-diag command returns the following message if your canary status is unhealthy:

Checking canary status...unhealthy

In the event of a broken HA cluster, mysql-diag output contains actionable steps meant to help expedite
the recovery of that HA cluster.

The sample mysql-diag output after the tool identifies an unhealthy HA cluster is similar to the output
described for the healthy case, except that some of the nodes show getsockopt: connection refused
errors.

The output ends with messages marked as Critical, and contains instructions about what to do and what
not to do to resolve the issues.


About the replication canary


This topic describes the replication canary for a highly available (HA) cluster. The replication canary runs
on the jumpbox VM and monitors an HA cluster to ensure that replication is working.

The replication canary writes to a private dataset in the cluster and attempts to read the written data from
each node. It pauses between writing and reading to ensure that the write-sets have been replicated. The
private dataset does not use a significant amount of disk capacity.

When replication fails, the canary cannot read the data from all the nodes and does the following:

Sends a message that replication has failed to a configured email address. For information about
notification emails, see Sample notification email.

Deactivates client access to the cluster. For information about client access to the cluster, see
Determine if the cluster is accessible.

When replication fails, data can be lost. Contact Support immediately in the case of
replication failure.

Sample notification email


If the canary detects replication failure, it immediately sends an email through the Tanzu Platform for Cloud
Foundry notification service. You must have configured email notifications for the replication canary to
send emails.


For more information about configuring email notifications, see the “Configure Email Notifications” procedure
for your IaaS.

The notification service sends emails similar to the following:

Subject: CF Notification: p-mysql Replication Canary, alert 417

This message was sent directly to your email address.

{alert-code 417}
This is an email to notify you that the MySQL service's replication canary
has detected an unsafe cluster condition in which replication is not
performing as expected across all nodes.

Determine if the cluster is accessible


When the canary detects replication failure, it closes connections to the database cluster through the
proxies. When the replication issue is resolved, the canary automatically restores client access to the
cluster. You can determine whether the canary has deactivated cluster access by observing the cluster
through the Switchboard API.

To determine if cluster access has been deactivated, do the following:

1. Perform the prerequisite procedure in Monitor node health.

2. To view cluster access, run the following command:

curl https://USERNAME:PASSWORD@N-HOSTNAME/v0/cluster

Where:

USERNAME is the user name you recorded in step 1.

PASSWORD is the password you recorded in step 1.

N is 0, 1, or 2 depending on the proxy you want to connect to.

HOSTNAME is the hostname you recorded in step 1.

For example:

$ curl https://abcdefghijklmno:[email protected]
2e-34f5-g34612c10dba.org.dedicated-mysql.cf-app.com/v0/cluster
[
{
"currentBackendIndex":0,
"trafficEnabled":false,
"message":"Disabling cluster traffic",
"lastUpdated":"2016-07-27T05:16:29.197754077Z"
}
]

When cluster access is deactivated, trafficEnabled is set to false.
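A sketch of checking the Switchboard response programmatically follows. The JSON shape matches the example output above; fetching the response yourself (for example, with curl or urllib) is left out for brevity.

```python
# Check the Switchboard /v0/cluster response for cluster access.
# The endpoint returns a list with one status object per cluster.
import json


def cluster_traffic_enabled(cluster_json):
    """Return True if every proxy status reports client traffic is enabled."""
    status = json.loads(cluster_json)
    return all(entry.get("trafficEnabled", False) for entry in status)
```

Feeding it the example response above returns False, because the canary has set trafficEnabled to false.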

If you must restore client access to the cluster while replication is failing, contact Support.


For more information about the Switchboard API, see Monitoring node health.

Data at rest full-disk encryption (FDE)


In VMware Tanzu for MySQL on Cloud Foundry, data at rest full-disk encryption (FDE) protects data on the
physical hard drives in case the drives are compromised. The disks are only readable with the appropriate
encryption key.

In Tanzu for MySQL, data at rest full-disk encryption is implemented at the IaaS level.

Considerations
After a disk is mounted and used, it is readable from the system on which it is mounted. Consequently,
FDE does not offer protection against an attack on a running server after the disk has been mounted.

Enabling full-disk encryption


Data at rest FDE is supported by the popular IaaS providers. When enabled, FDE applies to all disks used by the IaaS.

The process is IaaS-specific. For information about enabling FDE for each IaaS, see the Tanzu Operations
Manager documentation.


VMware Tanzu for MySQL - Developer Guide

This section includes the following topics:

Getting Started
Using VMware Tanzu for MySQL

Using VMware Tanzu for MySQL for multi-site replication

Using TLS

About data migration in VMware Tanzu for MySQL

Migrating data in VMware Tanzu for MySQL

About MySQL server defaults

Changing defaults using arbitrary parameters

Managing VMware Tanzu for MySQL


Connecting to VMware Tanzu for MySQL
Customizing database credentials

Using management tools for VMware Tanzu for MySQL

Creating a service instance with Service-Gateway access

Using SSH to connect from outside a deployment

Triggering multi-site replication and failover

Backup and Restore


About Backups

Making Full Backups

Restoring Incremental Backups

Backing up and restoring with mysqldump

Monitoring node health for HA clusters

Troubleshooting instances

VMware Tanzu for MySQL - Developer Guide - Getting Started
Using VMware Tanzu for MySQL

Using VMware Tanzu for MySQL for multi-site replication


Using TLS

About data migration in VMware Tanzu for MySQL

Migrating data in VMware Tanzu for MySQL

About MySQL Server defaults

Changing defaults using arbitrary parameters

Using VMware Tanzu for MySQL


This topic provides instructions for developers using the VMware Tanzu for MySQL on Cloud Foundry
service for their Tanzu Platform for Cloud Foundry apps. The procedures here use the Cloud Foundry
Command Line Interface (cf CLI).

Tanzu for MySQL provides a relational database for apps and devices. To use Tanzu for MySQL in an app:

1. Check the service availability in the Marketplace, and see if there is an instance of Tanzu for
MySQL in your space.
See Confirm the Tanzu for MySQL service availability.

2. If there is no existing instance or you want to use a different one, create an instance of the Tanzu
for MySQL service in the same space as the app.
See Create a service instance.

3. Bind the app to the Tanzu for MySQL service instance to enable the app to use MySQL.
See Bind a service instance to your app.

4. Call the Tanzu for MySQL service in your app code, and then push your app into the space again.
For more information about services, see Use the MySQL service in your app.

After you create a Tanzu for MySQL service instance, you can manage it over the life cycle of your apps
and data.

For more information, see:

Managing service instances with the cf CLI.

Managing apps and service instances using Apps Manager

Prerequisites
To use TCF-MySQL with your Tanzu Platform for Cloud Foundry apps, you must:

Decide what type of plan you want from the following options: single node, leader-follower, highly
available (HA) cluster, and Multi‑Site Replication. Multi‑Site Replication is used to deploy a leader-
follower service instance across multiple foundations or data centers. Your Marketplace might not
offer all of these plan types.
For more information about service plans, see Availability options.

If you intend to use a Multi‑Site Replication plan for deploying a leader-follower service instance
across multiple foundations or data centers, review the limitations associated with this topology.
See Multi‑Site Replication Limitations.

If you intend to use an HA cluster plan, review the limitations associated with this plan type.
See Highly available cluster limitations.


Have a Tanzu Operations Manager installation with TCF-MySQL installed and listed in the
Marketplace.
For how to verify availability in the Marketplace, see Confirm Service Availability.

Have a Space Developer or Admin account on the Tanzu Platform for CF installation.
For more information, see Manage Users and Roles.

Have a local machine with the following installed:


A browser

A shell

The Cloud Foundry Command-Line Interface (cf CLI). See Installing the cf CLI.

The Linux watch command. See the Linux Information Project website.

Log in to the org and space containing your app. For instructions, see Log in with the CLI.

Confirm the Tanzu for MySQL service availability


For an app to use the Tanzu for MySQL service, both of the following must be true:

The service must be available in the Marketplace for its space.

An instance of the service must exist in its space.

You can confirm both of these using the cf CLI as follows.

Check service availability in the Marketplace


To find out if a Tanzu for MySQL service is available in the Marketplace:

1. Enter the following command:

cf marketplace

2. If the output lists p.mysql in the service column, Tanzu for MySQL is available. If it is not
available, ask your operator to install it.

$ cf marketplace
Getting services from marketplace in org my-org / space my-space as [email protected]...
OK
service plans description
[...]
p.mysql db-small Dedicated instances of MySQL service
to provide a relational database
[...]

Check that an instance is running in the space


To confirm that a Tanzu for MySQL instance is running in the space:

1. Use the instructions in Log in to cf CLI or Logging in to Apps Manager to log in to the org and space
that contains the app.


2. Enter the following command:

cf services

3. Any p.mysql listings in the service column are service instances of Tanzu for MySQL in the
space.

For example:

$ cf services
Getting services in org my-org / space my-space as [email protected]...
OK
name service plan bound apps last operation
my-instance p.mysql db-small create succeeded

You can bind your app to an existing instance or create a new instance to bind to your app.

Create a service instance


On-demand services are created asynchronously, not immediately. The watch command shows you when
your service instance is ready to bind and use.

If you are deploying a leader-follower service instance across multiple foundations, follow
the procedure in Using Tanzu for MySQL for Multi‑Site Replication.

To create an instance of the Tanzu for MySQL service:

1. Create a service instance by running the following command:

cf create-service p.mysql PLAN SERVICE-INSTANCE

Where:

PLAN is the name of the Tanzu for MySQL plan you want to use.

SERVICE-INSTANCE is a name you choose to identify the service instance. This name
appears under service in output from cf services.

2. Enter the following command and wait for the last operation for your instance to show as create
succeeded. If you get an error, see Troubleshooting instances.

watch cf services

For example:

$ cf create-service p.mysql db-small my-instance

Creating service my-instance in org my-org / space my-space as [email protected]...
OK


$ watch cf services

Getting services in org my-org / space my-space as [email protected]...
OK
name service plan bound apps last operation
my-instance p.mysql db-small create succeeded

Bind a service instance to your app


For an app to use a service, you must bind the app to a service instance. You must do this every time you
run cf push.

To push and bind an app to a Tanzu for MySQL instance:

1. Push your app into the same space as your Tanzu for MySQL service instance by running the
following command:

cf push

2. Bind your app to a Tanzu for MySQL instance:

cf bind-service APP SERVICE-INSTANCE

Where:

APP is the app that will use the MySQL service instance.

SERVICE-INSTANCE is the name you supplied when you ran cf create-service.

For example:

$ cf bind-service my-app my-instance

Binding service my-instance to my-app in org my-org / space test as [email protected]...
OK
TIP: Use 'cf push' to ensure your env variable changes take effect

3. Restart your app by running the following command:

cf restart APP

Where APP is the app that will use the MySQL service instance.

Use the MySQL service in your app


To access the MySQL service from your app:

1. Verify that your app code (or the MySQL client library that the app uses) retries in the case of DNS
timeouts.

2. Locate the connection strings listed in the VCAP_SERVICES > credentials object for your app. For
information about VCAP_SERVICES, see MySQL environment variables.


3. In your app code, call the MySQL service using the connection strings.
For an example, see Node.js code.
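The retry behavior in step 1 can be sketched as a small wrapper. Real apps usually rely on the MySQL client library's built-in retry settings; this shows only the general shape, and the exception type and retry counts are assumptions for illustration.

```python
# Minimal retry-with-backoff sketch for connection attempts that can fail
# transiently (for example, on a DNS timeout).
import time


def connect_with_retries(connect, attempts=3, backoff_seconds=1.0):
    """Call connect(), retrying on failure with a growing delay."""
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except OSError:               # a DNS timeout typically surfaces as OSError
            if attempt == attempts:
                raise                 # out of attempts: let the caller handle it
            time.sleep(backoff_seconds * attempt)
```

The linear backoff here is a simple choice; exponential backoff with jitter is a common alternative for busy systems.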

Use custom schemas


Tanzu for MySQL supports multiple custom schemas. You can use custom schemas with apps that share a
MySQL service instance to isolate app data by schema. By default, service bindings use the default
schema, service_instance_db.

To use custom schemas in your apps:

1. Bind your app to the custom schema by running:

cf bind-service APP SERVICE-INSTANCE -c '{"schema":"CUSTOM-SCHEMA"}'

Where:

APP is the app you want to use the custom schema.

SERVICE-INSTANCE is the name of your service instance.

CUSTOM-SCHEMA is the name of your custom schema. Valid characters include uppercase
and lowercase letters, digits, $, and _.

2. Restart your app by running:

cf restart APP

Where APP is the app that will use the custom schema.
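Before passing a schema name to cf bind-service, you can validate it against the character rules above (letters, digits, $, and _). The sketch below is an assumption-laden helper, not part of the product; the 64-character cap mirrors MySQL's general identifier length limit.

```python
# Validate a custom schema name against the documented character set.
import re

SCHEMA_NAME_RE = re.compile(r"^[A-Za-z0-9$_]+$")


def is_valid_schema_name(name):
    """Return True if name uses only letters, digits, $, and _ and fits
    within MySQL's 64-character identifier limit."""
    return bool(SCHEMA_NAME_RE.match(name)) and len(name) <= 64
```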

MySQL environment variables


Apps running in Tanzu Operations Manager gain access to bound service instances through an environment
variable credentials hash called VCAP_SERVICES. This environment variable includes the credentials that
apps use to access service instances.

For example:

{
"p.mysql": [
{
"label": "p.mysql",
"name": "my-instance",
"plan": "db-medium",
"provider": null,
"syslog_drain_url": null,
"tags": [
"mysql"
],
"credentials": {
"hostname": "10.0.0.20",
"jdbcUrl": "jdbc:mysql://10.0.0.20:3306/service_instance_db?user=fefcbe8360854
a18a7994b870e7b0bf5\u0026password=z9z6eskdbs1rhtxt",
"name": "service_instance_db",
"password": "z9z6eskdbs1rhtxt",
"port": 3306,
"uri": "mysql://fefcbe8360854a18a7994b870e7b0bf5:[email protected]:33

191
Tanzu for MySQL

06/service_instance_db?reconnect=true",
"username": "fefcbe8360854a18a7994b870e7b0bf5"
},
"volume_mounts": []
}
]
}

You can search for your service by the name given when the service instance was created. You can also
search using the tags or label properties. The credentials property can be used to provide access to the
MySQL protocol.
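In app code, pulling the credentials out of VCAP_SERVICES can be sketched as follows. The JSON shape and the p.mysql label match the example above; taking the first bound instance is an assumption made for brevity.

```python
# Extract the MySQL credentials of the first bound p.mysql instance
# from the VCAP_SERVICES environment variable.
import json
import os


def mysql_credentials(env=os.environ):
    """Return the credentials dict of the first bound p.mysql instance."""
    services = json.loads(env.get("VCAP_SERVICES", "{}"))
    instances = services.get("p.mysql", [])
    if not instances:
        raise LookupError("no p.mysql service instance bound to this app")
    return instances[0]["credentials"]
```

A caller would then read fields such as creds["hostname"], creds["port"], creds["username"], and creds["password"] to build a connection.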

VCAP_SERVICES is modified only when an app is bound to a service instance. If you modify your service
instance, you must run the following commands to apply the changes to VCAP_SERVICES:

cf unbind-service

cf bind-service

cf restage

If a developer rebinds an app to the TCF-MySQL service after unbinding, they must also
rebind any existing custom schemas to the app. When you rebind an app, stored code,
programs, and triggers break. For more information about binding custom schemas, see
Use custom schemas.

If you use MySQL Connector/J 8.0.13 or later with Tanzu for MySQL, you must modify the
JDBC URL in VCAP_SERVICES to include sslMode=VERIFY_IDENTITY and
verifyServerCertificate=true.
MySQL Connector/J 8.0.13 and later does not verify TLS connections by default. For more
information about JDBC URL syntax, see the MySQL documentation.
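Adding those two parameters to the jdbcUrl from VCAP_SERVICES is simple string work. The sketch below just appends query parameters if they are absent; it assumes the URL shape shown in the example credentials above.

```python
# Append the TLS verification parameters to a JDBC URL if they are absent.
def harden_jdbc_url(jdbc_url):
    """Add sslMode=VERIFY_IDENTITY and verifyServerCertificate=true."""
    for param in ("sslMode=VERIFY_IDENTITY", "verifyServerCertificate=true"):
        if param.split("=")[0] not in jdbc_url:
            separator = "&" if "?" in jdbc_url else "?"
            jdbc_url += separator + param
    return jdbc_url
```

The function is idempotent, so it is safe to apply on every app start.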

Manage a service instance


You can manage service instances in the following ways:

Migrate your data to a different plan. See Migrate data to a different plan.

Upgrade an individual service instance to the latest version of Tanzu for MySQL. See Upgrade an
individual service instance.

Change the default parameters for an existing service instance. See Change default parameters on
an existing service instance.

Share service instances between orgs and spaces. See Share service instances.

Remove access to a service from an app that no longer needs it. See Unbind an app from a service
instance.

Delete a service instance that is not used. See Delete a service instance.

Migrate data to a different plan


You can use cf update-service to migrate data to a different plan. When you update a service instance,
you do not need to rebind your app or service keys. However, when you migrate data to a new service
instance, the database is unavailable for several minutes.

For more information about using cf update-service, see the Cloud Foundry CLI Reference Guide.

The following table lists migration use cases for the update-service command:

Use update-service for migrating from... To...

Single Node larger Single Node

Leader-Follower larger Leader-Follower

Single Node Leader-Follower of the same or larger size

Leader-Follower Single Node of the same or larger size

High Availability Multi‑Site Replication

Multi‑Site Replication High Availability

You can use cf update-service only to migrate data between an HA cluster plan and a
multi‑site replication plan. If you want to do this with another plan, you must use the cf
mysql-tools plug-in instead. For more information about migrating data, see About data
migration in VMware Tanzu for MySQL on Cloud Foundry.

If a Multi‑Site Replication plan is being used, it can be updated only to an HA Cluster plan.
If a Multi‑Site Replication instance is updated to any plan other than an HA Cluster plan,
replication breaks.
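The migration table above can be encoded as a lookup, which is handy for validating a plan change before running cf update-service. The topology names here are shorthand labels chosen for this sketch; size ("larger") checks are out of scope.

```python
# Allowed topology transitions for cf update-service, from the table above.
ALLOWED_MIGRATIONS = {
    "single-node": {"single-node", "leader-follower"},
    "leader-follower": {"leader-follower", "single-node"},
    "high-availability": {"multi-site"},
    "multi-site": {"high-availability"},
}


def migration_allowed(source, target):
    """Return True if update-service supports the topology change."""
    return target in ALLOWED_MIGRATIONS.get(source, set())
```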

To migrate a service instance to another plan:

1. View the available service plans for Tanzu for MySQL by running:

cf marketplace

2. Migrate data to another plan by running:

cf update-service SERVICE-INSTANCE -p PLAN

Where PLAN is the plan you want to update the service instance to.

For example:

$ cf update-service my-instance -p db-large

Upgrade an individual service instance

To upgrade service instances individually, you must have cf CLI v6.46.0 or higher.


You can use cf update-service with the --upgrade flag to individually upgrade on-demand service
instances to the latest version of Tanzu for MySQL. When you upgrade a service instance, you do not need
to rebind your app or service keys. However, when you upgrade a service instance, the database is
unavailable for several minutes.

For more information about using cf update-service, see the Cloud Foundry CLI Reference Guide.

To upgrade a single service instance:

1. Confirm that an upgrade is available for the service instance by running:

cf services

The upgrade is available when the upgrade available column in the output says yes.

For example:

$ cf services
Getting services in org system / space system as admin...

name          service   plan       bound apps   last operation     broker                   upgrade available
my-instance   p.mysql   db-small                create succeeded   dedicated-mysql-broker   yes

2. Upgrade the service instance by running:

cf update-service SERVICE-INSTANCE-NAME --upgrade

3. When you are prompted, confirm that you want to upgrade.

Upgrade a service instance to MySQL 8.0


As of Tanzu for MySQL v3.2, only MySQL 8.0 is supported. If you created a service instance using a
previous version of Tanzu for MySQL, then upgrading the service instance forces an upgrade to MySQL
8.0.

When upgrading a highly available (HA) cluster from MySQL 5.7 to MySQL 8.0, VMware
recommends that you first validate the HA cluster's health. To verify that your HA cluster is
healthy, see Monitoring Node Health.
(A "highly available (HA) cluster" refers to any service instance created from a tile plan
configured with the "HA cluster" topology.)
If the HA cluster is unhealthy, bring the cluster to a healthy state before upgrading the
cluster to MySQL 8.0.

To upgrade a service instance, run the following command:

cf update-service SERVICE-INSTANCE-NAME --upgrade

Where SERVICE-INSTANCE-NAME is the name of the service instance to upgrade.

The MySQL 8.0 version of the plan is applied to the service instance, triggering the service instance
upgrade from 5.7 to 8.0.


Share service instances


In Tanzu for MySQL, you can share service instances between different orgs and spaces using cf share-service. Service instance sharing is enabled by default.

For more information about service instance sharing, see Sharing service instances.

To share a service instance:

1. Target the source org and space for the service instance you want to share by running:

cf target -o SOURCE-ORG -s SOURCE-SPACE

Where:

SOURCE-ORG is the source org for your service instance.

SOURCE-SPACE is the source space for your service instance.

2. Share your service instance to the destination org and space by running:

cf share-service SERVICE-INSTANCE -o DESTINATION-ORG -s DESTINATION-SPACE

Where:

SERVICE-INSTANCE is the service instance you want to share.

DESTINATION-ORG is the destination org for the service instance.

DESTINATION-SPACE is the destination space for the service instance.

3. Target the destination org and space by running:

cf target -o DESTINATION-ORG -s DESTINATION-SPACE

4. Confirm the service instance was shared by running:

cf service SERVICE-INSTANCE

Where SERVICE-INSTANCE is the service instance you shared.

Unbind an app from a service instance


If you want to stop an app from using a service, you must unbind the app from the service.

1. Unbind your app by running:

cf unbind-service APP SERVICE-INSTANCE

Where:

APP is the app you want to stop using the MySQL service instance.

SERVICE-INSTANCE is the name you supplied when you ran cf create-service.

For example:


$ cf unbind-service my-app my-instance

Unbinding app my-app from service my-instance in org my-org / space my-space as [email protected]...
OK

If a developer rebinds an app to the TCF-MySQL service after unbinding, they must also
rebind any existing custom schemas to the app. When you rebind an app, stored code,
programs, and triggers break. For more information about binding custom schemas, see
Use custom schemas.

Delete a service instance


You cannot delete a service instance that an app is bound to.

To delete a service instance:

1. Run the following command:

cf delete-service SERVICE-INSTANCE

Where SERVICE-INSTANCE is the name of the service to delete.

For example:

$ cf delete-service my-instance

Are you sure you want to delete the service my-instance? y

Deleting service my-instance in org my-org / space my-space as [email protected]...
OK

2. Enter the following command and wait for a Service instance not found error indicating that the
instance no longer exists:

watch cf service SERVICE-INSTANCE

Purge a service instance


If the service instance VM is lost, then you cannot delete the service instance. However, you can use the
cf CLI to purge a service instance from the Cloud Controller database.

To purge a service instance:

1. Run:

cf purge-service-instance SERVICE-INSTANCE-NAME

Where SERVICE-INSTANCE-NAME is the name of the service instance to purge.


For example:

$ cf purge-service-instance my-instance

WARNING: This operation assumes that the service broker responsible for this
service instance is no longer available or is not responding with a 200 or
410, and the service instance has been deleted, leaving orphan records in
Cloud Foundry's database. All knowledge of the service instance is removed
from Cloud Foundry, including service bindings and service keys.

Really purge service instance my-instance from Cloud Foundry?> y


Purging service my-instance...
OK

Using VMware Tanzu for MySQL for multi-site replication


This topic provides instructions for developers configuring multi-site replication of VMware Tanzu for
MySQL on Cloud Foundry across multiple foundations or data centers.

You provision two service instances, one in each of two foundations. Your leader instance can use either a
multi‑site replication topology or an HA Cluster topology. Your follower instance must use a multi‑site
replication topology. You then configure these two instances to replicate from the leader to the follower.
You can establish this replication manually or by using the mysql-tools plug-in to the cf CLI.

Multi-site replication is configured separately from the leader-follower service plan. Any
mention on this page of "leader" and "follower" refers to the two service instances
provisioned for multi-site configuration, not to any service instance of type "leader-
follower."

For more information about:

multi‑site replication topology, see About multi‑site replication

HA Cluster topology, see About highly available clusters

Prerequisites
Before you use Tanzu for MySQL across multiple foundations, you must:

Select two foundations on which to deploy your multi-site leader and follower service instances.

Select the topology for your leader instance: multi‑site replication or HA Cluster.

Verify that, on each foundation, your operator has configured your access in your org and spaces
to:

a multi‑site replication plan. For more information, see Preparing for Multi‑Site Replication.

an HA Cluster plan if you are considering using an HA cluster as the multi-site leader. Within each
foundation, your operator must configure the HA Cluster plan and the Multi‑Site Replication plan
in exactly the same Availability Zones.


For more information, see About high-availability clusters.

Multi-site replication usage overview


To create a multi-site replication configuration across multiple foundations:

1. Check the availability of the Multi‑Site Replication plan in the Marketplace in both foundations. See
Confirm the VMware Tanzu for MySQL on Cloud Foundry service availability.

2. Select the topology you want for your leader instance. See Select a multi-site leader topology.

3. Create a service instance on each foundation, a leader of your selected topology on your primary
foundation, and a follower of type Multi‑Site Replication on your secondary foundation. See Create
multi-site replication service instances.

4. Configure replication from your leader to your follower service instances. See Configure multi-site
replication.

5. Bind the multi-site configured service instances to your apps. See Bind a multi-site configured
service instance to your app.

6. Modify your app to use the multi-site configured service instances. See Use the MySQL service in
your app.

After you configure your multi-site service instances, you can manage them over the life cycle of your apps
and data. For instructions on how to manage a Tanzu for MySQL service instance, see Manage service
instances.

Select a multi-site leader topology


Multi-site replication supports two topologies for the leader service instance: Multi‑Site Replication and HA
Cluster.

A Multi‑Site Replication instance has comparatively simple initial setup and ongoing management
such as multi-site switchover procedures. But it deploys MySQL running on a single VM, making it
vulnerable to VM and infrastructure performance issues.
For more information, see Preparing for multi-site replication.

An HA Cluster instance has comparatively simple initial setup, but more involved ongoing
management, such as multi-site switchover procedures. Because it deploys a 3-VM MySQL cluster, it
is more resilient to VM or node failure than a single-VM deployment.
For more information, see:

About high-availability clusters

Monitoring node health

Number of deployed VMs
    Multi‑Site Replication: 1
    High-Availability Cluster: 4 (3 MySQL cluster VMs + 1 jumpbox)

Resilience to VM failure
    Multi‑Site Replication: Low. A VM outage may necessitate a failover from your multi-site leader to your follower.
    High-Availability Cluster: High. Cluster replication lets your multi-site leader withstand, and potentially recover from, a single VM outage.

Stemcell upgrade behavior
    Multi‑Site Replication: Offline. The topology's single VM goes offline during stemcell upgrades, requiring a multi-site switchover for continued uptime.
    High-Availability Cluster: Online. Rolling upgrades affect only one cluster VM at a time.

Multi-site switchover complexity
    Multi‑Site Replication: Moderate. Multiple steps are needed to switch the instance's leader and follower roles.
    High-Availability Cluster: More complex. HA Clusters cannot serve as multi-site followers, so switchover requires downscaling any HA Cluster leader into a single-VM Multi‑Site Replication follower. Because switchover promotes your secondary foundation's Multi‑Site Replication instance from follower to leader, you can elect to upscale that (new follower) instance into an HA Cluster, which adds some considerations to the switchover process.

Existing HA clusters created with Tanzu for MySQL version 3.2 or later can be configured as multi-site
leaders. HA clusters created with earlier versions can be configured as multi-site leaders using a migration
process. For more information, see Migrating HA instances for multi-site replication.
In all cases, these clusters must meet the prerequisites above.

Before you can use a highly available deployment as a multi-site leader, your operator
must enable Service-Gateway. You must create the leader instance with Service-Gateway
access. For more information, see Creating a service instance with Service-Gateway
access.

Create multi-site service instances in your two foundations


This section describes how to create one service instance in your primary foundation to act as the
replication leader and how to create a second service instance in your secondary foundation (usually your
disaster recovery site) to serve as the replication follower. Later sections explain how to establish the
replication between these two service instances.

1. Check the availability of the Multi‑Site Replication plan in the Marketplace in both foundations.
Even if you want to use a HA Cluster as your replication leader, you must have a Multi‑Site
Replication plan configured within your primary foundation for switchover scenarios. See Confirm
the VMware Tanzu for MySQL on Cloud Foundry service availability.


2. In your primary foundation, create a leader service instance of the type you selected in Select a
multi-site leader topology.

1. Log in to the deployment for your primary foundation by running:

cf login PRIMARY-API-URL

Where PRIMARY-API-URL is the API endpoint for the primary foundation.

2. Create a primary service instance by running:

cf create-service p.mysql PLAN PRIMARY-INSTANCE

Where:

PLAN is the name of the Multi‑Site Replication or HA Cluster plan you want your
leader to use.

PRIMARY-INSTANCE is a name you choose to identify the service instance. This


name appears under service in output from cf services.

For example:

$ cf create-service p.mysql db-ha80-small primary-db

Creating service primary-db in org my-org / space my-space as admin...
OK

Do not name your service instance leader or follower. If you trigger a
failover or switchover, the service instances in your primary and
secondary foundations switch roles. For more information, see Triggering
multi-site replication failover and switchover.

3. (Optional) Watch the progress of the service instance creation. If you get an error
message, see Troubleshooting instances.

watch cf services

Wait for the last operation for your instance to show as create succeeded.

For example:

$ watch cf services

Getting services in org my-org / space my-space as admin...
OK
name         service   plan       bound apps   last operation
primary-db   p.mysql   db-small                create succeeded


3. Create a follower Multi‑Site Replication service instance in your secondary foundation by repeating
step 1, replacing references to primary with secondary, and selecting a Multi‑Site Replication plan
in your secondary foundation.

Configure multi-site using the mysql-tools plug-in


To reduce the complexity of configuring multi-site instance replication, Tanzu for MySQL offers the mysql-
tools plug-in to the cf CLI. This plug-in automates some of the manual steps documented in the later
section, Configure multi-site manually. The resulting configurations are functionally identical.

To configure multi-site replication across multiple foundations with mysql-tools, install the latest version of
the mysql-tools cf CLI plug-in. For more information about the mysql-tools cf CLI plug-in, see mysql-cli-
plugin in GitHub.

This procedure assumes you are using cf CLI v8 or greater. Earlier cf CLI versions are not
compatible with the latest mysql-tools plug-in release.
See Upgrading to cf CLI v8 in the Cloud Foundry documentation.

To configure multi-site replication across foundations with mysql-tools, the steps are:

Save Cloud Foundry targeting information for the primary foundation.

Save Cloud Foundry targeting information for the secondary foundation.

Use mysql-tools to automatically establish replication between your instances in the primary and
secondary foundations.

1. Create a leader service instance in your primary foundation and follower in your secondary
foundation if you have not already done so.
For more information, see Create multi-site service instances in your two foundations.

2. Save the cf config to target your primary foundation:

1. Log in to the deployment for your primary foundation by running:

cf login PRIMARY-API-URL

Where PRIMARY-API-URL is the API endpoint for the primary foundation.

2. Use mysql-tools to save the config.

cf mysql-tools save-target PRIMARY-TARGET-NAME

Where PRIMARY-TARGET-NAME is your chosen name for the primary foundation.

3. Save the cf config to target the secondary foundation:

1. Log in to the deployment for your secondary foundation by running:

cf login SECONDARY-API-URL

Where SECONDARY-API-URL is the API endpoint for the secondary foundation.

2. Use mysql-tools to save the config.


cf mysql-tools save-target SECONDARY-TARGET-NAME

Where SECONDARY-TARGET-NAME is your chosen name for the secondary foundation.

4. Use mysql-tools to configure replication between the primary and secondary foundations.

cf mysql-tools setup-replication \
  --primary-target PRIMARY-TARGET-NAME --primary-instance PRIMARY-INSTANCE \
  --secondary-target SECONDARY-TARGET-NAME --secondary-instance SECONDARY-INSTANCE

Where:

PRIMARY-TARGET-NAME is your chosen name for the primary foundation.

PRIMARY-INSTANCE is your chosen name for the primary instance.

SECONDARY-TARGET-NAME is your chosen name for the secondary foundation.

SECONDARY-INSTANCE is your chosen name for the secondary instance.

The mysql-tools plug-in has shorthand flags for the above options. Type
cf mysql-tools setup-replication for a help message listing the
options.

This step redeploys both your leader and follower instances. Any
Multi‑Site Replication instances will experience downtime during these
redeploys.

The entire process may take several minutes or longer to complete,
depending on your selected service instances and foundation
configurations.

This step uses tokens from your foundation cf login commands, so launch it shortly after
performing those logins (for example, within minutes), before the tokens expire.

5. The plug-in output shows checkpoints for the various configuration steps:

Validating the primary instance: 'primary-db'.
Validating the secondary instance: 'secondary-db'.
Creating a 'host-info' service-key: 'MSHostInfo-1705540659' on the secondary instance: 'secondary-db'.
Getting the 'host-info' service-key from the secondary instance: 'secondary-db'.
Updating the primary with the secondary's 'host-info' service-key: 'MSHostInfo-1705540659'.
Creating a 'credentials' service-key: 'MSCredInfo-1705541348' on the primary instance: 'primary-db'.
Getting the 'credentials' service-key from the primary instance: 'primary-db'.
Updating the secondary instance with the primary's 'credentials' service-key: 'MSCredInfo-1705541348'.

6. Purge cf config information from your workstation:


cf mysql-tools remove-target PRIMARY-TARGET-NAME


cf mysql-tools remove-target SECONDARY-TARGET-NAME

Configure multi-site manually


You can establish replication between your two foundations manually. You might do this to avoid
saving Cloud Foundry targeting information, even briefly, or to control the names of the service keys
used to establish replication. When the manual steps are executed correctly, the resulting configuration
is functionally identical to one created by the mysql-tools plug-in.

After you create your service instances in primary and secondary foundations, you must configure
replication between the two service instances.

You configure replication using service keys to pass connection information between the leader and follower
service instances. Do not use these service keys for any purpose other than establishing multi-site
replication.

The following diagram describes the workflow for configuring multi-site replication:


The steps shown in the diagram are as follows:

1. Create host-info service key.

2. Record host-info service key.

3. Update secondary service instance with host-info service key.


4. Create credentials service key.

5. Record credentials service key.

6. Update primary service instance with credentials service key.

The following procedure assumes that you created the leader service instance in the primary foundation and
the follower service instance in the secondary foundation. You created these service instances in the
procedure called Create multi-site replication service instances.

1. Create a host-info service key for the service instance in your secondary foundation by running:

cf create-service-key SECONDARY-INSTANCE SERVICE-KEY \
  -c '{"replication-request": "host-info"}'

Where:

SECONDARY-INSTANCE is the name of the follower service instance you created in step 2 of
Create multi-site replication service instances.

SERVICE-KEY is a name you choose for the host-info service key.

For example:

$ cf create-service-key secondary-db host-info-key \
    -c '{"replication-request": "host-info"}'

Creating service key host-info-key for service instance secondary-db as admin...
OK

2. View the replication-credentials for your host-info service key by running:

cf service-key SECONDARY-INSTANCE SERVICE-KEY

Where:

SECONDARY-INSTANCE is the name of the follower service instance you created in step 2 of
Create multi-site replication service instances.

SERVICE-KEY is the name of the host-info service key you created in step 1.

For example:

$ cf service-key secondary-db host-info-key

Getting key host-info-key for service instance secondary-db as admin...

{
  "credentials": {
    "replication": {
      "peer-info": {
        "hostname": "6497378d-f518-4922-92d5-9530d3dc634a.mysql.service.internal",
        "ip": "10.0.19.12",
        "system_domain": "sys.secondary-domain.com",
        "uuid": "6497378d-f518-4922-92d5-9530d3dc634a"
      },
      "role": "leader"
    }
  }
}

This procedure assumes you are using cf CLI v8 or greater. Earlier cf CLI versions
do not include the top-level credentials JSON key in their cf service-key
response.

3. Record the output of the previous command, and remove the top-level credentials key.
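Removing the top-level credentials key amounts to unwrapping the JSON object. As an illustrative
sketch (not part of the official tooling), assuming you have captured the cf service-key JSON output
as a string, a few lines of Python can produce the payload to pass to cf update-service:

```python
import json

def unwrap_service_key(raw_json: str) -> str:
    """Strip the top-level "credentials" wrapper from `cf service-key` output,
    returning the JSON document to pass to `cf update-service -c`."""
    parsed = json.loads(raw_json)
    # cf CLI v8+ nests the payload under "credentials"; older output may not.
    payload = parsed.get("credentials", parsed)
    return json.dumps(payload, indent=2)

raw = '{"credentials": {"replication": {"peer-info": {"ip": "10.0.19.12"}, "role": "leader"}}}'
print(unwrap_service_key(raw))
```

The same unwrapping applies later when you record the credentials service key.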

4. Log in to the deployment for your primary foundation:

cf login PRIMARY-API-URL

5. Update your primary service instance with the host-info service key by running:

cf update-service PRIMARY-INSTANCE -c HOST-INFO

Where:

PRIMARY-INSTANCE is the name of the primary service instance you created in step 1 of
Create multi-site replication service instances.

HOST-INFO is the output you recorded in step 3.

For example:

$ cf update-service primary-db -c '{
    "replication": {
      "peer-info": {
        "hostname": "6497378d-f518-4922-92d5-9530d3dc634a.mysql.service.internal",
        "ip": "10.0.19.12",
        "system_domain": "sys.secondary-domain.com",
        "uuid": "6497378d-f518-4922-92d5-9530d3dc634a"
      },
      "role": "leader"
    }
  }'

Updating service instance primary-db as admin...
OK

6. Watch the progress of the service instance. If you get an error message, see Troubleshooting
instances.

watch cf services


Wait for the last operation for your instance to show as update succeeded.

For example:

$ watch cf services

Getting services in org my-org / space my-space as admin...


OK
name service plan bound apps last operation
primary-db p.mysql db-small update succeeded

7. Create a credentials service key for the leader service instance in your primary foundation by
running:

cf create-service-key PRIMARY-INSTANCE SERVICE-KEY-NAME \
  -c '{"replication-request": "credentials"}'

Where:

PRIMARY-INSTANCE is the name of the leader service instance you created in step 1 of
Create multi-site replication service instances.

SERVICE-KEY-NAME is a name you choose for the credentials service key.

(Note that the -c value differs from the one used in step 1.)

For example:

$ cf create-service-key primary-db cred-key \
    -c '{"replication-request": "credentials"}'

Creating service key cred-key for service instance primary-db as admin...
OK

8. View the replication-credentials for your credentials service key by running:

cf service-key PRIMARY-INSTANCE SERVICE-KEY-NAME

Where:

PRIMARY-INSTANCE is the name of the leader service instance you created in step 1 of
Create multi-site replication service instances.

SERVICE-KEY-NAME is the name of the credentials service key you created in step 7.

For example:

$ cf service-key primary-db cred-key

Getting key cred-key for service instance primary-db as admin...

{
  "credentials": {
    "replication": {
      "credentials": {
        "password": "a22aaa2a2a2aaaaa",
        "username": "6bf07ae455a14064a9073cec8696366c"
      },
      "peer-info": {
        "hostname": "878f5fb3-fcc5-43cd-8c1f-3018e9f277ad.mysql.service.internal",
        "ip": "10.0.17.12",
        "ports": {
          "agent": 8443,
          "backup": 8081,
          "mysql": 3306
        },
        "system_domain": "sys.primary-domain.com",
        "uuid": "878f5fb3-fcc5-43cd-8c1f-3018e9f277ad"
      },
      "role": "follower"
    }
  }
}

This procedure assumes you are using cf CLI v8 or greater. Earlier cf CLI versions
do not include the top-level credentials JSON key in their cf service-key
response.

9. Record the output of the previous command, and remove the top-level credentials JSON key.
The resulting JSON is your “credentials service key.”

10. Log in to the deployment for your secondary foundation:

cf login SECONDARY-API-URL

11. Update your follower service instance with the credentials service key by running:

cf update-service SECONDARY-INSTANCE -c CREDENTIALS

Where:

SECONDARY-INSTANCE is the name of the secondary service instance you created in step 2 of
Create multi-site replication service instances.

CREDENTIALS is the output you recorded in step 9.

For example:

$ cf update-service secondary-db -c '{
    "replication": {
      "credentials": {
        "password": "a22aaa2a2a2aaaaa",
        "username": "6bf07ae455a14064a9073cec8696366c"
      },
      "peer-info": {
        "hostname": "878f5fb3-fcc5-43cd-8c1f-3018e9f277ad.mysql.service.internal",
        "ip": "10.0.17.12",
        "ports": {
          "agent": 8443,
          "backup": 8081,
          "mysql": 3306
        },
        "system_domain": "sys.primary-domain.com",
        "uuid": "878f5fb3-fcc5-43cd-8c1f-3018e9f277ad"
      },
      "role": "follower"
    }
  }'

Updating service instance secondary-db as admin...
OK

You now have a multi-site configuration with replication enabled.

Bind a multi-site configured service instance to your app


For an app to use a multi-site configuration, you must bind your app to your leader service instance in your
primary foundation. If you want to use an active-active topology, you must also bind your app to the follower
service instance in your secondary foundation.

For information about active-passive and app-layer active-active topologies, see About active-passive
topology and About app-layer active-active topology.

To bind an app to a leader service instance:

1. Log in to the deployment for your primary foundation:

cf login PRIMARY-API-URL

2. Bind your app to your primary service instance by doing the procedure in Bind a service instance to
your app.

3. (Optional) If you are using an active-active topology, you must bind the same app to your follower
service instance in your secondary foundation. To do this, repeat the previous steps and replace
references to primary with secondary.

4. Modify your app to use the Tanzu for MySQL service by using the procedure in Use the MySQL
service in your app.

Upgrade a multi-site configuration


When upgrading the service instances used in a multi-site configuration, it is important to upgrade in a
specific order: Follower first, then leader. This ensures that any incompatibilities between different multi-site
MySQL versions are handled correctly.

Using TLS


This topic describes how developers can use TLS to secure the communication from their apps and local
workstations to the VMware Tanzu for MySQL on Cloud Foundry service.

If your operator has configured TLS in the tile, new service instances have TLS activated by default. You
can establish a TLS connection from your local workstation to a Tanzu for MySQL service instance.

For more information about how to establish a TLS connection, see Establish a TLS Connection to a
Service Instance.

Mutual TLS (mTLS) is not supported in Tanzu for MySQL. Because of this, the server
certificate does not validate apps. If an app presents a certificate to the MySQL server, the
connection closes and a network error appears in the app logs. To resolve this issue, you
must deactivate mTLS in your apps.

Establish a TLS connection to a service instance


You can use mysql to establish a TLS connection to a Tanzu for MySQL service instance that has TLS
activated.

To establish a TLS connection to a service instance:

1. Create a new service key for the service instance with TLS activated. For example:

$ cf create-service-key my-service-instance my-tls-service-key

Creating service key my-tls-service-key for service instance my-service-instance as admin...
OK

$ cf service-key my-service-instance my-tls-service-key

{
  "credentials": {
    "hostname": "27ce4cec-7d89-4e63-9a76-b8d9e4d57b61.mysql.service.internal",
    "jdbcUrl": "jdbc:mysql://27ce4cec-7d89-4e63-9a76-b8d9e4d57b61.mysql.service.internal:3306/service_instance_db?permitMysqlScheme&user=f3867aa9bab54fa89661fb53d3d79c66&password=lfeykm0nbrphh7h7&sslMode=VERIFY_IDENTITY&useSSL=true&requireSSL=true&enabledTLSProtocols=TLSv1.2&serverSslCert=/etc/ssl/certs/ca-certificates.crt",
    "name": "service_instance_db",
    "password": "lfeykm0nbrphh7h7",
    "port": 3306,
    "tls": {
      "cert": {
        "ca": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"
      }
    },
    "uri": "mysql://f3867aa9bab54fa89661fb53d3d79c66:lfeykm0nbrphh7h7@27ce4cec-7d89-4e63-9a76-b8d9e4d57b61.mysql.service.internal:3306/service_instance_db?reconnect=true",
    "username": "f3867aa9bab54fa89661fb53d3d79c66"
  }
}

This procedure assumes that you are using cf CLI v8 or greater. Earlier cf CLI
versions do not include the top-level credentials JSON key in their cf service-
key response.

If the service key does not have a CA certificate under tls.cert.ca, the service key might be
stale. Create a new service key.

2. Copy the contents of the CA certificate under tls.cert.ca and paste it into a file. For example:

$ pbpaste > root.pem

3. Record the values for username, password, and hostname.
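The fields you record here can also be pulled out programmatically. The sketch below is an
illustration only (not part of the product); it assumes the cf service-key JSON has been captured as a
string, and extracts the connection values along with the CA certificate, which you would then write
to root.pem:

```python
import json

def tls_connection_info(service_key_json: str) -> dict:
    """Extract the values needed for a TLS connection from `cf service-key` output."""
    creds = json.loads(service_key_json)["credentials"]
    return {
        "host": creds["hostname"],
        "user": creds["username"],
        "password": creds["password"],
        "port": creds.get("port", 3306),
        "ca_pem": creds["tls"]["cert"]["ca"],  # write this to root.pem
    }

# Hypothetical service-key output, abbreviated.
example = json.dumps({"credentials": {
    "hostname": "example.mysql.service.internal",
    "username": "user-guid", "password": "secret", "port": 3306,
    "tls": {"cert": {"ca": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"}},
}})
info = tls_connection_info(example)
print(info["host"])
```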

4. Use mysql to establish a TLS connection to the MySQL instance. Run the following command:

mysql --host=HOSTNAME \
--user=USERNAME \
--password=PASSWORD \
--ssl-ca=root.pem \
--ssl-verify-server-cert

Where:
HOSTNAME is the value for hostname previously retrieved.

USERNAME is the value for username previously retrieved.

PASSWORD is the value for password previously retrieved.

For example:

$ mysql --host=27ce4cec-7d89-4e63-9a76-b8d9e4d57b61.mysql.service.internal \
    --user=f3867aa9bab54fa89661fb53d3d79c66 \
    --password=lfeykm0nbrphh7h7 \
    --ssl-ca=root.pem \
    --ssl-verify-server-cert

About data migration in VMware Tanzu for MySQL


This topic explains how data migration works in VMware Tanzu for MySQL on Cloud Foundry.

Read this topic before you do the procedures in Migrating Data in Tanzu for MySQL.

The migrate command


You can use the mysql-tools CLI plug-in to migrate MySQL data with the migrate command. The
migrate command does a streaming mysqldump and restore to migrate data from your source MySQL
database to a destination Tanzu for MySQL instance.

The command supports connections over TLS. If TLS is configured in the source and destination MySQL
instances, the data is securely streamed using TLS. For information about how to configure TLS in a Tanzu
for MySQL service instance, see Using TLS.

During data migration, the migrate command:

1. Creates a new Tanzu for MySQL service instance in the destination space with the same name as
the source MySQL service instance.

2. Copies the source data over to the new service instance.

3. Appends -old to the source service instance name whether the source is a Tanzu for MySQL
service instance or a user-provided service instance.

Use cases
The migrate command is used for most migration use cases.

However, many common migrations, such as from a small to a large database, can be done with the
simpler update-service command. See Use Cases Not Requiring the migrate Command.

If your use case is not listed in this topic, you might need to manually back up and restore your database to
migrate your data. See Backing up and Restoring with mysqldump.

Use cases requiring the migrate command


The following table lists migration use cases that must be done with the migrate command:

Use the migrate command for migrating from...    To...
Single Node                                      Highly Available (HA) Cluster or Multi-Site
Leader-Follower                                  HA Cluster or Multi-Site
HA Cluster                                       Leader-Follower or Single Node
Multi-Site                                       Single Node or Leader-Follower
Off-Platform Database *                          Tanzu for MySQL
Availability Zone (AZ)                           Another AZ

* If your off-platform database has encryption at rest or the Percona PAM Authentication plug-in enabled,
you cannot use the migrate command. Instead, you must follow the procedure in Restore from an Off-
Platform Logical Backup.


Use cases not requiring the migrate command


Not all migrations require the migrate command. For example, if you are migrating a database to a larger
single node or leader-follower plan, use the simpler update-service command.

The following table lists migration use cases that can be done with the update-service command. For
instructions about using the update-service command, see Update a Service Instance.

Use the update-service command for migrating from...    To...
Single Node                                             A larger Single Node
Leader-Follower                                         A larger Leader-Follower
Single Node                                             Leader-Follower of the same or larger size
Leader-Follower                                         Single Node of the same or larger size

Omitted data
The migrate command does not migrate all stored programs. The command only migrates views and does
not migrate triggers, routines, or events. To manually migrate triggers, routines, or events, contact
Broadcom Support.

The migrate command skips the following system schemas:

cf_metadata: This is metadata for binding information, such as database users mapped to GUIDs.

information_schema: This is metadata for the MySQL database.

mysql: This is a MySQL database that describes users, user accounts, and permissions. Tanzu for
MySQL uses it to authenticate users.

performance_schema: This is metadata about performance and server execution.

sys: This is a schema that summarizes performance_schema.

sys, performance_schema, and information_schema change dynamically with changes on a Tanzu for
MySQL service instance. These schemas are not migrated because the destination database has its own
version of the schemas. You do not need to back up these schemas.
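The skipped system schemas can be expressed as a simple filter. The sketch below is illustrative
only; the application schema names in the example are hypothetical, while the skipped set matches the
list above:

```python
# System schemas the migrate command skips, per the list above.
SKIPPED_SCHEMAS = {"cf_metadata", "information_schema", "mysql", "performance_schema", "sys"}

def schemas_migrated(all_schemas):
    """Return the schemas the migrate command would actually copy."""
    return sorted(s for s in all_schemas if s not in SKIPPED_SCHEMAS)

# orders_db and users_db are hypothetical application schemas.
print(schemas_migrated(["mysql", "sys", "orders_db", "users_db", "performance_schema"]))
```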

Migrating data in VMware Tanzu for MySQL


You can migrate data from any MySQL database to a VMware Tanzu for MySQL on Cloud Foundry service
instance.
You can migrate data from a source database to a destination database using the migrate command.

To migrate data in Tanzu for MySQL:

1. Enable the migrate command to access the source database. See Enable source access.

2. Migrate the data from the source database to the destination service instance. See Migrate data.

3. Ensure that the migration was successful. See Validate data.

4. Rebind and re-stage apps to the new destination service instance. To save resources, you can
optionally delete the old source database. See Rebind and re-stage apps.


Migrating large datasets can take several hours. Data migration is linear and
depends on the hardware being used. For example, if X amount of data takes 10
minutes to migrate, then 2X amount of data takes 20 minutes to migrate using the
same hardware.

Do a test migration with small datasets to estimate how long the entire migration
will take before migrating larger datasets.

Prerequisites
Before you do the procedures in this topic, you must have:

Reviewed the information about the migrate command in About data migration in Tanzu for
MySQL.

An existing MySQL database that is the source of the data you want to migrate from. MySQL
source databases can be:
a Tanzu for MySQL service instance

a database that is not a Tanzu for MySQL database

Tanzu for MySQL installed in the destination environment you want to migrate to.

A Tanzu for MySQL service plan available in the destination org and space you want to migrate
your data to. This service plan must fulfill the requirements in Resource planning. Talk to your
operator to determine which service plan is appropriate.

Resource planning
You must ensure that the Tanzu for MySQL service plan for your destination service instance has your
preferred VM type and persistent disk size. If the plan does not have enough space on disk to store the
data, migration fails.

You can choose a service plan for your destination service instance. If you select:

A single node or leader-follower plan, ensure that the persistent disk size is at least three
times the size of the source data.

An HA cluster plan, ensure that the persistent disk size is at least two times the size of the
source data.

For more information about recommended persistent disk sizes, see Persistent disk usage.
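These sizing rules reduce to a simple calculation. The helper below is a hypothetical sketch of the
guidance above, not an official sizing tool:

```python
def min_disk_gb(source_data_gb: float, plan_type: str) -> float:
    """Minimum persistent disk size implied by the documented multipliers."""
    multipliers = {
        "single-node": 3,       # three times the source data
        "leader-follower": 3,   # three times the source data
        "ha-cluster": 2,        # two times the source data
    }
    return source_data_gb * multipliers[plan_type]

print(min_disk_gb(40, "ha-cluster"))   # a 40 GB dataset needs at least 80 GB
```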

Install the mysql-tools cf CLI plug-in


VMware recommends that developers migrate data using the mysql-tools CLI plug-in with the migrate
command. For more information about mysql-tools cf CLI plug-in, see mysql-cli-plugin in GitHub.

You must have TLS activated to use this feature. To configure and activate TLS, see Configure TLS.

To install the mysql-tools cf CLI plug-in:

1. Install the plug-in by running:


cf install-plugin -r CF-Community "MysqlTools"

2. To confirm that the plug-in has successfully installed, run:

cf mysql-tools version

Enable source access


The migrate command must be able to access the source database. How you activate this access
depends on where the source database is located relative to the destination space and org.

If you are:

Migrating within an Org and Space: The migrate command can access the source without any
preparation. Continue to Migrate data.

Migrating across Spaces: If the source database is in a different space than the destination,
enable access using service instance sharing. See Source access across spaces.

Migrating from Off-Platform: If the source database is in a different Tanzu Operations Manager
foundation than the destination or not deployed on Tanzu Operations Manager, create a user-
provided service that can access the remote database. See Source access off-platform.

Source access across spaces


If your source MySQL service instance is in a different development space than your destination org and
space, you can migrate your data by sharing the service instance to the destination org and space. Service
instance sharing is enabled by default.

To share a source MySQL service instance with your destination org and space:

1. Do the procedure in Share service instances.

2. Continue to Migrate data.

Source access off-platform


If your source MySQL database is in a different Tanzu Operations Manager foundation or not deployed on
Tanzu Operations Manager, you can migrate your data by creating a local user-provided service instance
that can access the database.

For more information about user-provided service instances, see User-provided service instances.

To create a user-provided service instance to access the off-platform database:

1. Confirm that your off-platform MySQL database permits inbound and outbound network traffic to
your destination Tanzu Operations Manager foundation. You might need to modify firewall rules for
your off-platform MySQL database. Talk to your platform operator for assistance.

2. If your off-platform MySQL database requires connections over TLS, verify that your Tanzu
Operations Manager foundation is configured to recognize the CA certificate that signed the
MySQL server certificate. Talk to your platform operator for assistance.

3. Record the information needed to access your off-platform database. These values usually include:

hostname: the domain name or IP address of the off-platform source database


name: the name of the source database

username and password: the database account credentials

port: The port number for the database. This number is usually 3306.

If your off-platform database is a Tanzu for MySQL service instance, these values are in your
VCAP_SERVICES environment variable credentials hash. For more information, see MySQL
Environment Variables.

4. Create a Cloud Foundry user-provided service instance that can access the off-platform database:

cf cups CF-DB-INSTANCE -p CREDS-STRUCT

Where:

CF-DB-INSTANCE is the name that you want to give to the new database service instance
that you are migrating to.

CREDS-STRUCT is a JSON structure that contains the off-platform database access values
you recorded in the previous step.

For example:

$ cf cups migrating-db -p '{"hostname": "34.192.88.212", "name": "my_db",
    "username": "root", "password": "P455w0rD", "port": 3306}'

Creating user provided service migrating-db in org my-org / space my-space as admin...
OK

cf cups is a shortcut for the cf create-user-provided-service command.
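Because CREDS-STRUCT must be valid JSON, hand-typing it into a shell is error-prone. One way to
generate it, shown here as a sketch with made-up credential values, is to serialize a dict and paste
the printed command:

```python
import json

# Hypothetical access values recorded in the previous step.
creds = {
    "hostname": "34.192.88.212",
    "name": "my_db",
    "username": "root",
    "password": "P455w0rD",
    "port": 3306,
}

# json.dumps guarantees correctly quoted JSON for the -p flag.
creds_struct = json.dumps(creds)
print(f"cf cups migrating-db -p '{creds_struct}'")
```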

5. After your user-provided service instance is created, continue to Migrate data.

Migrate data
When your source database is accessible from the destination space:

If your source MySQL database is running on Tanzu Operations Manager, you must stop all
traffic to the service instance before you migrate your data. You can do this by stopping and
unbinding all of your apps. See Stop and unbind apps.

If your source MySQL database is running off-platform, do the procedure in Migrate Data to
Destination Instance.

Stop and unbind apps


To stop and unbind your apps:

1. Log in to your Cloud Foundry deployment by running:

cf login API-URL


When prompted, enter your credentials.

2. Target the org and space for the new destination service instance by running:

cf target -o DESTINATION-ORG -s DESTINATION-SPACE

3. Retrieve and record a list of your bound apps by running:

cf services

Your apps are listed in the bound apps column.

4. For each bound app you recorded in the previous step:

1. Stop the app by running:

cf stop APP

2. Unbind the app by running:

cf unbind-service APP SOURCE-INSTANCE

Where:

APP is the name of your app.

SOURCE-INSTANCE is the name of your source Tanzu for MySQL service instance.

5. Do the procedure in Migrate data to destination instance.

Migrate data to destination instance


To migrate data from your source database to your destination service instance:

1. Log in to your Cloud Foundry deployment by running:

cf login API-URL

When prompted, enter your credentials.

2. Target the org and space for the new destination service instance by running:

cf target -o DESTINATION-ORG -s DESTINATION-SPACE

3. View and select an available Tanzu for MySQL service plan by running:

cf marketplace

Tanzu for MySQL service plans are under p.mysql.

4. Migrate your data by running:

cf mysql-tools migrate SOURCE-INSTANCE PLAN

Where:


SOURCE-INSTANCE is the name of your source MySQL service instance or user-provided
service instance.

PLAN is the name of the service plan that you previously selected.

For example:

$ cf mysql-tools migrate my-instance db-small

2018/04/24 11:31:19 Creating new service instance "my-instance" for service p.mysql using plan db-small
2018/04/24 11:41:01 Unpacking assets for migration to /var/folders/dm/66n2j9xx02l8vs58q2whz4080000gn/T/migrate_app_101341527
2018/04/24 11:41:02 Started to push app
Done uploading
2018/04/24 11:41:09 Successfully pushed app
2018/04/24 11:41:10 Successfully bound app to v1 instance
2018/04/24 11:41:12 Successfully bound app to v2 instance
2018/04/24 11:41:12 Starting migration app
2018/04/24 11:41:25 Started to run migration task
2018/04/24 11:41:27 Migration completed successfully
2018/04/24 11:41:29 Cleaning up...

For debugging purposes, you can add the --no-cleanup flag to the previous command. If a
migration fails, this flag preserves the migration app and the newly-created service instance.
However, if a migration succeeds, the migration app is still deleted.

Validate data
After you migrate your data, you must verify that the data was successfully migrated by validating the data
in the new Tanzu for MySQL service instance.

You can validate the data by creating an SSH tunnel to gain direct command line access to the new Tanzu
for MySQL service instance.

To create an SSH tunnel to the instance and validate your data:

1. Create an SSH tunnel to your Tanzu for MySQL service instance by doing the following procedures
in Accessing Services with SSH:

1. Push your host app

2. Create your service key

3. Configure your SSH tunnel

4. Access your service instance

2. From the MySQL shell, validate that the data that you expect to see has been imported into the
Tanzu for MySQL service instance.

3. Exit the MySQL shell and stop the SSH tunnel.
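One informal validation approach is to compare per-table row counts gathered from the source and
destination (for example, via SELECT COUNT(*) in each MySQL shell). The helper below is a
hypothetical sketch of that comparison, not part of mysql-tools:

```python
def row_count_mismatches(source_counts, dest_counts):
    """Return {table: (source_rows, dest_rows)} for tables whose counts differ.
    Remember that views migrate, but triggers, routines, and events do not."""
    tables = set(source_counts) | set(dest_counts)
    return {
        t: (source_counts.get(t), dest_counts.get(t))
        for t in tables
        if source_counts.get(t) != dest_counts.get(t)
    }

source = {"orders": 120_000, "users": 4_500}   # hypothetical tables
dest = {"orders": 120_000, "users": 4_499}
print(row_count_mismatches(source, dest))      # flags the users table
```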

Rebind and re-stage apps


To complete the migration, rebind and re-stage any bound apps in your destination org and space. After
rebinding and re-staging your apps, VMware recommends deleting the old source database instance to save
resources.

If a developer rebinds an app to the Tanzu for MySQL service after unbinding, they must also
rebind any existing custom schemas to the app. When you rebind an app, stored code,
programs, and triggers break. For more information about binding custom schemas, see
Use custom schemas.

To rebind and re-stage your apps and delete the source database instance:

1. Bind the app to the new service instance by running:

cf bind-service APP INSTANCE

Where:

APP is the name of your app.

INSTANCE is the name of your Tanzu for MySQL service instance.

For example:

$ cf bind-service my-app my-instance

Binding service my-instance to app my-app in org my-org / space my-space as [email protected]...
OK
TIP: Use 'cf restage my-app' to ensure your env variable changes take effect

2. Re-stage the app by running:

cf restage APP

The app is now using your new Tanzu for MySQL service instance and should be operational again.

3. (Optional) Delete your source database. If your source database is deployed on Tanzu Operations
Manager, delete the old database instance by running the following command in your source space
and org:

cf delete-service SOURCE-INSTANCE

Where SOURCE-INSTANCE is the name of your source Tanzu for MySQL service instance.

About MySQL server defaults


This topic provides information about the defaults that VMware Tanzu for MySQL on Cloud Foundry applies
to its Percona Server components.

Learn about the server defaults for Tanzu for MySQL service plans. Most of the server defaults are the
same for all service plans, but some server defaults differ depending on the service plan.

For server defaults that are:


Common to all plans, see Server defaults for all plans.

Specific to single node and leader-follower plans, see Server defaults for single node and leader-
follower plans.

Specific to highly available (HA) cluster plans, see Server defaults for highly available cluster
plans.

Specific to multi‑site replication plans, see Server defaults for Multi‑Site Replication plans.

You can use optional parameters to change certain server defaults. For more information,
see Changing Defaults Using Arbitrary Parameters.

Server Defaults for All Plans


The following table lists the Tanzu for MySQL server defaults that are common to all service plans:

Applier Threads (wsrep_applier_threads)
Default: Number of CPU cores
Defines the number of threads to use when applying write-sets. For more information, see the Percona XtraDB Cluster documentation. This property can be overridden using an arbitrary parameter, or reset to the default (number of CPU cores) by setting an arbitrary parameter value of -1.

Audit Log (audit-log)
Default: OFF
To set to ON, select Enable Server Activity Logging in Monitoring. Logs are written in JSON to /var/vcap/store/mysql_audit_logs/mysql_server_audit.log.

Space Limit for Binary Logs (binlog_space_limit)
Default: 33% of the disk space on each service instance if Limit binary log disk use is selected
For more information, see Configure MySQL and the Percona blog site. If Limit binary log disk use is not selected, then no space limit is applied.

Character Set (character-set-server)
Default: utf8
This setting defaults all character sets. You can override this during a session.

Binary Log Removal (expire_logs_days)
Default: 3
This setting is the number of days before binary log files are automatically removed. For more information, see the MySQL documentation.

InnoDB Transaction Log Durability (innodb_flush_log_at_trx_commit)
Default: 1
At each transaction commit, logs are written and flushed to disk. For more information, see the MySQL documentation.

InnoDB Flush Method (innodb_flush_method)
Default: fsync
This setting defines the method used to flush data to InnoDB data and log files. For more information, see the MySQL documentation.

InnoDB Auto Increment Lock Mode (innodb-autoinc-lock-mode)
Default: 2
This setting uses the interleaved mode, which enables multiple statements to execute at the same time. There can be gaps in auto-incrementing columns. For more information, see the MySQL documentation.

InnoDB Buffer Pool Size (innodb-buffer-pool-size)
Default: 50% of the available memory on each service instance
This setting is dynamically configured to be 50% of the available memory on each service instance. For more information, see the MySQL documentation.

InnoDB Log Buffer Size (innodb-log-buffer-size)
Default: 32 MB
This setting defaults to 32 MB to avoid excessive disk I/O when issuing large transactions. For more information, see the MySQL documentation.

Log Bin Trust Function Creators (log-bin-trust-function-creators)
Default: ON
This setting relaxes constraints on how MySQL writes stored procedures to the binary log. For more information, see the MySQL documentation.

Lower Case Table Names (lower-case-table-names)
Default: 0
By default, all table names are case-sensitive. Operators can change this default setting on the MySQL Configuration page and permit developers to override the default when they create a service instance. For more information about using lowercase table names, see the MySQL documentation.

Max Allowed Packet (max-allowed-packet)
Default: 256 MB
If necessary, you can change this size setting in a session variable.

Max Size Allowed for Binary Logs (max_binlog_size)
Default: approximately 1/3 of the Space Limit for Binary Logs if Limit binary log disk use is selected
For more information, see Configure MySQL and the Percona documentation. If Limit binary log disk use is not selected, then no maximum is set.

Reverse Name Resolution (skip-name-resolve)
Default: ON
This deactivates reverse DNS lookups to improve performance. Tanzu for MySQL uses user credentials, rather than hostnames, to authenticate access. Therefore, most deployments do not need reverse DNS lookups. To activate reverse name resolution, deselect this option. For more information, see the MySQL documentation.

Skip Symbolic Links (symbolic-links)
Default: OFF
Tanzu for MySQL is configured to prevent the use of symlinks to tables. VMware recommends this security setting to prevent users from manipulating files on the server file system. For more information, see the MySQL documentation.

Table Definition Cache (table-definition-cache)
Default: 8192
For information about changing this setting, see the MySQL documentation.

Server defaults for single node and leader-follower plans


In addition to the server default settings that are common to all plans, single node and leader-follower plans
use the server defaults listed in the following table:


Max Connections (max-connections)
Default: 5000 connections per service instance
System processes count towards this limit. For more information, see the MySQL documentation.

Event Scheduler (event-scheduler)
Default: ON
Tanzu for MySQL enables the event scheduler so users can create and use events in their dedicated service instances. For more information, see the MySQL documentation.

InnoDB Log File Size (innodb-log-file-size)
Default: 256 MB
Tanzu for MySQL clusters default to a log-file size of 256 MB. For more information, see the MySQL documentation.

Collation Server (collation-server)
Default: utf8_general_ci
You can override this during a session. For instructions about viewing available and default collations, see the MySQL documentation.

Relay Log Recovery (relay-log-recovery)
Default: ON
When enabled, relay log recovery happens automatically after server startup. For more information, see the MySQL documentation.

Server defaults for highly available cluster plans


In addition to the server default settings that are common to all plans, HA cluster plans use the server
defaults listed in the following table:

Max Connections (max-connections)
Default: 5000 connections per service instance
System processes count towards this limit. For more information, see the MySQL documentation.

Event Scheduler (event-scheduler)
Default: OFF
When set to ON, users can create and use events in their dedicated service instances. For more information, see the MySQL documentation.

InnoDB Log File Size (innodb-log-file-size)
Default: 1024 MB
Highly available cluster plans default to a log-file size of 1024 MB. For more information, see the MySQL documentation.

Collation Server (collation-server)
Default: utf8_unicode_ci
You can override this during a session. For instructions about viewing available and default collations, see the MySQL documentation.

Relay Log Recovery (relay-log-recovery)
Default: ON
When enabled, relay log recovery happens automatically after server startup. For more information, see the MySQL documentation.

Server defaults for Multi‑Site Replication plans


In addition to the server default settings that are common to all plans, multi‑site replication plans use the
server defaults listed in the following table.


Max Connections (max-connections)
Default: 5000 connections per service instance
System processes count towards this limit. For more information, see the MySQL documentation.

Event Scheduler (event-scheduler)
Default: OFF
When set to ON, users can create and use events in their dedicated service instances. For more information, see the MySQL documentation.

InnoDB Log File Size (innodb-log-file-size)
Default: 1024 MB
Multi‑site replication plans default to a log-file size of 1024 MB. For more information, see the MySQL documentation.

Collation Server (collation-server)
Default: utf8_unicode_ci
You can override this setting during a session. For instructions about viewing available and default collations, see the MySQL documentation.

Relay Log Recovery (relay-log-recovery)
Default: OFF
When enabled, relay log recovery happens automatically after server startup. For more information, see the MySQL documentation.

Changing defaults using arbitrary parameters


You can use optional parameters to change server defaults when using VMware Tanzu for MySQL on Cloud
Foundry.

Optional parameters for changing server defaults


You can configure optional parameters to change certain Tanzu for MySQL server defaults. You might want
to configure optional parameters in the following cases:

You have a read-heavy or write-heavy application. See Workloads.

You are migrating from a case-insensitive database. See Lowercase Table Names.

You want to use a different character set or collation than the default. See Character Sets.

You want to configure leader-follower replication. See Synchronous Replication.

You have heavy traffic, or spikes in traffic, while using a high availability topology. See WSREP
Applier Threads.

The procedures in this topic use the Cloud Foundry Command Line Interface (cf CLI). You can also use
Apps Manager to perform the same tasks using a graphical UI.

The Tanzu for MySQL service instances are configured by default with industry best
practices. For information about the configured server defaults, see About MySQL Server
Defaults.

Set optional parameters


You can change the default configuration of optional parameters by creating a new service instance or
updating an existing service instance.


The available optional parameters include:

workload

enable_lower_case_table_names

default-charset and default-collation

replication_mode

backup-schedule

optimize_for_short_words

wsrep_applier_threads

To set optional parameters:

1. Do one of the following:

If you want to create a new service instance, run:

cf create-service p.mysql PLAN SERVICE-INSTANCE \
-c '{ "PARAMETER": "PARAMETER-VALUE"}'

If you want to update an existing service instance, run:

cf update-service SERVICE-INSTANCE \
-c '{ "PARAMETER": "PARAMETER-VALUE"}'

The -c flag accepts a valid JSON object containing service-specific configuration parameters, provided either in-line or in a file.

2. Verify that the cf command ran successfully by running:

watch cf services

Wait for the last operation for your instance to show as create succeeded.

For example:

$ watch cf services
Getting services in org my-org / space my-space as admin...
OK

name      service    plan       bound apps   last operation
myDB      p.mysql    db-small                create succeeded
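Before running either command against a real foundation, it can help to confirm that the `-c` argument is valid JSON, because cf rejects malformed parameter objects. The following sketch (the instance name and parameter are placeholders) validates the JSON locally and prints the command it would run rather than invoking cf:

```shell
# Hypothetical optional parameter; substitute any parameter from the list above.
params='{ "workload": "read-heavy" }'

# Validate the JSON locally before handing it to the cf CLI.
echo "$params" | python3 -m json.tool > /dev/null || { echo "invalid JSON" >&2; exit 1; }

# Dry run: print the cf command instead of executing it.
echo "cf update-service my-instance -c '$params'"
```

Running the printed command against a real foundation then proceeds as in step 2 above.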

Workloads
The following table describes how to use the workload optional parameter to adjust server default settings
for different workload profiles.

workload

Type: String
Default: mixed
Description: Set this parameter to mixed, read-heavy, or write-heavy. See Workload Profile Types.
Usage: create-service or update-service

Workload Profile Types


The following table lists the workload profiles that developers can use to configure MySQL instances based
on their specific app workloads.

Mixed Workload
By default, each MySQL service instance is configured for a mixed workload. This workload is equally heavy on reads and writes. The configuration for this profile is described in About MySQL Server Defaults.

Read-Heavy Workload
For apps that have a large number of reads, you can configure your service instances with a read-heavy workload. The read-heavy profile changes the following server defaults listed in About MySQL Server Defaults:

innodb_buffer_pool_size is increased to 75% of the available memory on each service instance.

innodb_flush_method is set to O_DIRECT.

Write-Heavy Workload
For apps that write to the database frequently, you can configure your service instances with a write-heavy workload. The write-heavy profile changes the following server defaults listed in About MySQL Server Defaults:

innodb_buffer_pool_size is increased to 75% of the available memory on each service instance.

innodb_flush_method is set to O_DIRECT.

innodb_log_file_size is increased to 1 GB.

max_allowed_packet is increased to 1 GB.
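As a rough illustration of what the profiles change, the buffer-pool sizing for each profile can be sketched as follows (the instance memory figure is made up; real values depend on the plan's VM type):

```shell
# Hypothetical service-instance memory, in MB.
instance_memory_mb=4096

# Mixed workload: innodb_buffer_pool_size is 50% of available memory.
mixed_pool_mb=$(( instance_memory_mb * 50 / 100 ))

# Read-heavy and write-heavy workloads raise it to 75%.
heavy_pool_mb=$(( instance_memory_mb * 75 / 100 ))

echo "mixed: ${mixed_pool_mb} MB, read-heavy/write-heavy: ${heavy_pool_mb} MB"
```

With 4096 MB of memory, this prints a 2048 MB pool for the mixed profile and a 3072 MB pool for the heavy profiles.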

Lowercase table names


If you are migrating a database from a system that was case insensitive, you can enable lowercase table
names to change all table names to lowercase.

For example, if your database had the table names TableName and TABLEname, when you enable lowercase
table names both of the names change to tablename and are interpreted as the same table.

For more information, see the MySQL documentation.

The following table describes how to use the enable_lower_case_table_names optional parameter.

enable_lower_case_table_names

Type: Boolean
Default: Set by the operator in the MySQL Configuration pane in the tile. See Configure MySQL.
Description: The operator can set a default for this parameter and permit developers to override the default. If you set this to true, table names are stored in lowercase. See About lowercase table names.

Before you enable this feature, ensure that all tables have lowercase names. Tables with uppercase names are inaccessible after enabling lowercase table names.

Usage: create-service or update-service

Character sets
The following table describes how to use the default-charset and default-collation optional
parameters to change the character sets used in databases.

default-charset

Type: String
Default: utf8
Description: You can set this to any MySQL supported character set. For information about character sets and collations, see the MySQL documentation.
Usage: create-service or update-service

default-collation

Type: String
Default: utf8_general_ci
Description: The default-collation changes based on the default-charset. To set the default-collation, first set the default-charset. For instructions for viewing available and default collations, see the MySQL documentation.
Usage: create-service or update-service
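Because the two parameters are linked, it is convenient to supply both in a single parameters file, which the -c flag accepts in place of an in-line object. In this sketch, the file path, instance name, and chosen character set are placeholder examples:

```shell
# Write both parameters to a file. utf8mb4 is a placeholder; use any
# MySQL-supported character set and a matching collation.
cat > /tmp/charset-params.json <<'EOF'
{
  "default-charset": "utf8mb4",
  "default-collation": "utf8mb4_unicode_ci"
}
EOF

# Check that the file parses as JSON, then print the command this would run.
python3 -m json.tool /tmp/charset-params.json > /dev/null
echo "cf update-service my-instance -c /tmp/charset-params.json"
```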

Synchronous replication for leader-follower


If you use a leader-follower service instance, Tanzu for MySQL supports synchronous replication in addition
to the default asynchronous replication. In sync replication, data does not get committed to the leader node
until the follower acknowledges the commit and can replicate it.

The guarantee of redundancy gives sync replication an advantage over asynchronous replication in data
integrity. However, depending on latency, sync replication reduces the performance of write operations.

The following table describes how to use the replication_mode optional parameter.


replication_mode does not work for single node, HA cluster, or multi-site replication
plans.

replication_mode

Type: String
Default: async
Description: Set this parameter to one of the following:

semi-sync: This enables sync replication on a leader-follower service instance.

async: This restores the default asynchronous replication for a leader-follower service instance.

Usage: create-service or update-service

In Tanzu for MySQL, replication is called "sync," rather than "semi-sync." This is because
it is as synchronous as possible given the limits of MySQL. For more information about
MySQL semi-sync replication, see the MySQL documentation.

Synchronous replication timeout


By default, the timeout for sync replication is set to approximately 292 million years. Therefore, the leader
always waits for the follower to confirm receipt of the transaction. This guarantees that if the leader is lost,
a redundant copy of the data exists on the follower.

When the replication mode timeout is reached, the replication mode automatically reverts
to asynchronous without any user intervention. You can manually override this timeout by
setting a lower value.

The following table describes how to use the semi_sync_ack_timeout_in_ms optional parameter.

semi_sync_ack_timeout_in_ms

Type: Integer
Default: 2^63 milliseconds
Description: Sets the timeout in milliseconds for the leader to wait for a follower to acknowledge a replication operation.
Usage: create-service or update-service
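The "approximately 292 million years" figure quoted earlier follows directly from a default of 2^63 milliseconds; the conversion can be checked with a one-off calculation:

```shell
# Convert the default timeout of 2^63 milliseconds into years.
python3 - <<'EOF'
ms = 2 ** 63
seconds = ms / 1000
years = seconds / (365.25 * 24 * 60 * 60)
print(f"about {years / 1e6:.0f} million years")
EOF
```

This prints "about 292 million years", confirming that, in practice, the leader waits indefinitely unless you set a lower value.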

Backup schedule
The following table describes how to use the backup-schedule optional parameter.

backup-schedule

Type: Cron expression
Default: The operator sets the default
Description: Enter a cron expression using standard syntax. The cron expression sets the full backup schedule for your service instance. For example, entering a cron expression of 15 10 * * * triggers a full backup at 10:15 AM every day. Test your cron expression using a website such as Crontab Guru.

Configuring a cron expression overrides the default schedule for your service instance.

Usage: create-service or update-service
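A standard cron expression has five space-separated fields: minute, hour, day of month, month, and day of week. A small local sanity check of the example schedule above, before passing it to cf:

```shell
# The example schedule from the description: a full backup at 10:15 AM daily.
schedule="15 10 * * *"

# Disable globbing so the * fields are not expanded into file names,
# then split the expression into its fields.
set -f
set -- $schedule
set +f

if [ "$#" -eq 5 ]; then
  echo "ok: 5 fields (minute=$1, hour=$2)"
else
  echo "error: expected 5 fields, got $#" >&2
  exit 1
fi
```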

Optimize for short words


The following table describes how to use the optimize_for_short_words optional parameter.

optimize_for_short_words

Type: Boolean
Default: false
Description: Set this parameter to true to change the MySQL system variable innodb_ft_min_token_size to 1. This allows shorter words to be stored in the InnoDB full-text index.

Because this has the side effect of increasing the size of the index, you must monitor the memory usage of the service instance and choose a larger VM type when necessary. Also, the operator must prevent the index from becoming too large and ineffective by removing entries as described in the following steps. How often this needs to be done depends on the workload and how much data is changed in the full-text index. For more information about the innodb_ft_min_token_size system variable, see the MySQL documentation.

To remove full-text index entries for deleted records or old records:

1. Edit my.cnf to set innodb_optimize_fulltext_only=ON.

   For single node and leader-follower plans, the path to the file is /var/vcap/jobs/mysql/config/my.cnf.

   For HA cluster and Multi‑Site Replication plans, the path to the file is /var/vcap/jobs/pxc-mysql/config/my.cnf.

2. Run OPTIMIZE TABLE on the indexed tables.

3. When the optimization is done, set innodb_optimize_fulltext_only=OFF so that the query behaves normally for other tables.

For more information about InnoDB full-text index deletion, see the MySQL documentation.

Usage: create-service or update-service

WSREP applier threads


The following table describes how to use the wsrep_applier_threads optional parameter.


wsrep_applier_threads

Type: Integer
Default: 4
Description: Specifies the number of threads that can apply replication transactions in parallel in a high-availability cluster. Tanzu for MySQL defaults the number of threads to match the number of vCPUs on the MySQL VM, whereas Percona XtraDB Cluster defaults to 1. For further details, see the Percona XtraDB Cluster Docs.

This parameter only applies to high-availability service plans, and is rejected for service plans using other topologies.

Usage: create-service or update-service, as described in Set Optional Parameters

Developer Guide - Managing VMware Tanzu for MySQL


This section covers the following topics for managing VMware Tanzu for MySQL on Cloud Foundry:

Connecting to VMware Tanzu for MySQL


Customizing database credentials

Using management tools for VMware Tanzu for MySQL

Creating a service instance with Service-Gateway access

Using SSH to connect from outside a deployment

Triggering multi-site replication and failover

Backup and Restore


About Backups

Making Full Backups

Restoring Incremental Backups

Backing up and restoring with mysqldump

Monitoring node health for HA clusters

Troubleshooting instances

Connecting to VMware Tanzu for MySQL


This section covers connection topics for VMware Tanzu for MySQL on Cloud Foundry:

Customizing database credentials

Using management tools for Tanzu for MySQL

Creating a service instance with Service-Gateway access

Using SSH to connect from outside a deployment

Customizing database credentials


You can customize access credentials and privileges for VMware Tanzu for MySQL on Cloud Foundry
service instances.

You can customize database credentials by creating service keys with custom properties. For example, you
can create read-only access credentials to activate desktop tools to connect to your databases.

The following procedures use the Cloud Foundry Command Line Interface (cf CLI). You can also use Apps
Manager to do the same tasks using a graphical user interface. For information about Apps Manager, see
Getting Started with Apps Manager.

Create read-only access credentials


Tanzu for MySQL enables space developers to create read-only credentials for users who need read-only
access to the database. These users can audit and monitor the database without changing any data.

Any user that can create a service key can provision a fully-privileged service key.

To create and find read-only credentials for an existing service instance:

1. Create a new read-only service key for a service instance by running:

cf create-service-key SERVICE-INSTANCE-NAME KEY-NAME -c '{ "read-only": true }'

For example:

$ cf create-service-key mydb mykey1 -c '{ "read-only": true }'

Creating service key mykey1 for service instance mydb as admin...


OK

2. Retrieve the read-only credentials from the service key by running:

cf service-key SERVICE-INSTANCE-NAME KEY-NAME

For example:

$ cf service-key mydb mykey1

{
"hostname": "a7113e41-7254-4f5a-a0cf-a88b052c8b10.mysql.service.intern
al",
"jdbcUrl": "jdbc:mysql://a7113e41-7254-4f5a-a0cf-a88b052c8b10.mysql.se
rvice.internal:3306/service_instance_db?user=973eb219bd554dfc9794bc29a3
01bcb1\u0026password=zr3aqa847tzm6cls\u0026sslMode=VERIFY_IDENTITY\u002
6useSSL=true\u0026requireSSL=true\u0026serverSslCert=/etc/ssl/certs/ca-
certificates.crt",
"name": "service_instance_db",
"password": "zr3aqa847tzm6cls",
"port": 3306,
"tls": {
"cert": {

"ca": "-----BEGIN CERTIFICATE-----\nMIIDDzCCAfegAwIBAgIUW0tF3p3wubz+
0GMH/850aVUIPnUwDQYJKoZIhvcNAQEL\nBQAwFzEVMBMGA1UEAxMMVG9vbHNtaXRoc0NBM
B4XDTIwMDkyMTA2MjcxMFoXDTIx\nMDkyMTA2MjcxMFowFzEVMBMGA1UEAxMMVG9vbHNtaX
Roc0NBMIIBIjANBgkqhkiG\n9w0BAQEFAAOCAQ8AMIIBCgKCAQEAv5lmGmSCIkV2w1axS/v
Gk7GjQHnTtjhme4cO\nvT1Nbv6oWqt0Tlm+2gzGb8W7A6SsIEN33ltq4LTWEFK8t0htphDe
1xkAf1Eq7jWM\nnS9aFnXyEuqw5fzWAjQMMqd3JvvZ2Z85o9NaHdi+XOlQAv9UHlWkjaSAv
FoRyaC7\npI0GNF8/QpvHORdPxpyGey/LtE8FxSKb8EL1y430LT7N/PxmVmFnySItlMbBiX
cA\nTkosY+9IswMwrVyYBwN65UoC7sKomjrloVNHhErm5pZv1hlEvEK116wiNY//9Wav\nA
mUneQ4LpjMPYXDGhHL04mMc2ySsrFW0lI8zcYzbEQBUQN5ovwIDAQABo1MwUTAd\nBgNVHQ
4EFgQUyCp0znZlP1d+vQ9U4tpzs1g/hrAwHwYDVR0jBBgwFoAUyCp0znZl\nP1d+vQ9U4tp
zs1g/hrAwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOC\nAQEAfh26fULdpurm
RdE9KKcRGVY56fFk2SbxITTIoHULtQY5pzau9KVOKGl2+czM\n875QC1YviBoonQZE8LSA1
A1gaj9s+XT5/fCGRagU/ODZX/sBDJMQJjaN+/QFRhom\nXKHZ+1nCPJqSiGGDJOANtZT1Xl
fz+cKreuDfPysAA+s5row17CUIcYcC0WTNgVE9\nGdkjzF9ZDakLHekkQ9F2nMmEhwRTwxw
neqJzcTFqDgWiIZpzkF6Ck90Ay43mpc7N\nU/osEYJlW10NJy8+wq11yZ50T3Z8EFkkbzo9
QipnfW1byY+JstVeR0uLmUzNmkyy\nNBUf8fcYBdCLr2lDvOiUGhRw6w==\n-----END CE
RTIFICATE-----\n\n"
}
},
"uri": "mysql://973eb219bd554dfc9794bc29a301bcb1:zr3aqa847tzm6cls@a711
3e41-7254-4f5a-a0cf-a88b052c8b10.mysql.service.internal:3306/service_in
stance_db?reconnect=true",
"username": "973eb219bd554dfc9794bc29a301bcb1"
}

In cf CLI v8, the response includes a top-level credentials key. Earlier versions
of the cf CLI do not include a top-level credentials key.

Creating custom username credentials


Tanzu for MySQL enables space developers to create custom usernames for service keys and service
bindings. You can create these credentials for users who want to access the database with a specific
username.

Any user that can create a service key can provision a fully-privileged service key.

To create and find custom username credentials for an existing service instance:

1. Create a new service key and username for a service-instance by running:

cf create-service-key SERVICE-INSTANCE-NAME KEY-NAME -c '{ "username": "NEW-USERNAME" }'

For example:

$ cf create-service-key mydb mykey2 -c '{ "username": "myuser" }'

Creating service key mykey2 for service instance mydb as admin...


OK


2. Retrieve the credentials from the service key by running:

cf service-key SERVICE-INSTANCE-NAME KEY-NAME

For example:

$ cf service-key mydb mykey2

{
"hostname": "a7113e41-7254-4f5a-a0cf-a88b052c8b10.mysql.service.intern
al",
"jdbcUrl": "jdbc:mysql://a7113e41-7254-4f5a-a0cf-a88b052c8b10.mysql.se
rvice.internal:3306/service_instance_db?user=myuser\u0026password=bdjq5
o19ax4suzmg\u0026sslMode=VERIFY_IDENTITY\u0026useSSL=true\u0026requireS
SL=true\u0026serverSslCert=/etc/ssl/certs/ca-certificates.crt",
"name": "service_instance_db",
"password": "bdjq5o19ax4suzmg",
"port": 3306,
"tls": {
"cert": {
"ca": "-----BEGIN CERTIFICATE-----\nMIIDDzCCAfegAwIBAgIUW0tF3p3wubz+
0GMH/850aVUIPnUwDQYJKoZIhvcNAQEL\nBQAwFzEVMBMGA1UEAxMMVG9vbHNtaXRoc0NBM
B4XDTIwMDkyMTA2MjcxMFoXDTIx\nMDkyMTA2MjcxMFowFzEVMBMGA1UEAxMMVG9vbHNtaX
Roc0NBMIIBIjANBgkqhkiG\n9w0BAQEFAAOCAQ8AMIIBCgKCAQEAv5lmGmSCIkV2w1axS/v
Gk7GjQHnTtjhme4cO\nvT1Nbv6oWqt0Tlm+2gzGb8W7A6SsIEN33ltq4LTWEFK8t0htphDe
1xkAf1Eq7jWM\nnS9aFnXyEuqw5fzWAjQMMqd3JvvZ2Z85o9NaHdi+XOlQAv9UHlWkjaSAv
FoRyaC7\npI0GNF8/QpvHORdPxpyGey/LtE8FxSKb8EL1y430LT7N/PxmVmFnySItlMbBiX
cA\nTkosY+9IswMwrVyYBwN65UoC7sKomjrloVNHhErm5pZv1hlEvEK116wiNY//9Wav\nA
mUneQ4LpjMPYXDGhHL04mMc2ySsrFW0lI8zcYzbEQBUQN5ovwIDAQABo1MwUTAd\nBgNVHQ
4EFgQUyCp0znZlP1d+vQ9U4tpzs1g/hrAwHwYDVR0jBBgwFoAUyCp0znZl\nP1d+vQ9U4tp
zs1g/hrAwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOC\nAQEAfh26fULdpurm
RdE9KKcRGVY56fFk2SbxITTIoHULtQY5pzau9KVOKGl2+czM\n875QC1YviBoonQZE8LSA1
A1gaj9s+XT5/fCGRagU/ODZX/sBDJMQJjaN+/QFRhom\nXKHZ+1nCPJqSiGGDJOANtZT1Xl
fz+cKreuDfPysAA+s5row17CUIcYcC0WTNgVE9\nGdkjzF9ZDakLHekkQ9F2nMmEhwRTwxw
neqJzcTFqDgWiIZpzkF6Ck90Ay43mpc7N\nU/osEYJlW10NJy8+wq11yZ50T3Z8EFkkbzo9
QipnfW1byY+JstVeR0uLmUzNmkyy\nNBUf8fcYBdCLr2lDvOiUGhRw6w==\n-----END CE
RTIFICATE-----\n\n"
}
},
"uri": "mysql://myuser:bdjq5o19ax4suzmg@a7113e41-7254-4f5a-a0cf-a88b05
2c8b10.mysql.service.internal:3306/service_instance_db?reconnect=true",
"username": "myuser"
}

In cf CLI v8, the response includes a top-level credentials key. Earlier versions
of the cf CLI do not include a top-level credentials key.

Using management tools for VMware Tanzu for MySQL


This topic describes tools that developers can use to access their VMware Tanzu for MySQL on Cloud Foundry databases.

You can access your Tanzu for MySQL databases using the following tools:

MySQLWeb Database management app.

cf CLI MySQL plug-in.

Desktop Tools.

MySQLWeb Database management app


The MySQLWeb app provides a web UI for managing Tanzu for MySQL databases. This free app lets you
view and operate on tables, indexes, constraints, and other database structures, and directly run SQL
commands.

You can run the MySQLWeb app in the following ways:

Standalone on your own machine

Deployed to Tanzu Operations Manager

If you deploy MySQLWeb to Tanzu Operations Manager, you can configure it in the deployment manifest to
automatically bind to a specific service instance.

See the MySQLWeb code repo for how to install and run MySQLWeb.

cf CLI MySQL plugin


To connect to your Tanzu for MySQL databases from a command line, use the cf CLI MySQL plugin. The
plug-in allows you to:

Inspect databases for debugging

Manually adjust database schema or contents in development environments

Dump and restore databases

To install the cf CLI MySQL plugin, run the following:

$ cf install-plugin -r "CF-Community" mysql-plugin

For more information, see the cf-mysql-plugin repository.

Desktop Tools
You can connect your Tanzu for MySQL databases to a desktop tool; for example, MySQL Workbench or
Sequel Pro, if you have the credentials for your MySQL service instance.

If you do not have credentials for your MySQL service instance, follow the procedure in Create Read-only
Access Credentials.

To connect your databases to a desktop tool:

1. To retrieve the credentials from your MySQL service instance service key, run the following
command:


cf service-key SERVICE-INSTANCE KEY-NAME

Where:

SERVICE-INSTANCE is the name of your service instance.

KEY-NAME is the name of your service key.

For example:

$ cf service-key mydb mykey1

{
"credentials": {
"hostname": "72c66633-8a60-466a-8f8a-8beee5e548b8.mysql.service.i
nternal",
"jdbcUrl": "jdbc:mysql://72c66633-8a60-466a-8f8a-8beee5e548b8.mys
ql.service.internal:3306/service_instance_db?user=6bf07ae455a14064a9073
cec8696366c\u0026password=a22aaa2a2a2aaaaa\u0026=true",
"name": "service\_instance\_db",
"password": "a22aaa2a2a2aaaaa",
"port": 3306,
"uri": "mysql://6bf07ae455a14064a9073cec8696366c:a22aaa2a2a2aaaaa
@72c66633-8a60-466a-8f8a-8beee5e548b8.mysql.service.internal:3306/servi
ce_instance_db?reconnect=true",
"username": "6bf07ae455a14064a9073cec8696366c"
}
}

This procedure assumes you are using cf CLI v8 or greater. Earlier cf CLI versions
do not include the top-level credentials JSON key in their cf service-key
response.

2. Record the following values:

hostname

name

password

port

username

3. Configure an SSH tunnel using the values for hostname and port that you just recorded. For
information about configuring an SSH tunnel, see Configure your SSH tunnel.

4. Configure a connection in your desktop tool using the values for hostname, name, password, port,
and username that you recorded earlier.
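The recorded fields can also be extracted from the service-key JSON mechanically. The sketch below runs against a trimmed copy of the example response above (the credential values are the same dummy values shown there):

```shell
# Trimmed copy of the example `cf service-key` response (cf CLI v8 wraps
# the fields in a top-level "credentials" key).
cat > /tmp/mykey.json <<'EOF'
{
  "credentials": {
    "hostname": "72c66633-8a60-466a-8f8a-8beee5e548b8.mysql.service.internal",
    "name": "service_instance_db",
    "password": "a22aaa2a2a2aaaaa",
    "port": 3306,
    "username": "6bf07ae455a14064a9073cec8696366c"
  }
}
EOF

# Print the fields a desktop tool's connection form needs.
python3 - <<'EOF'
import json
creds = json.load(open("/tmp/mykey.json"))["credentials"]
for field in ("hostname", "name", "port", "username"):
    print(f"{field}: {creds[field]}")
EOF
```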

Creating a service instance with Service-Gateway access


You can create a service instance with service-gateway access to allow connection to VMware Tanzu for
MySQL on Cloud Foundry. You can activate and deactivate service-gateway access on an existing service
instance.

For general information about service-gateway access, including architecture information and use cases,
see About Service-Gateway access.

If service-gateway access is activated for the foundation, the external components (outside the foundation)
can connect to MySQL service instances. These external components are also referred to as off-platform
components.

These components are typically:

Apps running externally or on other foundations

Database administration and management tools; for example, MySQL Workbench

Apps that are deployed in the same foundation as the service instance can connect to the
service instance directly without going through the service-gateway.

Prerequisites
The procedures in this topic assume:

You meet the prerequisites for using VMware Tanzu for MySQL on Cloud Foundry. For more
information, see Prerequisites in Using VMware Tanzu for MySQL on Cloud Foundry.

Your operator has enabled service-gateway access. If you do not know if the foundation is enabled
for service-gateway access, contact your operator.

Create a service instance that allows off-platform access


The following procedure describes how to create a new MySQL service instance that can be accessed by
external components.

To create a service instance that enables service-gateway access:

1. Run:

cf create-service p.mysql PLAN SERVICE-INSTANCE-NAME -c '{"enable_external_access": true}'

Where:

PLAN is the name of the Tanzu for MySQL plan you want to use.

SERVICE-INSTANCE-NAME is a name you choose for the service instance. This name
appears under service in the output from cf services.

2. Obtain credentials by creating a service key. Run:

cf create-service-key SERVICE-INSTANCE-NAME SERVICE-KEY

Where:


SERVICE-INSTANCE-NAME is the name of the instance you created above.

SERVICE-KEY is a name you choose for the service key. Choose a name that indicates that
the key contains credentials for off-platform access.

If you deactivate and then activate service-gateway access again on the foundation, you must create new service keys to obtain a new set of credentials for service-gateway access.

3. Use the keys in the service key to access the service instance from outside the foundation.

This is an example of a service key:

{
"hostname": "tcp.turtlegreen.cf-app.com",
"jdbcUrl": "jdbc:mysql://tcp.turtlegreen.cf-app.com:1035/service_instance_db?u
ser=4801b239ba514be0be393cb33a0f3431\u0026password=g3mfwbz00byl6s5a\u0026sslMod
e=VERIFY_IDENTITY\u0026useSSL=true\u0026requireSSL=true\u0026enabledTLSProtocol
s=TLSv1.2\u0026serverSslCert=/etc/ssl/certs/ca-certificates.crt",
"name": "service_instance_db",
"password": "g3mfwbz00byl6s5a",
"port": 1035,
"tls": {
"cert": {
"ca": "-----BEGIN CERTIFICATE-----\nMIIDLTCCAhWgAwIBAgIUTgFaBwCzHAZfdQ5gHt
ol0IjMUXIwDQYJKoZIhvcNAQEL\nBQAwJjEkMCIGA1UEAxMbZG0tcm9vdC5kZWRpY2F0ZWQtbXlzcWw
uY29tMB4XDTIw\nMTEwMjE0MDUyNloXDTIxMTEwMjE0MDUyNlowJjEkMCIGA1UEAxMbZG0tcm9vdC5k
\nZWRpY2F0ZWQtbXlzcWwuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC\nAQEArLEuvd6
HKVEgIs+SeZBMVVT7UafRQp2NWNV1mYS4zthXzP3q7MNPQr3Hr+qp\nANO95Mnq5bCxXAIHFIOUS4nH
kSYPSNkaGmkRrUiDLkEH+xGodAKnmshPcfuhW8gO\nc5RTrqgCsNEzpAask7MQoj9njp8oQyNQ2qS7z
m9t91XYiLc1RstKc9jnyU3xzJDr\n6+FBqC9uwyJIhV9fGsmUxnB7OMS8kx/uYmOPeNL6ywIAypQBaI
IPs7THzqDPe+Qi\nY8o2J5ylFWXasz3tGjtTCetSmrfyBzZFNc1EmqzABkNTXi/qfRs5KtS+UtRqtIs
F\nTgL/F0bBlZe15bv7MahMVRqeOwIDAQABo1MwUTAdBgNVHQ4EFgQUNdcf3u9oTtLl\noQ3Y7J5wCC
tNKLAwHwYDVR0jBBgwFoAUNdcf3u9oTtLloQ3Y7J5wCCtNKLAwDwYD\nVR0TAQH/BAUwAwEB/zANBgk
qhkiG9w0BAQsFAAOCAQEAFEkmfosL5eLIri6Wi2dQ\nva5olI5xMwaHAp7gaqp9rxPUlugMgSsiNqzS
5fL/682HbsqLVQijEg4tbX5VeA/6\ndztZE58DUjXam1YOU6THUt8oeK6NtUJ3TmjTttFWB+x2yvQef
JoldGslBh06HzBr\nY5CrlkVsiLek2JKmU9LQ2XQ7CIZEzz20MJp8CrDDsn1U3BjUrUVmlLdgAtIuWg
J7\nufmYar41bWcMjsNvETrOxWtY5uvErmP+Z+0GGdYEUimLgxCc6WfBWdhMbEygOS4G\n6amSkb/rZ
THWr0z4swHdrNtP627jhtcdjlh5QFQYYxc8O/jeAehUdS06JjG9qUzP\nFQ==\n-----END CERTIFI
CATE-----\n"
}
},
"uri": "mysql://4801b239ba514be0be393cb33a0f3431:g3mfwbz00byl6s5a@tcp.turtlegreen.cf-app.com:1035/service_instance_db?reconnect=true",
"username": "4801b239ba514be0be393cb33a0f3431"
}

The four keys that change to include the TCP domain and TCP port are hostname, jdbcUrl, port,
and uri. The keys you need depend on the type of component you are accessing the service
instance from.
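For example, from an off-platform host you might assemble a MySQL client invocation from those keys. The following is only a sketch: the host, port, user, database, and CA path are the values from the example service key above, so substitute the fields from your own service key.

```shell
# Sketch: building a mysql client command from the example service key above.
# All values here come from that example key; replace them with your own.
HOST="tcp.turtlegreen.cf-app.com"
PORT=1035
DB_USER="4801b239ba514be0be393cb33a0f3431"
DB="service_instance_db"

# --ssl-mode=VERIFY_IDENTITY mirrors the sslMode parameter in the jdbcUrl;
# the client prompts for the password field of the service key.
CMD="mysql --host=$HOST --port=$PORT --user=$DB_USER -p \
--ssl-mode=VERIFY_IDENTITY --ssl-ca=/etc/ssl/certs/ca-certificates.crt $DB"
echo "$CMD"
```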

Enable an existing service instance for off-platform access


If you already have a MySQL service instance, you can make it accessible to external components by
activating service-gateway access.

To activate service-gateway access on an existing MySQL service instance:

1. Run:

cf update-service SERVICE-INSTANCE-NAME -c '{"enable_external_access": true}'

Where SERVICE-INSTANCE-NAME is the name of the service instance that you want to make
available for off-platform access. This name appears under service in the output from cf
services.

2. Obtain credentials by creating a service key. Follow steps 2 and 3 in Create a service instance that
allows off-platform access.

Deactivate off-platform access on a service instance


If you have a MySQL service instance that no longer needs to be accessed from outside the foundation,
deactivate service-gateway access for that service instance.

To deactivate service-gateway access on a MySQL service instance, run:

cf update-service SERVICE-INSTANCE-NAME -c '{"enable_external_access": false}'

Where SERVICE-INSTANCE-NAME is the name of the service instance for which you want to deactivate
service-gateway access.

Using SSH to connect from outside a deployment


You can use these instructions to establish an SSH tunnel connection to a VMware Tanzu for MySQL on
Cloud Foundry service instance from outside a deployment.

The preferred method to connect to a service instance from outside your deployment is
through service-gateway access. For information, see Creating a Service Instance with
Service-Gateway Access.

You might want to use an SSH tunnel in the following cases:

Connecting from a local workstation. For more information, see Tanzu for MySQL Tools.

Connecting from outside the location of your Tanzu for MySQL service instance

Connecting from a legacy app that is not in your Tanzu Platform for CF deployment

Taking a manual backup to be stored locally. For more information, see Backing up and Restoring
with mysqldump.

Any app deployed in your deployment can resolve BOSH DNS hostnames and forward traffic to MySQL
service instances using an SSH tunnel.

Prerequisite


To connect using an SSH tunnel, you must have SSH access to app containers. For more information, see
Configuring SSH access for Tanzu Platform for Cloud Foundry.

Procedure
To connect to a MySQL instance using an SSH tunnel, follow the procedures in Accessing Services with
SSH.
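As a sketch, the tunnel forwards a local port through an app container to the instance's BOSH DNS hostname. The app name, hostname, and ports below are hypothetical placeholders, not values from this page; the authoritative steps are in Accessing Services with SSH.

```shell
# Sketch: forwarding a local port to a MySQL service instance through an app
# container that has SSH access. APP_NAME, MYSQL_HOST, and the ports are
# hypothetical placeholders; take the real hostname from a service key.
APP_NAME="my-app"
MYSQL_HOST="example-guid.mysql.service.internal"  # BOSH DNS name (placeholder)
LOCAL_PORT=63306

# cf ssh holds the tunnel open; connect from another terminal via 127.0.0.1.
TUNNEL_CMD="cf ssh -L $LOCAL_PORT:$MYSQL_HOST:3306 $APP_NAME"
CLIENT_CMD="mysql --host=127.0.0.1 --port=$LOCAL_PORT --user=YOUR-USER -p"
echo "$TUNNEL_CMD"
echo "$CLIENT_CMD"
```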

Triggering multi-site replication failover and switchover


You can trigger a failover and switchover when using a multi-site configuration with VMware Tanzu for
MySQL on Cloud Foundry.

A failover or switchover performs the following tasks:

1. Redirects traffic from primary to secondary foundation

2. Promotes original follower to be the leader:

Failover discards your primary foundation’s former leader.

Switchover reconfigures that former leader as a new follower to your secondary


foundation’s newly-promoted leader.

3. Reconfigures replication

For information about when to trigger a failover or switchover, see About Failover and Switchover.

Before you trigger a failover or switchover, you must verify that the follower service instance is healthy. See
Verify Follower Health.

The procedures in this topic assume that you created the leader service instance in the
primary foundation and the follower service instance in the secondary foundation.

Verify follower health


Before you trigger a failover or switchover, you must verify that the follower service instance is healthy. If
your follower service instance is unhealthy, contact Support.

To verify the service instance:

1. Log in to the deployment for your secondary foundation by running:

cf login SECONDARY-API-URL

Where SECONDARY-API-URL is the API endpoint for your secondary foundation.

2. Obtain and record the GUID of the follower service instance by running:

cf service SERVICE-INSTANCE-NAME --guid

Where SERVICE-INSTANCE-NAME is the name of the follower service instance.


For example:

$ cf service secondary-db --guid


12345678-90ab-cdef-1234-567890abcdef

3. Obtain the credentials and IP address needed to SSH into the Tanzu Operations Manager VM by
following the procedure in Gather Credential and IP Address Information.

4. SSH into the Tanzu Operations Manager VM by following the procedure in Log in to the Tanzu
Operations Manager VM with SSH.

5. From the Tanzu Operations Manager VM, log in to your BOSH Director by following the procedure
in SSH Into the BOSH Director VM.

6. View the health of the service instance by running:

bosh -d service-instance_GUID instances

For example:

$ bosh -d service-instance_12345678-90ab-cdef-1234-567890abcdef instances

Using environment 'https://10.0.0.6:25555' as client 'admin'

Task 21409. Done

Deployment 'service-instance_12345678-90ab-cdef-1234-567890abcdef'

Instance                                    Process State  AZ  IPs
mysql/1373022d-4eab-46d3-8fd1-a12067edf597  running        z2  10.0.17.14

1 instances

Succeeded

7. Ensure that the service instance is running. If the service instance is failing, contact Support.
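If you script this check, one approach is to scan saved `bosh instances` output for any mysql instance that is not running. This is a sketch; the sample line below is taken from the example output above, and in practice you would redirect the real bosh output into the file.

```shell
# Sketch: flagging any mysql instance that is not "running" in saved
# `bosh -d service-instance_GUID instances` output. The sample line is the
# example from this page; redirect real bosh output into the file instead.
cat > instances.txt <<'EOF'
mysql/1373022d-4eab-46d3-8fd1-a12067edf597  running  z2  10.0.17.14
EOF

# Select the mysql instance rows, then look for any that are not running.
if grep '^mysql/' instances.txt | grep -qv ' running '; then
  STATUS="unhealthy"
else
  STATUS="healthy"
fi
echo "follower is $STATUS"
```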

Select your promoted leader topology


Failover and switchover operations promote your secondary foundation’s follower service instance to a leader. (A promoted follower is called a leader whether or not a new follower is configured for it.) The follower is a service instance created with a multi‑site replication plan.

During follower promotion, you can choose to update it to an HA Cluster plan. See Select a multi-site leader
topology for more information.
Updating to an HA cluster requires more time than keeping the existing multi‑site replication plan type.

Trigger a failover
You should trigger a failover only if you do not need to recover the leader service instance.


To trigger a failover:

1. Promote the follower

2. Delete or purge the former leader

3. Create a new follower

4. Reconfigure multi-site replication

Promote the follower


This procedure applies only to a multi-site follower service instance. If you try promoting any other service
instance to a leader, you see an error message similar to the following:

Updating service instance nonfollower-db as admin...

FAILED
Server error, status code: 502, error code: 10001, message: Service broker error: 1 error occurred:
    * the configuration parameter 'initiate-failover' is not a valid option

To promote the follower service instance to leader:

1. Log in to the deployment for your secondary foundation by running:

cf login SECONDARY-API-URL

Where SECONDARY-API-URL is the API endpoint for your secondary foundation.

2. Promote the follower service instance to leader by running one of these two commands, based on the selection you made above in Select your promoted leader topology:

To keep your service instance on its existing single-VM multi‑site replication topology, run
the following command:

cf update-service SECONDARY-INSTANCE \
-c '{"initiate-failover":"promote-follower-to-leader"}'

For example:

$ cf update-service secondary-db \
-c '{"initiate-failover":"promote-follower-to-leader"}'

Updating service instance secondary-db as admin...


OK

To scale up your service instance to a three-VM HA cluster topology, run the following
command:

cf update-service SECONDARY-INSTANCE \
  -c '{"initiate-failover":"promote-follower-to-leader"}' \
  -p HA-PLAN-NAME

Where HA-PLAN-NAME is the name of a plan configured in your secondary foundation’s Operations Manager with an HA Cluster topology.


For example:

$ cf update-service secondary-db \
  -c '{"initiate-failover":"promote-follower-to-leader"}' \
  -p configured-ha-plan

Updating service instance secondary-db as admin...


OK

3. If this command fails, do one of the following:

If you have unapplied local transactions on the follower service instance, wait for the
transactions to be applied and run the command again. The error message looks like the
following:

Updating service instance secondary-db as admin...

FAILED
Server error, status code: 502, error code: 10001, message: Service broker error: Promotion of follower failed - has 1 transactions still unapplied

If the leader service instance is still reachable and in read-write mode (it shouldn’t be),
follow the procedure in Trigger a Switchover instead. The error message looks like the
following:

Updating service instance secondary-db as admin...

FAILED
Server error, status code: 502, error code: 10001, message: Service broker error: Promotion of follower failed - the leader is still writable

4. Watch the progress of the service instance update by running:

watch cf services

Wait for the last operation for your instance to show as update succeeded.

For example:

$ watch cf services

Getting services in org my-org / space my-space as admin...

OK
name          service   plan                      bound apps   last operation
secondary-db  p.mysql   db-pxc-single-node-small               update succeeded

5. Reconfigure your global DNS load balancer to direct all traffic to apps in your secondary foundation.
See Configure Your GLB.

Delete or purge the former leader


When you do a failover, you cannot manually recover the leader service instance. After you promote the
follower service instance to leader, you must remove the former leader service instance. Otherwise, the
service instance can recover in read-write mode.

How you remove the service instance depends on whether its VMs are lost or the instance is unresponsive.

To remove the former leader service instance:

1. Log in to the deployment for your primary foundation by running:

cf login PRIMARY-API-URL

Where PRIMARY-API-URL is the API endpoint for the primary foundation.

2. Do one of the following:

If the former leader’s VM is lost or the instance is otherwise unresponsive, purge the service instance by following the procedure in Purge a Service Instance.

If the foundation is lost, you purge the service instance after following the
steps to recover the foundation's Cloud Controller database in Restoring
Deployments from Backup with BBR.

If your service instance is still responsive:

1. Remove all app bindings by following the procedure Unbind an App from a Service
Instance.

2. Delete the service keys from the former leader service instance.

3. Delete the service instance by following the procedure Delete a Service Instance.

Create a new follower


To reconfigure multi-site replication between two instances, a new follower without any data must be created
in the primary foundation.

To create a follower:

1. Log in to the deployment for your primary foundation by running:

cf login PRIMARY-API-URL

Where PRIMARY-API-URL is the API endpoint for the primary foundation.

2. Create a service instance using the multi‑site replication plan:

Follow the procedure in Create a Service Instance.

Do not name your service instance follower, because if you trigger a failover or switchover in the future, this instance will no longer be the follower.

Reconfigure multi-site replication


The follower in the primary foundation needs to catch up to the newly promoted leader in the secondary
foundation.

Reconfigure multi-site replication so that the primary foundation follower receives the data from the
secondary foundation leader.

To reconfigure replication, follow the procedure in Reconfigure multi-site replication.

Trigger a switchover using mysql-tools plugin


To reduce the complexity of managing multi-site instance replication, Tanzu for MySQL offers the mysql-tools plug-in for the cf CLI. You can use this plug-in to trigger the switchover automatically. After the switchover completes, the new leader instance on the secondary foundation has the same plan type as the original leader instance, and the new follower instance has the same plan type as the original follower instance.

To perform a switchover using the mysql-tools plug-in, you need:

The cf CLI v8 or later. Earlier cf CLI versions are not compatible with the latest version of the
mysql-tools plug-in. For more information, see the VMware Tanzu Platform for Cloud Foundry
documentation.

The latest version of the mysql-tools cf CLI plug-in. For more information about the plug-in, see the
mysql-cli-plugin repository on GitHub.

Plan types that are available on both the primary foundation and the secondary foundation.

To perform a switchover across foundations using the mysql-tools plug-in:

1. Save the Cloud Foundry configuration for targeting the primary foundation.

1. Log in to the deployment for the primary foundation:

cf login PRIMARY-API-URL

Where PRIMARY-API-URL is the API endpoint for the primary foundation.

2. Use the mysql-tools plug-in to save the configuration:

cf mysql-tools save-target PRIMARY-TARGET-NAME

Where PRIMARY-TARGET-NAME is your chosen name for the primary foundation.

2. Save the Cloud Foundry configuration for targeting the secondary foundation.

1. Log in to the deployment for the secondary foundation:

cf login SECONDARY-API-URL

Where SECONDARY-API-URL is the API endpoint for the secondary foundation.

2. Use the mysql-tools plug-in to save the configuration:

cf mysql-tools save-target SECONDARY-TARGET-NAME

Where SECONDARY-TARGET-NAME is your chosen name for the secondary foundation.


3. Use the mysql-tools plug-in to perform the switchover:

cf mysql-tools switchover \
  --primary-target PRIMARY-TARGET-NAME --primary-instance PRIMARY-INSTANCE \
  --secondary-target SECONDARY-TARGET-NAME --secondary-instance SECONDARY-INSTANCE

Where:

PRIMARY-TARGET-NAME is the name you chose for the primary foundation.

PRIMARY-INSTANCE is the name you chose for the primary instance.

SECONDARY-TARGET-NAME is the name you chose for the secondary foundation.

SECONDARY-INSTANCE is the name you chose for the secondary instance.

This step uses tokens from the first two steps of this procedure, and it must be
performed before those tokens expire (within minutes of performing the first two
steps).

This process can take several minutes to complete, depending on your selected service instances and foundation configurations. Any multi-site replication instances experience downtime while the leader and follower instances are redeployed.

The plug-in output includes status lines that show checkpoints for the various configuration steps:

When successful, leader will become secondary and follower will become primary.
Do you want to continue? [yN]: y
[leader-foundation] Checking whether instance 'primary' exists
[follower-foundation] Checking whether plan 'multisite' exists
[follower-foundation] Checking whether instance 'secondary' exists
[leader-foundation] Checking whether plan 'multisite' exists
[leader-foundation] Demoting primary instance 'primary'
[follower-foundation] Promoting secondary instance 'secondary'
[leader-foundation] Retrieving information for new secondary instance 'primary'
[follower-foundation] Registering secondary instance information on new primary instance 'secondary'
[follower-foundation] Retrieving replication configuration from new primary instance 'secondary'
[leader-foundation] Updating new secondary instance 'primary' with replication configuration
Successfully switched replication roles. primary = [follower-foundation] follower, secondary = [leader-foundation] leader

4. Purge the Cloud Foundry configuration information from your computer:

cf mysql-tools remove-target PRIMARY-TARGET-NAME


cf mysql-tools remove-target SECONDARY-TARGET-NAME

Where:

PRIMARY-TARGET-NAME is your chosen name for the primary foundation.

SECONDARY-TARGET-NAME is your chosen name for the secondary foundation.


Trigger a switchover manually


To trigger a switchover:

1. Make the leader read-only.

2. Promote the follower.

3. Reconfigure multi-site replication.

This procedure applies only to configured multi-site service instances. If you try promoting
other service instances to be read-only or a multi-site leader, you see an error message
similar to:

Updating service instance non-multisite-db as admin...

FAILED
Server error, status code: 502, error code: 10001, message: Service broker error: 1 error occurred:
    * the configuration parameter 'initiate-failover' is not a valid option

Make the leader read-only


1. Log in to the deployment for your primary foundation by running:

cf login PRIMARY-API-URL

Where PRIMARY-API-URL is the API endpoint for the primary foundation.

2. Set your leader to read-only.

If your leader is an HA service instance, set it to read-only and downscale it to a single node by running:

cf update-service PRIMARY-INSTANCE \
-c '{"initiate-failover":"make-leader-read-only"}' \
-p MULTI-SITE-REPLICATION-PLAN

Where MULTI-SITE-REPLICATION-PLAN is the name of a service plan configured with the multi‑site replication topology.

For example:

$ cf update-service primary-db \
-c '{"initiate-failover":"make-leader-read-only"}' \
-p multi-site-single-node-plan

Updating service instance primary-db as admin...


OK

If your leader is a single-node multi‑site replication service instance, set it to read-only by running:


cf update-service PRIMARY-INSTANCE \
-c '{"initiate-failover":"make-leader-read-only"}'

For example:

$ cf update-service primary-db \
-c '{"initiate-failover":"make-leader-read-only"}'

Updating service instance primary-db as admin...


OK

The service instance can be made writable again by running:

cf update-service --wait PRIMARY-INSTANCE -c '{"multisite": "make-leader-writeable"}'

For example:

$ cf update-service --wait primary-node -c '{"multisite": "make-leader-writeable"}'

Updating service instance primary-node as admin...


OK

To determine whether your leader was created from a multi‑site replication or an HA cluster plan, run cf services and look in the “plan” column. Consult your platform operator if you are unsure which plans correspond to which service instance type.

3. Watch the progress of the service instance update by running:

watch cf services

Wait for the last operation for your instance to show as update succeeded.

Promote the follower


1. Log in to the deployment for your secondary foundation by running:

cf login SECONDARY-API-URL

Where SECONDARY-API-URL is the API endpoint for your secondary foundation.

2. Promote the follower service instance to leader by running one of these two commands, based on the selection you made above in Select your promoted leader topology:

To keep your service instance on its existing single-VM multi‑site replication topology, run
the following command:

cf update-service SECONDARY-INSTANCE \
-c '{"initiate-failover":"promote-follower-to-leader"}'

For example:


$ cf update-service secondary-db \
-c '{"initiate-failover":"promote-follower-to-leader"}'

Updating service instance secondary-db as admin...


OK

To scale up your service instance to a three-VM HA cluster topology, run the following
command:

cf update-service SECONDARY-INSTANCE \
  -c '{"initiate-failover":"promote-follower-to-leader"}' \
  -p HA-PLAN-NAME

Where HA-PLAN-NAME is the name of a plan configured in your secondary foundation’s Operations Manager with an HA Cluster topology.

For example:

$ cf update-service secondary-db \
  -c '{"initiate-failover":"promote-follower-to-leader"}' \
  -p configured-ha-plan

Updating service instance secondary-db as admin...


OK

3. If this command fails, do one of the following:

If you have unapplied local transactions on the follower service instance, wait for the
transactions to be applied and then run the command again. The error message might look
like the following:

Updating service instance secondary-db as admin...

FAILED
Server error, status code: 502, error code: 10001, message: Service broker error: Promotion of follower failed - has 1 transactions still unapplied

If the leader service instance is still reachable and in read-write mode, verify that you
successfully completed the above steps in Make the leader read-only. The error message
might look like the following:

Updating service instance secondary-db as admin...

FAILED
Server error, status code: 502, error code: 10001, message: Service broker error: Promotion of follower failed - the leader is still writable

4. Watch the progress of the service instance update by running:

watch cf services

Wait for the last operation for your instance to show as update succeeded.

Reconfigure multi-site replication


To establish a connection between the service instances in the primary and secondary foundations, you
must reconfigure replication. Reconfiguring replication is similar to the procedure in Configure multi‑site
replication, except that the primary foundation service instance becomes the new follower and the
secondary foundation service instance becomes the new leader.

To successfully trigger a switchover, the follower dataset must be a subset of the leader dataset; that is, no new data has been written exclusively to the follower. The follower must also be no more than three days behind the leader.

If your follower instance does not satisfy these requirements, you must create a new
multi‑site replication service instance and reconfigure replication using this new, empty
instance as the follower.
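One way to check the subset requirement before attempting a switchover is MySQL's GTID_SUBSET() function, which returns 1 when its first GTID set is contained in the second. This is only a sketch: the GTID sets below are the example values quoted in the error output on this page, and in practice you would read each instance's gtid_executed value.

```shell
# Sketch: checking that the follower's executed GTID set is a subset of the
# leader's with MySQL's GTID_SUBSET() function. The GTID sets below are the
# example values from the error output on this page, not live data.
FOLLOWER_GTIDS="487e6056-6e93-11ea-8c96-42010a010806:1-9"
LEADER_GTIDS="487e6056-6e93-11ea-8c96-42010a010806:5-9"

# Run this query against the leader; a result of 1 means the follower's
# transactions are a subset of the leader's.
QUERY="SELECT GTID_SUBSET('$FOLLOWER_GTIDS', '$LEADER_GTIDS');"
echo "$QUERY"
```

For these particular sets the function returns 0, because transactions 1-4 exist only in the follower set, matching the "too far behind" failure case quoted later on this page.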

The following diagram describes the workflow for reconfiguring multi-site replication:

The steps shown in the diagram are:

1. Create host-info service key.

2. Record host-info service key.

3. Update secondary service instance with host-info service key.

4. Create credentials service key.

5. Record credentials service key.


6. Update primary service instance with credentials service key.

To reconfigure replication for the service instances:

1. Log in to the deployment for your primary foundation by running:

cf login PRIMARY-API-URL

2. Create a host-info service key for the service instance in your primary foundation:

cf create-service-key PRIMARY-INSTANCE SERVICE-KEY \
  -c '{"replication-request": "host-info"}'

Where:

PRIMARY-INSTANCE is the name of the follower service instance in the primary foundation.

SERVICE-KEY is a name you choose for the host-info service key.

For example:

$ cf create-service-key primary-db host-info \
  -c '{"replication-request": "host-info" }'

Creating service key host-info for service instance primary-db as admin...
OK

3. View the replication-credentials for your host-info service key by running:

cf service-key PRIMARY-INSTANCE SERVICE-KEY

Where:

PRIMARY-INSTANCE is the name of the follower service instance in the primary foundation.

SERVICE-KEY is the name of the host-info service key you created in the previous step.

For example:

$ cf service-key primary-db host-info

Getting key host-info for service instance primary-db as admin...

{
  "credentials": {
    "replication": {
      "peer-info": {
        "hostname": "878f5fb3-fcc5-43cd-8c1f-3018e9f277ad.mysql.service.internal",
        "ip": "10.0.19.12",
        "system_domain": "sys.primary-domain.com",
        "uuid": "878f5fb3-fcc5-43cd-8c1f-3018e9f277ad"
      },
      "role": "leader"
    }
  }
}

This procedure assumes you are using cf CLI v8 or greater. Earlier cf CLI versions
do not include the top-level credentials JSON key in their cf service-key
response.

4. Record the output of the previous command, and remove the top-level credentials key.
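One way to strip the wrapper, assuming python3 is available for JSON handling, is sketched below. The input is an abbreviated copy of the example host-info key above; in practice, save the real cf service-key output to the file instead.

```shell
# Sketch: stripping the top-level "credentials" wrapper that cf CLI v8 adds
# to `cf service-key` output, producing JSON to pass to `cf update-service -c`.
# The input below is the example host-info key from this page, abbreviated.
cat > host-info-key.json <<'EOF'
{
  "credentials": {
    "replication": {
      "peer-info": {
        "hostname": "878f5fb3-fcc5-43cd-8c1f-3018e9f277ad.mysql.service.internal",
        "ip": "10.0.19.12",
        "system_domain": "sys.primary-domain.com",
        "uuid": "878f5fb3-fcc5-43cd-8c1f-3018e9f277ad"
      },
      "role": "leader"
    }
  }
}
EOF

python3 - <<'EOF'
import json
with open("host-info-key.json") as f:
    key = json.load(f)
with open("host-info.json", "w") as f:
    json.dump(key["credentials"], f, indent=2)  # drop the wrapper key
EOF
```

You can then pass the resulting file contents to the update command, for example with `cf update-service SECONDARY-INSTANCE -c "$(cat host-info.json)"`.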

5. Log in to the deployment for your secondary foundation by running:

cf login SECONDARY-API-URL

6. Update your secondary foundation service instance with the host-info service key by running:

cf update-service SECONDARY-INSTANCE -c HOST-INFO

Where:

SECONDARY-INSTANCE is the name of the secondary service instance you created in Creating multi-site replication service instances.

HOST-INFO is the output you recorded in an earlier step.

For example:

$ cf update-service secondary-db -c '{
  "replication": {
    "peer-info": {
      "hostname": "878f5fb3-fcc5-43cd-8c1f-3018e9f277ad.mysql.service.internal",
      "ip": "10.0.18.12",
      "system_domain": "sys.primary-domain.com",
      "uuid": "878f5fb3-fcc5-43cd-8c1f-3018e9f277ad"
    },
    "role": "leader"
  }
}'

Updating service instance secondary-db as admin...


OK

7. Monitor the progress of the service instance update by running:

watch cf services

Wait for the last operation for your instance to show as update succeeded.

8. Create a credentials service key for your secondary foundation service instance by running:

cf create-service-key SECONDARY-INSTANCE SERVICE-KEY-NAME \
  -c '{"replication-request": "credentials"}'

Where:

SECONDARY-INSTANCE is the name of the service instance in the secondary foundation.

SERVICE-KEY-NAME is a name you choose for the credentials service key.

For example:

$ cf create-service-key secondary-db cred-key \
  -c '{"replication-request": "credentials" }'

Creating service key cred-key for service instance secondary-db as user@example.com...
OK

The -c flag is different from the one in step 2.

9. View the replication-credentials for your credentials service key by running:

cf service-key SECONDARY-INSTANCE SERVICE-KEY-NAME

Where:

SECONDARY-INSTANCE is the name of the service instance in the secondary foundation.

SERVICE-KEY-NAME is the name of the credentials service key you created earlier.

For example:

$ cf service-key secondary-db cred-key

Getting key cred-key for service instance secondary as admin...

{
  "credentials": {
    "replication": {
      "credentials": {
        "password": "a22aaa2a2a2aaaaa",
        "username": "6bf07ae455a14064a9073cec8696366c"
      },
      "peer-info": {
        "hostname": "zy98xw76-5432-19v8-765u-43219t876543.mysql.service.internal",
        "ip": "10.0.17.12",
        "system_domain": "sys.secondary-domain.com",
        "uuid": "zy98xw76-5432-19v8-765u-43219t876543",
        "ports": {
          "mysql": 3306,
          "agent": 8443,
          "backup": 8081
        }
      },
      "role": "follower"
    }
  }
}

This procedure assumes you are using cf CLI v8 or greater. Earlier cf CLI versions
do not include the top-level credentials JSON key in their cf service-key
response.

10. Record the output of the previous command, and remove the top-level credentials key.

11. Log in to the deployment for your primary foundation by running:

cf login PRIMARY-API-URL

12. Update the primary foundation service instance with the credentials service key by running:

cf update-service PRIMARY-INSTANCE -c CREDENTIALS

Where:

PRIMARY-INSTANCE is the name of the service instance in the primary foundation.

CREDENTIALS is the output you recorded in the previous step.

For example:

$ cf update-service primary-db -c '{
  "replication": {
    "credentials": {
      "password": "a22aaa2a2a2aaaaa",
      "username": "6bf07ae455a14064a9073cec8696366c"
    },
    "peer-info": {
      "hostname": "zy98xw76-5432-19v8-765u-43219t876543.mysql.service.internal",
      "ip": "10.0.17.12",
      "ports": {
        "agent": 8443,
        "backup": 8081,
        "mysql": 3306
      },
      "system_domain": "sys.secondary-domain.com",
      "uuid": "zy98xw76-5432-19v8-765u-43219t876543"
    },
    "role": "follower"
  }
}'

Updating service instance primary-db as admin...


OK

13. Watch the progress of the service instance update by running:


watch cf services

Wait for the last operation for your instance to show as update succeeded.

You now have multi-site replication successfully configured: the new leader is in your secondary foundation and the new follower is in the primary foundation.

14. If this command fails and you get one of the following errors, you must create a new multi‑site
replication service instance in your primary foundation and reconfigure replication using this new
empty instance as the follower.

If your follower service instance is more than three days behind the leader, you see an error
message similar to the following:

$ cf update-service primary-db -c /tmp/credentials-key.json

Updating service instance primary-db as admin...
FAILED
Server error, status code: 502, error code: 10001, message: Service broker error: Establishing Replication Failed - follower is too far behind Leader to start replication
Leader GTIDs offering: "487e6056-6e93-11ea-8c96-42010a010806:5-9"
Follower GTIDs missed: "487e6056-6e93-11ea-8c96-42010a010806:1-9"
Try again with an empty instance or contact your operator to troubleshoot

If your follower has a divergent dataset from the leader, you see an error message similar
to the following:

$ cf update-service primary-db -c /tmp/credentials-key.json

Updating service instance primary-db as admin...
FAILED
Server error, status code: 502, error code: 10001, message: Service broker error: Establishing Replication Failed - the follower has divergent data
Leader GTIDs applied: "bd2ff185-6947-11ea-80d8-42010a000808:1-20"
Follower GTIDs applied: "c1abd2a4-6947-11ea-8099-42010a010807:1-15"
Try again with an empty instance or contact your operator to troubleshoot

In either case, you must create a new multi‑site replication service instance and reconfigure
replication using this new empty instance as the follower.

15. Reconfigure your global DNS load balancer to direct traffic to the correct foundations of your
choice. See Configure Your GLB.

Backing up and Restoring


These topics cover the various types of backups you, as an application developer, can perform for Tanzu
for MySQL.

About Backups

Making Full Backups

Restoring Incremental Backups

Backing up and restoring with mysqldump

About Backups for Tanzu for MySQL


This topic is for application developers. It covers the two types of backups you can run and when to use
each one for Tanzu for MySQL.

Full backups: see Configuring Full Backups

Incremental backups: see Configuring Incremental Backups

You might want to back up or restore a service instance in the following use cases:

Disaster recovery

Troubleshooting

Testing

The backup and restore capability described in this topic restores a running service instance’s backup to a
new instance. It is not intended to list or restore backups created by a deleted service instance. For more
information about restoring a backup from a deleted service instance, see Manually Restoring From
Backup.

The backup procedures assume that you are using the adbr plug-in v0.3.0 or later. See Prerequisite: adbr
plug-in for instructions.

About full backups


Full backups use the Percona XtraBackup tool, which does not lock schemas that use the default InnoDB
transactional storage engine. Full backups do briefly lock non-transactional operations, specifically DDL and
MyISAM (non-InnoDB) storage engine operations. Developers should be aware of these backup DDL and
MyISAM locks, and consider any impact on their running applications. For more information about locking
behavior during full backups, see the Percona documentation.

Full backups fail if the service broker is unavailable (for example, during an upgrade).

About incremental backups


Incremental backups supplement full backups. They automatically back up a service instance’s new
transactions as they occur. The operator configures how frequently new transactions get backed up (the
default is every 15 minutes). These incremental backups occur continuously, in tandem with any full
backups of the service instance.

When you restore a full backup from a source instance, you can optionally restore up to that source’s
“latest” transaction. The resulting “incremental restore” includes the full backup contents, plus all of the
source’s more recent transactions (that is, transactions after the full backup, and up to the most recent
incremental backup).

Incremental restores always contain a full restore in their execution: a single adbr command restores a full
backup, and then applies incremental transactions to that restored database. You trigger an incremental
restore by issuing a restore to the “latest” restore point, using the new --restore-point option in the adbr
CLI:

cf adbr restore ${TARGET_INSTANCE_ID} ${FULL_BACKUP_ID} --restore-point=latest

Incremental backups do not replace full backups. Incremental backups always restore on top of and relative
to a full backup. Continue making full backups to take advantage of incremental backups and restores.

For an incremental restore to succeed, the source instance's incremental backups must be continuously
activated from the time the source instance's full backup was taken. Incremental backup must also be
enabled on the destination instance being restored.

Prerequisite: adbr plug-in


Before you can manually back up or restore a service instance, you must have installed the
ApplicationDataBackupRestore (adbr) plug-in for the Cloud Foundry Command Line Interface (cf CLI) tool.

For the procedures on this page, you need the adbr plug-in v0.3.0 or later.

To install the adbr plug-in, run:

cf install-plugin -r CF-Community "ApplicationDataBackupRestore"
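After installing, you can check that the plug-in meets the v0.3.0 minimum required by these procedures. The following is a sketch, not an official check: it assumes `cf plugins` lists the plug-in under the name ApplicationDataBackupRestore with the version in its second column, and `version_ge` is a hypothetical helper (it relies on GNU `sort -V`):

```shell
# Hypothetical check that the installed adbr plug-in meets the v0.3.0 minimum.
version_ge() {
  # Succeeds if $1 >= $2 when compared as dotted version numbers.
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Assumes the version appears in the second column of `cf plugins` output.
installed="$(cf plugins 2>/dev/null | awk '/ApplicationDataBackupRestore/ {print $2; exit}')"
if version_ge "${installed:-0.0.0}" "0.3.0"; then
  echo "adbr plug-in ${installed} is new enough"
else
  echo "adbr plug-in missing or too old; install it with the command above"
fi
```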

Using full backups for VMware Tanzu for MySQL


This topic describes how you can create and restore a full backup of a VMware Tanzu for MySQL on Cloud
Foundry service instance.

You can back up and restore a Tanzu for MySQL service instance using the Cloud Foundry Command Line
Interface (cf CLI). The backup artifacts you create in this topic are physical backups. The following
procedures do not create logical backup artifacts. Restoring a physical backup is faster than restoring a
logical backup and causes less downtime.

The procedures on this page assume that you are using the adbr plug-in v0.3.0 or later.

Before following the procedures in this topic, see About Backups. This topic provides the basics and the
prerequisites for creating and restoring a full backup of a service instance.

The backup and restore capability described in this topic restores a running service instance's backup to a
new instance. It is not intended to list or restore backups created by a deleted service instance. For more
information about restoring a backup from a deleted service instance, see Manually Restoring From
Backup.


Manually back up a service instance


To manually trigger a full backup outside of the configured schedule:

1. Back up your service instance by running:

cf adbr backup SOURCE-INSTANCE

Where SOURCE-INSTANCE is the service instance you are backing up.

2. View the status of the backup process by running:

cf adbr get-status SOURCE-INSTANCE

For example:

$ cf adbr get-status myDB


Getting status of service instance myDB in org my-org / space my-space as user...
[Fri May 28 18:08:25 UTC 2021] Status: Backup was successful. Uploaded 3.2M

Restore a service instance


When restoring a service instance, you create a new service instance and then restore the backup to it. If
you are restoring a leader-follower or HA cluster service instance, you update the plan after restoring the data.
Finally, you rebind and restage apps that use the service instance.

About restoring a service instance


Before you begin the restore procedure, review the following important information:

The topology of the instance you restore to

Restoring to a non-empty database

When to use a different restore method

Concept: The topology of the instance you restore to

You can restore to service instances of any topology type. Whatever topology you choose, ensure that the
destination service instance has sufficient persistent disk to accommodate the restore’s data volume. For
information about persistent disk sizing recommendations, see Persistent disk usage.

Concept: Restoring to a non-empty database

You can restore to either a newly-created empty database service instance, or to a running non-empty
database service instance that is actively hosting data.

Restoring to a non-empty database is destructive; you lose any data in that database. This includes not just
application data, but also ancillary configuration data such as:

Application bindings and service keys you have created

Automatic backup configuration


Any replication configuration if restoring to a Multi‑Site Replication service instance

Before restoring, confirm that it is safe to overwrite the database. During the restore, the cf adbr CLI plug-
in asks you to provide manual confirmation, or you can use the --force or -f flag.

1. Provide manual confirmation:

cf adbr restore DESTINATION-INSTANCE BACKUP-ID


Restoring service instance DESTINATION-INSTANCE in org $ORG / space $SPACE as $USER...
This action will overwrite all data in this service instance.
Really restore the service instance DESTINATION-INSTANCE? [yN]: y
OK

2. Use the --force or -f flag:

cf adbr restore DESTINATION-INSTANCE BACKUP-ID --force


Restoring service instance DESTINATION-INSTANCE in org $ORG / space $SPACE as $USER...
This action will overwrite all data in this service instance.
OK

Concept: When to use a different restore method

There are some cases where you cannot use the following procedure.

If you are restoring:

from a deleted service instance, see Manually Restoring from Backup

from a different foundation, see Manually Restoring from Backup

Restore a service instance


To restore a service instance:

1. Create a new MySQL service instance to restore to by running:

cf create-service p.mysql NEW-PLAN DESTINATION-INSTANCE

Where:

NEW-PLAN is the name of the service plan for your new service instance. The plan you
choose depends on the service instance topology that you are restoring. If the topology
that you are restoring is:
Single node or leader-follower: Select a single node plan.

Multi‑Site Replication or HA cluster: Select a Multi‑Site Replication plan.

DESTINATION-INSTANCE is a name that you choose to identify the service instance.

You restore the backup artifact to this service instance. For more information, see Create a service
instance.

2. View the available backup artifacts for your service instance by running:

cf adbr list-backups SOURCE-INSTANCE --limit NUMBER


Where:

SOURCE-INSTANCE is the name of the service instance.

--limit NUMBER is an optional flag to specify how many backups to list. Without the flag,
the command lists five backups.

For example:

$ cf adbr list-backups myDB --limit 1


Getting backups of service instance myDB in org my-org / space my-space as user...
Backup ID                                        Time of Backup
a41bf723-2538-4020-9d16-50cccb7b7c8d_1589825284  Fri May 28 18:08:04 UTC 2021

Record the Backup ID from the output.

3. Restore the service instance by running:

cf adbr restore DESTINATION-INSTANCE BACKUP-ID


Restoring service instance DESTINATION-INSTANCE in org $ORG / space $SPACE as $USER...
This action will overwrite all data in this service instance.
Really restore the service instance DESTINATION-INSTANCE? [yN]: y
OK

Where BACKUP-ID is the ID you recorded in the previous step.

For example:

$ cf adbr restore myTargetDB a41bf723-2538-4020-9d16-50cccb7b7c8d_1589825284

4. View the status of the restore process by running:

cf adbr get-status DESTINATION-INSTANCE

For example:

$ cf adbr get-status myTargetDB


Getting status of service instance myTargetDB in org my-org / space my-space as user...
[Mon May 31 22:29:24 UTC 2021] Status: Restore was successful

If the status is Restore failed, see Troubleshooting the adbr plug-in.

5. Determine if your app is bound to a service instance by running:

cf services

For example:


$ cf services
Getting services in org my-org / space my-space as user...
OK
name         service   plan       bound apps   last operation
myDB         p.mysql   db-small   my-app       create succeeded
myTargetDB   p.mysql   db-small                create succeeded

6. If an app is currently bound to a service instance, unbind it by running:

cf unbind-service APP-NAME SOURCE-INSTANCE

If you rebind an app to the Tanzu for MySQL service after unbinding, you must
also rebind any existing custom schemas to the app. When you rebind an app,
stored code, programs, and triggers break. For more information about binding
custom schemas, see Use custom schemas.

7. Update your app to bind to the new service instance by running:

cf bind-service APP-NAME DESTINATION-INSTANCE

8. Restage your app by running:

cf restage APP-NAME
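If you script this restore procedure, you may want to poll the status from step 4 instead of checking it by hand. The following sketch is illustrative; `classify_status` and `wait_for_restore` are hypothetical helpers, and the string matching assumes the status messages shown above:

```shell
# Classify one status line from `cf adbr get-status` (hypothetical helper;
# assumes the status messages shown in this topic).
classify_status() {
  case "$1" in
    *"Restore was successful"*) echo "done" ;;
    *"Restore failed"*)         echo "failed" ;;
    *)                          echo "pending" ;;
  esac
}

# Poll until the restore finishes, checking every 10 seconds for up to
# 10 minutes (hypothetical helper).
wait_for_restore() {
  instance="$1"
  for _ in $(seq 1 60); do
    result="$(classify_status "$(cf adbr get-status "$instance" 2>/dev/null)")"
    case "$result" in
      done)   echo "done";   return 0 ;;
      failed) echo "failed"; return 1 ;;
    esac
    sleep 10
  done
  echo "timed out"
  return 1
}
```

For example, `wait_for_restore myTargetDB` prints done once the restore succeeds.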

Troubleshooting the adbr plug-in


If you get HTTP error codes when working with the adbr plug-in, see Failed backup or restore with adbr plug-
in in Troubleshooting on-demand instances.

Monitor backups

It is particularly important to verify that automated full backups are being taken according to schedule. A
common cause of failure for automated full backups is the persistent disk filling up.

There are three ways to check that full backups are being made:

Use the adbr plug-in to list backups.

Use Healthwatch to confirm automated full backups

Monitoring and KPIs for VMware Tanzu for MySQL

Use the adbr plug-in to list backups


The adbr plug-in is a quick way to list backups for a service instance. By default, the plug-in lists the last
five backups, including both automated and manual backups.

1. List the backups for an instance using the adbr plug-in by running:

cf adbr list-backups SOURCE-INSTANCE --limit NUMBER


Where:

SOURCE-INSTANCE is the name of the service instance.

--limit NUMBER is an optional flag to specify how many backups to list. Without the flag,
the command lists five backups.

For example:

$ cf adbr list-backups myDB


Getting backups of service instance myDB in org my-org / space my-space as admin...
Backup ID                                        Time of Backup
f4df63d3-ece1-4b53-99a4-4b55ad10af2f_1596675600  Wed Jun 2 01:00:00 UTC 2021
f4df63d3-ece1-4b53-99a4-4b55ad10af2f_1596675420  Wed Jun 2 00:57:00 UTC 2021
f4df63d3-ece1-4b53-99a4-4b55ad10af2f_1596675240  Wed Jun 2 00:54:00 UTC 2021
f4df63d3-ece1-4b53-99a4-4b55ad10af2f_1596675000  Wed Jun 2 00:50:00 UTC 2021
f4df63d3-ece1-4b53-99a4-4b55ad10af2f_1596674700  Wed Jun 2 00:45:00 UTC 2021

Restoring Incremental Backups


This topic describes how to restore incremental backups for VMware Tanzu for MySQL on Cloud Foundry.
The incremental backup features were introduced as part of Release 10.0.

Before using any of the procedures in this topic, ensure that your operator has activated
incremental backups in the Tanzu for MySQL tile.

Overview of incremental backups and restores


Incremental backups supplement your periodic full backups with frequent logical backups, giving you much
finer backup granularity and limiting data loss in failure scenarios.

Your operator activates incremental backups within Tanzu for MySQL. Incremental-enabled service
instances continuously back up individual database transactions. When restoring a service instance’s full
backup, you can restore to that instance’s “latest” transaction when needed. Your restore includes
transactions that occurred after the full backup was created.

For more information, see About incremental backups.

Prepare incremental backups


You do not “create” incremental backups; instead, your operator activates incremental backups within Tanzu
for MySQL.

After incremental backups are activated, new and upgraded service instances automatically and
continuously create incremental backups. Your operator establishes the frequency of these incremental
backups.

If you have just upgraded a service instance to take advantage of incremental backups, immediately create
a full backup of that service instance, and use that full backup in subsequent incremental restores
(described below). Incremental restores using a pre-upgrade full backup are unsupported; the resulting
restored database may be missing some transactions from before the upgrade.

You should monitor your service instance's incremental backups to ensure that they are
successful and free of recorded transaction gaps. For more information, see Monitor
Incremental Backups.

Perform an incremental restore


You perform incremental restore by using the ApplicationDataBackupRestore (adbr) plugin to restore a full
backup with an additional argument --restore-point=latest.

For an incremental restore to succeed, the source instance’s incremental backups must be continuously
activated from the time the source instance’s full backup was taken. Incremental backup must also be
enabled on the destination instance being restored.

During an incremental restore, do not create service bindings. Creating a binding will
succeed, but to prevent the database from entering a bad state, Tanzu for MySQL blocks
any external requests coming from the bound app.
If you accidentally bind an app while the restore is in progress, that binding is deleted as
the restore process finishes. If this happens, you will have to create a new binding before
restaging the app.

1. Use the adbr plugin CLI to list the most recent full backup from the service instance you want to
restore by running:

cf adbr list-backups SOURCE_INSTANCE --limit 1

Where:

SOURCE_INSTANCE is the name of the service instance whose backup you want to restore

For example:

$ cf adbr list-backups myDB --limit 1

Getting backups of service instance myDB in org my-org / space my-space as admin...
Backup ID                                        Time of Backup
f4df63d3-ece1-4b53-99a4-4b55ad10af2f_1596675600  Wed Apr 9 01:00:00 UTC 2025

2. Issue the command to restore to the “latest” transaction by running:

cf adbr restore --restore-point=latest TARGET_INSTANCE BACKUP_ID

Where:

TARGET_INSTANCE is the name of the service instance you are restoring to


BACKUP_ID is the Backup ID returned by the previous command

In this first release of incremental backups, latest is the only available restore
point.

3. Check the status of your restore by running:

$ cf adbr get-status TARGET_INSTANCE

For example:

$ cf adbr get-status targetdb

Getting status of service instance targetdb in org org001 / space orsp01 as admin...
Backup ID                                        Time of Backup                Status                  Reason
122527df-9f45-4405-970f-5027528b0b97_1744165423  Wed Apr 9 02:30:00 UTC 2025   Restore was successful  n/a

Incremental restores take longer to execute than full restores. An incremental restore first performs a full
restore of the provided full backup, then individually applies additional transactions.

Monitor Incremental Backups


You should monitor your incremental backups to ensure they are healthy and valid to restore.

A service instance with incremental backups enabled emits three relevant metrics:

binlog_collector_files_archived_total: a count of binary log files archived by incremental backups

binlog_collector_last_archived_timestamp_seconds: the time the last file was archived

binlog_collector_gap_detected_total: a counter of detected transaction gaps in the set of binary logs
being archived, indicating that some transactions could not be archived. This counter reports gaps
between consecutive incremental backups only, not gaps since the last full backup.

For more information on these metrics, see Monitoring and KPIs for VMware Tanzu for MySQL
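For illustration, if your metrics pipeline exposes these counters in Prometheus text format (an assumption; your setup may differ), a small helper can extract the gap counter from a scrape so you can alert when it is greater than zero; `gap_count` is a hypothetical helper:

```shell
# Print the current value of the transaction-gap counter from a metrics
# scrape on stdin, or 0 if the metric is absent (hypothetical helper).
gap_count() {
  awk '$1 == "binlog_collector_gap_detected_total" { print $2 + 0; found = 1 }
       END { if (!found) print 0 }'
}

# Example: alert when any gap has been detected (METRICS_ENDPOINT is assumed).
# if [ "$(curl -s "$METRICS_ENDPOINT" | gap_count)" -gt 0 ]; then
#   echo "transaction gap detected in incremental backups"
# fi
```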

Troubleshooting Incremental Backups


This section describes the messages you might see when restoring incremental backups and what to do if
you see them.

Symptom/Message When restoring to or from a legacy HA instance, incremental restore fails with a
message similar to
"This instance does not support incremental backup."


Explanation The HA instance was created with Tanzu for MySQL version 3.1 or earlier, which does
not enable GTIDs.
This error occurs when that legacy instance has not been upgraded to the newest
Tanzu for MySQL.

Solution/Action Incremental backups are incompatible with HA service instances created with Tanzu for
MySQL version 3.1 or earlier. Upgrading such a service instance to a newer version of Tanzu for MySQL does
not fix this incompatibility.
If you require incremental backups of this legacy HA instance's data, you must first create a new HA service
instance, and then migrate the data from your legacy instance to your newly-created instance. You can use
full backup and restore to migrate that data.

Symptom/Message When restoring to or from a legacy HA instance that has been upgraded to the latest
Tanzu for MySQL, incremental restore fails.
Only the full backup is restored.

Explanation The HA instance was created with Tanzu for MySQL version 3.1 or earlier, which does
not enable GTIDs.
This error occurs when that legacy instance has been upgraded to the newest Tanzu
for MySQL; upgraded instances are still incompatible with incremental backups.

Solution/Action Incremental backups are incompatible with HA service instances created with Tanzu for
MySQL version 3.1 or earlier. Upgrading such a service instance to a newer version of Tanzu for MySQL does
not fix this incompatibility.
If you require incremental backups of this legacy HA instance's data, you must first create a new HA service
instance, and then migrate the data from your legacy instance to your newly-created instance. You can use
full backup and restore to migrate that data.

Symptom/Message Restore fails with a message similar to:

Restore failed ... incremental recovery is not supported by this service instance via the ADBR api

Explanation You are attempting incremental restore to a service instance created before incremental
backups were released (Tanzu for MySQL v10.0), and you have not yet upgraded that service instance to
support incremental backups.

Solution/Action Upgrade the service instance to the latest version, or restore to a newly-created
service instance.

Symptom/Message Restore fails with a message similar to:

Restore failed ... this version of the adbr api does not support incremental recovery

Explanation You are attempting incremental restore in an environment running a version of Tanzu for
MySQL released before incremental backups were introduced (in v10.0).

Solution/Action Upgrade Tanzu for MySQL to v10.0 or later.

Backing up and restoring with mysqldump


You can manually create a logical backup with mysqldump and restore a MySQL database with the backup.

A logical backup reproduces the database table structure and data, without copying the InnoDB data files.
The backup contains SQL statements to recreate the data. Because of this, restoring a logical backup
takes longer than restoring a physical backup. While the backup is running, operations against the database
are stalled.

You can create a physical backup by following the procedure in Making Full Backups.

You can configure physical backups of your MySQL database in the Tanzu for MySQL tile.
For information about backups configured by operators, see About Backups.

You might want to create a logical backup in the following use cases:

Migrating a Tanzu for MySQL database to an off-platform database

In most cases, if you want to copy a Tanzu for MySQL database, you can use the migrate
command. For information about using the migrate command, see About data migration in Tanzu
for MySQL.

Backing up specific individual Tanzu for MySQL databases

Editing table definitions or insert statements before restoring the Tanzu for MySQL database

Migrating an off-platform database with encryption at rest or the Percona PAM Authentication plug-
in enabled to a Tanzu for MySQL database. See Restore from an off-platform logical backup.

When you take a logical backup, Tanzu for MySQL does not send an email notification.

Back up and restore a Tanzu for MySQL logical backup


To back up and restore a Tanzu for MySQL logical backup:

1. Create a logical backup of your database. See Create a Tanzu for MySQL logical backup.

2. Restore the logical backup to a Tanzu for MySQL service instance. For more information about
restoring a logical backup, see Restore from a Tanzu for MySQL logical backup.

Create a Tanzu for MySQL logical backup


Tanzu for MySQL deactivates remote admin access to MySQL databases. You must create a read-only
service key to access the database for the service instance you want to back up.

To back up your Tanzu for MySQL data manually:

1. Create and retrieve read-only access credentials by following the procedure in Create read-only
access credentials.

2. From the output of the previous step, record the following values:

hostname: The MySQL BOSH DNS hostname

password: The password for the user that can be used for backups of the service instance
database


username: The username for the user that can be used for backups of the service instance
database

3. Connect to your service instance, by either service-gateway access or an SSH tunnel. For more
information, see Creating a service instance with Service-Gateway access or Using SSH to
connect from outside a deployment.

4. View a list of your databases by running:

mysql --user=USERNAME --password=PASSWORD \
  --host=MYSQL-IP \
  --port=MYSQL-PORT \
  --silent --execute='show databases'

Where:

USERNAME is the username retrieved from the output of cf service-key.

PASSWORD is the password retrieved from the output of cf service-key.

MYSQL-IP is the MySQL IP address. This value is 0 if you are connecting using an SSH
tunnel.

MYSQL-PORT is the MySQL Port. This value is 3306 if you are connecting to the database
directly.

For example:

$ mysql --user=abcdefghijklm --password=123456789 \
  --host=10.10.10.5 \
  --silent --execute='show databases'

5. Do not back up the following databases:

cf_metadata

information_schema

mysql

performance_schema

sys

6. For each remaining database, start the back up by running:

mysqldump --set-gtid-purged=off \
--no-tablespaces \
--single-transaction \
--user=USERNAME --password=PASSWORD \
--host=MYSQL-IP \
--port=MYSQL-PORT \
--databases DB-NAME > BACKUP-FILE

Where:

USERNAME is the username retrieved from the output of cf service-key.

PASSWORD is the password retrieved from the output of cf service-key.


MYSQL-IP is the MySQL IP address.

MYSQL-PORT is the MySQL Port.

DB-NAME is the name of the database.

BACKUP-FILE is a name you create for the backup file. Use a different filename for each
backup.

The --set-gtid-purged=off flag enables you to restore the backup without admin privileges.

For example:

$ mysqldump --set-gtid-purged=off \
--no-tablespaces \
--single-transaction \
--user=abcdefghijklm --password=123456789 \
--host=10.10.10.5 \
--port=3306 \
--databases canary_db > canary_db.sql

For more information about the mysqldump utility, see mysqldump in the MySQL Documentation.
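Steps 4 through 6 above can be combined into a single loop. The following is an illustrative sketch, not part of the product; `filter_backup_dbs` is a hypothetical helper, and the variables stand in for the service-key values described above:

```shell
# Given the output of `show databases` on stdin, print only the databases
# that should be backed up (system schemas excluded). Hypothetical helper.
filter_backup_dbs() {
  grep -v -E '^(cf_metadata|information_schema|mysql|performance_schema|sys)$'
}

# Example usage against a live instance, with USERNAME, PASSWORD, MYSQL_IP,
# and MYSQL_PORT standing in for the service-key values described above:
# mysql --user="$USERNAME" --password="$PASSWORD" \
#       --host="$MYSQL_IP" --port="$MYSQL_PORT" \
#       --silent --execute='show databases' \
#   | filter_backup_dbs \
#   | while read -r db; do
#       mysqldump --set-gtid-purged=off --no-tablespaces --single-transaction \
#         --user="$USERNAME" --password="$PASSWORD" \
#         --host="$MYSQL_IP" --port="$MYSQL_PORT" \
#         --databases "$db" > "${db}.sql"
#     done
```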

Restore from a Tanzu for MySQL logical backup


To restore a logical backup:

1. (Optional) If you want to create a new service instance, follow the procedure in Create a service
instance.

2. Retrieve the credentials for the service instance you are restoring the backup to by following steps
1 and 2 in Create a Tanzu for MySQL logical backup. You can either restore the backup to an
existing service instance or the one you created in step 1.

3. Connect to your service instance, by either service-gateway access or an SSH tunnel. For more
information, see Creating a service instance with Service-Gateway access or Using SSH to
connect from outside a deployment.

4. Restore your data from the SQL file on your local machine by running:

mysql --user=USERNAME --password=PASSWORD --host=MYSQL-IP < BACKUP-FILE

Where:

USERNAME is the username retrieved from the output of cf service-key.

PASSWORD is the password retrieved from the output of cf service-key.

MYSQL-IP is the MySQL IP address.

BACKUP-FILE is the name of your backup artifact.

Restore from an off-platform logical backup


This section assumes that you have already created a logical backup for your off-platform database using
mysqldump.

If you want to restore a logical backup from an off-platform database that has encryption at rest or the
Percona PAM Authentication plug-in enabled, you cannot use the migrate command.

To restore an off-platform logical backup to a Tanzu for MySQL database:

1. If your database has encryption at rest enabled, delete all instances of ENCRYPTION='Y' from your
backup artifact.

2. Retrieve your service instance GUID by running:

cf service SERVICE-INSTANCE-NAME --guid

Record the GUID.

3. Copy the backup artifact to the service instance by running:

bosh -d service-instance_GUID scp ./BACKUP-FILE mysql:/tmp/

Where GUID is the GUID you recorded in the previous step.

4. SSH into the service instance by running:

bosh -d service-instance_GUID ssh

5. Restore your backup artifact into mysql by running:

mysql --defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf \
-D SERVICE-INSTANCE-NAME < /tmp/BACKUP-FILE
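The edit in step 1 can be scripted. The sketch below assumes the option appears in the dump exactly as ENCRYPTION='Y' (the text can vary between MySQL versions), so inspect your artifact first and keep a copy of the original; `strip_encryption` is a hypothetical helper:

```shell
# Strip encryption-at-rest table options from a dump read on stdin
# (assumes the option appears exactly as ENCRYPTION='Y').
strip_encryption() {
  sed "s/ ENCRYPTION='Y'//g"
}

# Example usage (hypothetical file names):
# strip_encryption < backup.sql > backup-noenc.sql
```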

Monitoring Node Health


You can get the health status of each node in a MySQL cluster deployed with a Highly Available (HA)
cluster plan.

If your apps cannot connect to or write to the database, check the health status of your nodes. You can
observe the health status of nodes through a proxy using the Switchboard dashboard or API endpoint. You
can also view the number of client connections routed through a proxy to each node. For more information
about proxies, see About highly available clusters.

Monitor node health


You can monitor the health status of your nodes by doing one of the following:

Monitor Node Health using the dashboard

Monitor Node Health using the API

Prerequisite
To connect to Switchboard, you must obtain credentials.

To obtain credentials for accessing the Switchboard dashboard and API, do the following:


1. Run the following command:

cf service YOUR-HA

Where YOUR-HA is the name of your HA cluster.

$ cf service myHA
Showing info of service myHA in org acceptance / space example as admin...

name: myHA
service: p.mysql
tags:
plan: db-ha-small
description: Dedicated instances of MySQL
documentation:
dashboard: proxy.YOUR-SYSTEM-DOMAIN.TLD
(username: 'PROXY-API-USERNAME', password: 'PROXY-API-PASSWORD')

Showing status of last operation from service myHA...

status: update succeeded
message: Instance update completed
started: 2018-11-20T01:25:55Z
updated: 2018-11-20T01:30:33Z

2. Record the dashboard hostname, user name, and password.

Monitor node health using the dashboard


To monitor node health using the Switchboard dashboard, do the following:

1. To view a list of proxies in your browser, navigate to the hostname that you recorded in
Prerequisite. For example, proxy.YOUR-SYSTEM-DOMAIN.TLD

2. When prompted, enter the user name and password that you recorded in Prerequisite.

3. Click the link for the proxy that you want to use to monitor node health.

4. If you are prompted, enter the user name and password that you recorded in Prerequisite.


Monitor node health using the API


You can also use the Switchboard API to obtain the information that is shown on the Switchboard
dashboard.

For example, you might want to use the API to write your own app to monitor the cluster.

To monitor node health using the Switchboard API, do the following:

1. To monitor node health, run the following command:

curl -u PROXY-API-USERNAME:PROXY-API-PASSWORD https://PROXY-VM-INDEX-HOSTNAME/v0/backends

Where:

PROXY-API-USERNAME is the username you recorded in Prerequisite above.

PROXY-API-PASSWORD is the password you recorded in Prerequisite above.

PROXY-VM-INDEX is either 0, 1, or 2 depending on the proxy you want to connect to.

HOSTNAME is the hostname you recorded in Prerequisite above.

The preceding command outputs a JSON object similar to the following:

$ curl -u PROXY-API-USERNAME:PROXY-API-PASSWORD https://0-proxy.YOUR-SYSTEM-DOMAIN.TLD/v0/backends
[
{
"host": "a-b1234c5d6.e-f891.bosh",
"port": 6033,
"healthy": true,
"name": "mysql/30a13eb4-b3e0-45f8-8449-8dfada826db2",
"currentSessionCount": 401,
"active": true,
"trafficEnabled": true
},
{
"host": "a-b1234c5d6.e-f891.bosh",
"port": 6033,
"healthy": true,
"name": "mysql/742cb991-b818-4b1f-b1bb-f45d834d8df6",
"currentSessionCount": 0,
"active": false,
"trafficEnabled": true
},
{
"host": "a-b1234c5d6.e-f891.bosh",
"port": 6033,
"healthy": true,
"name": "mysql/fc680458-977e-4d1e-8aa5-5ee62fe3e8cc",
"currentSessionCount": 0,
"active": false,
"trafficEnabled": true
}
]
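When scripting against this endpoint, a rough liveness check is to count the backends reporting healthy. The helper below is a sketch that string-matches the pretty-printed JSON layout shown above; a real JSON parser such as jq is more robust:

```shell
# Count backends reporting healthy in a /v0/backends response on stdin
# (rough string match; assumes one "healthy" field per line as shown above).
healthy_count() {
  grep -c '"healthy": true'
}

# Example usage (hypothetical credentials and hostname):
# curl -s -u "$PROXY_API_USERNAME:$PROXY_API_PASSWORD" \
#   "https://0-proxy.$SYSTEM_DOMAIN/v0/backends" | healthy_count
```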

Node health status


Before routing traffic, the proxy queries an HTTP health check process running on the node. This health
check can return as either healthy or unhealthy, or the node can be unresponsive.

Healthy nodes
If the health check process returns HTTP status code 200, the proxy includes the node in its pool of
healthy nodes.

When new or resurrected nodes rejoin the cluster, the proxy continues to route all connections to the
active node. In the case of failover, the proxy considers all healthy nodes as candidates for new
connections.

Unhealthy nodes
If the health check returns HTTP status code 503, the proxy considers the node unhealthy.


This happens when a node becomes non-primary. For more information, see About Multi-Site Replication.

The proxy closes existing connections to the unhealthy node. The proxy routes new connections to a
healthy node, assuming such a node exists. Clients are expected to handle reconnecting on connection
failure if the entire cluster becomes inaccessible.

Unresponsive nodes
If node health cannot be verified because the health check endpoint is unreachable or unresponsive, the
proxy considers the node unhealthy. This can happen if there is a network partition or if the VM running the
node and its health check process has failed.

Migrating HA instances for multi-site replication


VMware Tanzu for MySQL on Cloud Foundry v3.2 and later allows you to configure newly provisioned MySQL
high-availability (HA) service instances to be the leader in a multi-site deployment. This topic contains
instructions for migrating pre-v3.2 instances so they can use multi-site replication.

This new feature lets you:

create a new HA service instance in your Primary foundation to act as the leader,

create a Multi‑Site Replication service instance (using a Multi‑Site Replication service plan) in your
Secondary foundation to be the follower, and

apply configuration to both instances, establishing replication between that Primary HA leader and
your Secondary Multi‑Site Replication follower.

For more information about multi-site configuration, see Using VMware Tanzu for MySQL for multi-site
replication.

HA service instances created with pre-3.2 tiles cannot be configured for multi-site replication. To apply this
new HA multi-site feature to older HA instances, you must migrate the instance’s data to a new HA
instance using a backup and restore operation.

Migrating older high-availability instances for multi-site replication

To enable multi-site replication of data hosted on an older (pre-3.2) high-availability service instance:

1. Back up and restore your old HA to a new Multi‑Site Replication instance

2. Scale up the Multi‑Site Replication instance to your new HA

3. Configure your new HA as multi-site replication leader

Back up and restore your old HA to a new Multi‑Site Replication instance
1. Create a new service instance from a Multi‑Site Replication plan, following the procedure in Create
a service instance.

2. Create a manual backup of your old HA service instance, following the procedure in Manually back
up a service instance.

3. Restore your manual backup to the new Multi‑Site Replication service instance you created in Step
1, following the steps in Restore a service instance.

More information on backup and restore is in Backing up and restoring VMware Tanzu for MySQL.

Scale up Multi‑Site Replication to your new HA


1. Update your new Multi‑Site Replication service instance to a high-availability service instance by
following the procedure in Migrate data to a different plan.

For example, if your new Multi‑Site Replication instance is named new-leader and your HA
configured plan name is mysql-high-availability, you would type:

cf update-service new-leader -p mysql-high-availability

You can ignore the instructions warning against running cf update-service to migrate Multi‑Site Replication instances to "a plan of another topology." That warning applies to instances already configured for multi-site replication (which cf update-service breaks). Because you have not yet finished configuring multi-site replication, the Multi‑Site Replication instance can be safely updated.

2. After that update is completed, migrate any application bindings and service keys from your old HA
instance to this new HA instance.

Configure your new HA as multi-site replication leader


1. Log in to your Secondary foundation and create a Multi‑Site Replication service instance from a
multi-site plan, following the procedure in Create Multi-Site Replication service instances.

Skip the procedure's instructions for creating a service instance in your Primary foundation; the above HA instance is your primary service instance. You need only create a Multi‑Site Replication service instance in your secondary foundation.


Your secondary instance must be a Multi‑Site Replication instance; you cannot replicate from one HA instance to a second HA instance. (If your secondary instance becomes the primary instance following a multi-site failover or switchover, it can be updated to an HA instance at that time.)

2. Configure both your Primary HA and Secondary Multi‑Site Replication service instances for multi-site replication, following the procedure in Configure Multi-Site Replication.

The procedure in Configure Multi-Site Replication references Multi‑Site Replication instances in your primary and secondary foundations. This procedure also applies in the scenario in which you are configuring a new high-availability instance in your primary foundation. For example, you apply the cf update-service PRIMARY-INSTANCE -c HOST-INFO command to your primary HA instance.

Other Notes and Considerations


cf mysql-tools plug-in and HA leaders
In Configure your new HA as multi-site replication leader, after you create your Secondary foundation Multi‑Site Replication instance, you can use the cf mysql-tools plug-in to perform the subsequent steps. The plug-in supports configuring both Multi‑Site Replication and High-Availability instances as multi-site leaders.

For more information about the cf mysql-tools plug-in, see Create a leader-follower service instance using
mysql-tools.

Troubleshooting on-demand instances


This topic provides techniques that app developers can use to begin troubleshooting on-demand instances.

Troubleshoot errors
Start here if you are responding to a specific error or error messages.

Common service errors


Errors common to on-demand services are:

No Metrics from Log Cache

When Using Service-Gateway Access, create-service or update-service Fails

No Metrics from Log Cache

Symptom You receive no metrics when running the cf tail command.

Cause This might happen because the Firehose is deactivated in the Tanzu Platform for CF
tile.


Solution Ask your operator to ensure that the V2 Firehose checkbox is activated, and the Enable Log Cache syslog ingestion checkbox is deactivated in the Tanzu Platform for CF tile. For more information about configuring these checkboxes, see Enable Syslog Forwarding.

When using Service-Gateway Access, create-service or update-service Fails

Symptom When you run cf create-service or cf update-service with {"enable_external_access": true}, you receive an error like this:

Service broker error: contact your operator, service configuration issue occurred

Cause When off-platform access is set up for a foundation, a range of TCP ports is reserved
for MySQL traffic. Each service instance for which service-gateway access is enabled
requires one port.
If all the ports in the range have been assigned to other service instances, then you
cannot create or update service instances to use service-gateway access.

Solution To resolve this error, confirm that the problem is due to not enough ports and, if so, increase the port range:
1. Review the BOSH logs on the MySQL service broker VM, and, in the
broker.stdout.log file, look for this error message: Failed to update
manifest: There are no free ports in range: […
For information about how to download the service broker logs, see Access
broker logs and VMs.

2. Ask the operator to increase the external TCP port range for off-platform
access by editing the Settings pane on the VMware Tanzu for MySQL tile.
For information about the Settings pane, see Enable Service-Gateway
Access in Enabling Service-Gateway Access.
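This failure mode reduces to simple arithmetic over the reserved range. The following sketch illustrates why the broker refuses new gateway-enabled instances once the range is exhausted; the range bounds and instance count are hypothetical placeholders, not values from any real foundation:

```shell
# Hypothetical TCP range reserved for MySQL service-gateway traffic.
PORT_RANGE_START=30000
PORT_RANGE_END=30050          # 51 ports in total
INSTANCES_WITH_GATEWAY=51     # each gateway-enabled instance consumes one port

total=$((PORT_RANGE_END - PORT_RANGE_START + 1))
free=$((total - INSTANCES_WITH_GATEWAY))

if [ "$free" -le 0 ]; then
  # Mirrors the broker's "no free ports in range" condition.
  echo "no free ports in range: ${PORT_RANGE_START}-${PORT_RANGE_END}"
else
  echo "free ports remaining: ${free}"
fi
```

With every port assigned, the only remedies are deleting gateway-enabled instances or asking the operator to widen the range in the tile's Settings pane.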

Instances or database are inaccessible


You might experience the following in a leader-follower or Multi‑Site Replication topology, or during
upgrades:

Temporary outages

Apps cannot write to the database

Apps are inoperable

Apps cannot connect to the database

MySQL Connector/J v5.1.41 or earlier

Mutual TLS

Java apps cannot connect after buildpack update


Temporary Outages

Symptom Tanzu for MySQL service instances can become temporarily inaccessible during
upgrades and VM or network failures.

Solution For more information, see Service interruptions.

Apps Cannot Write to the Database

Symptom You have a leader-follower or Multi‑Site Replication topology, and your apps can no
longer write to the database.

Cause If you have a leader-follower topology, the leader VM might be read-only. If you can also
no longer read from the database, your persistent disk might be full.
If you have a Multi‑Site Replication topology, your leader VM might be down.

Solution If you have a leader-follower topology and the leader VM is read-only, see Both Leader
and Follower Instances are Read-Only for how to troubleshoot this problem.
If your apps can also no longer read from the database, your persistent disk might be
full. For more information about troubleshooting inoperable apps, see Apps are
Inoperable.
If you have a Multi‑Site Replication topology and your leader VM is down, to resolve this
issue, you can trigger a failover to the follower VM. For more information about
troubleshooting this problem, see Triggering multi-site replication failover and
switchover.

Apps are Inoperable

Symptom Your apps become inoperable. Read, write, and cf CLI operations do not work.

Cause Your persistent disk might be full.

Solution Contact your operator to check if your persistent disk is full. For more information about troubleshooting this problem, see Persistent disk is full.

Apps Cannot Connect to the Database

Symptom Apps can fail to connect to the database.

Cause When your app uses an incompatible version of MySQL Connector/J.

When your app uses mutual TLS (mTLS).

Solution See MySQL Connector/J v5.1.41 or earlier.

See Mutual TLS.

See Java apps cannot connect after buildpack update.


MySQL Connector/J v5.1.41 or Earlier

Symptom Apps cannot connect to the database when TLS is enabled and the apps are using MySQL Connector/J v5.1.41 or earlier. You see errors about certificates. For example:

Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192) ~[na:1.8.0_152]

Cause The app uses an incompatible version of MySQL Connector/J.

Solution If you cannot update the MySQL Connector/J, do the workaround in How to deactivate KeyManager and TrustManager in Container Security Provider Framework in the Java buildpack.

Mutual TLS

Symptom Apps cannot connect to the database when TLS is activated and your apps use mTLS. You see network errors in your app logs.

Solution To resolve this issue, deactivate mTLS in your apps.

Java Apps Cannot Connect after Buildpack Update

Symptom After updating a Java app to use Java buildpack v4.38 or later, the app cannot connect
to the database over TLS.
In the app logs, you see errors such as:

javax.net.ssl.SSLHandshakeException: No appropriate protocol (protocol is deactivated or cipher suites are inappropriate)

Cause By default, the new version of Java deactivates TLS v1.0 and v1.1.

Solution Update the app to use MySQL Connector/J v5.1.49 or later, or MySQL Connector/J
v8.0.19 or later. This ensures that TLS v1.2 is used.

Failed backup or restore with the adbr plug-in


If you get errors when working with the ApplicationDataBackupRestore (adbr) plug-in for the Cloud Foundry
Command Line Interface (cf CLI) tool, see:

“400” error during backup or restore


“500” error during backup or restore

“502” error during backup or restore

“Status: Restore failed” after adbr restore

“400” Error during Backup or Restore

Symptom
When running cf adbr backup or cf adbr restore, an error occurs.

For example:

$ cf adbr backup myDB
Failed to backup service instance "myDB": failed due to server error, status code: 400

Cause The broker on the VM is not running or is in an unhealthy state.

Solution Verify the health of the broker VM and review the logs for the broker.

“500” Error during Backup or Restore

Symptom
When running cf adbr backup or cf adbr restore, an error occurs.

For example:

$ cf adbr backup myDB
Failed to backup service instance "myDB": failed due to server error, status code: 500

Cause The service instance agent is not running or is in an unhealthy state.

Solution Verify the health of the service instance VM and review the logs for the service
instance.

“502” Error during Backup or Restore

Symptom
When running cf adbr backup or cf adbr restore, an error occurs.

For example:

$ cf adbr backup myDB
Failed to backup service instance "myDB": failed due to server error, status code: 502

Cause The VM is down, stopped, or in an unhealthy state.

Solution Verify the health of the broker VM and review the logs for the broker.


“Status: Restore failed” after adbr Restore

Symptom
When running cf adbr get-status after restoring to a service instance, adbr returns
Restore failed.

For example:

$ cf adbr get-status myTargetDB
Getting status of service instance myTargetDB in org my-org / space system as admin...
[Thu Feb 25 22:33:58 UTC 2021] Status: Restore failed

Cause A possible cause is that database or infrastructure issues are preventing a successful restore.

Solution To investigate the restore:


1. If restoring to a non-empty database, try creating a new service instance and
retrying the restore to that empty database.

2. Review other logs on the service instance and, if necessary, contact Support.

For general information about the adbr plug-in, see Backing up and restoring Tanzu for MySQL.

Persistent disk usage is increasing


If you have set the optimize_for_short_words parameter to true and you are alerted that persistent disk
usage is high, then you might need to optimize the indexed tables.

Persistent Disk Usage Is Increasing

Symptom
You have set the optimize_for_short_words optional parameter to true and the
persistent disk is filling up.

For information about the parameter, see Optimize for short words.

Cause Over time, data has been deleted from your database and the full-text index has
become too large.

Solution Remove full-text entries for deleted or old records by following the instructions in
Optimize for short words.

For information about monitoring disk usage, see Monitoring and KPIs.

Techniques for troubleshooting


See the following sections for troubleshooting techniques when using the Cloud Foundry Command-Line
Interface (cf CLI) to perform basic operations on a Tanzu for MySQL service instance.

Basic cf CLI operations include create, update, bind, unbind, and delete.


Understand a Cloud Foundry error message


Failed operations (create, update, bind, unbind, delete) result in an error message. You can retrieve the error
message later by running the cf CLI command cf service INSTANCE-NAME.

$ cf service myservice

Service instance: myservice


Service: super-db
Bound apps:
Tags:
Plan: dedicated-vm
Description: Dedicated Instance
Documentation url:
Dashboard:

Last Operation
Status: create failed
Message: Instance provisioning failed: There was a problem completing your re
quest.
Please contact your operations team providing the following information:
service: redis-acceptance,
service-instance-guid: ae9e232c-0bd5-4684-af27-1b08b0c70089,
broker-request-id: 63da3a35-24aa-4183-aec6-db8294506bac,
task-id: 442,
operation: create
Started: 2017-03-13T10:16:55Z
Updated: 2017-03-13T10:17:58Z

Use the information in the Message field to debug further. Provide this information to Support when filing a
ticket.

The task-id field maps to the BOSH task ID. For more information on a failed BOSH task, run bosh task TASK-ID.

The broker-request-id maps to the portion of the On-Demand Broker log containing the failed step.
Access the broker log through your syslog aggregator, or access BOSH logs for the broker by typing bosh
logs broker 0. If you have more than one broker instance, repeat this process for each instance.
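When you have saved the error message to a file or variable, awk can pull out the two identifiers you need. This is an illustrative one-off, not a supported tool; the sample text is copied from the output above:

```shell
# Failure-message fields as printed by `cf service` (sample from above).
msg='service: redis-acceptance,
service-instance-guid: ae9e232c-0bd5-4684-af27-1b08b0c70089,
broker-request-id: 63da3a35-24aa-4183-aec6-db8294506bac,
task-id: 442,
operation: create'

# Split each line on ": ", match the key, and strip the trailing comma.
task_id=$(printf '%s\n' "$msg" | awk -F': ' '$1 == "task-id" {gsub(",", "", $2); print $2}')
broker_request_id=$(printf '%s\n' "$msg" | awk -F': ' '$1 == "broker-request-id" {gsub(",", "", $2); print $2}')

echo "bosh task ${task_id}"                      # command to inspect the failed BOSH task
echo "broker-request-id: ${broker_request_id}"   # search for this ID in the broker log
```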

Find information about your service instance


You might need to find the name, GUID, or other information about a service instance. To find this
information, do the following:

1. Log into the space containing the instance or failed instance.

$ cf login

2. If you do not know the name of the service instance, run cf services to see a listing of all service
instances in the space. The service instances are listed in the name column.

$ cf services
Getting services in org my-org / space my-space as [email protected]...
OK
name          service   plan       bound apps   last operation
my-instance   p.mysql   db-small                create succeeded

3. To retrieve more information about a specific instance, run cf service SERVICE-INSTANCE-NAME.

4. To retrieve the GUID of the instance, run cf service SERVICE-INSTANCE-NAME --guid.

The GUID is useful for debugging.

Use the Knowledge Base Community


To find the answer to your question and browse product discussions and solutions, search the Broadcom
Knowledge Base.

File a Support Ticket


You can file a support ticket at Broadcom Support. Be sure to provide the error message from cf service
YOUR-SERVICE-INSTANCE.

To expedite troubleshooting, provide your service broker logs, service instance logs, and BOSH task
output, if possible. Your cloud operator can obtain these from your error message.


Monitoring and KPIs for VMware Tanzu for MySQL

You can monitor the health of the VMware Tanzu for MySQL on Cloud Foundry service using logs, metrics,
and Key Performance Indicators (KPIs) that are generated by Tanzu for MySQL component VMs.

For more information about logging and metrics in Tanzu Platform for Cloud Foundry, see Logging and
metrics.

About metrics
Metrics are regularly generated log entries that report measured component states. The default metrics
polling interval is 30 seconds for MySQL instances. For the service broker, the default is 60 seconds.

You can configure the MySQL instance interval in Configure Monitoring in the Tanzu for MySQL tile. For
more information, see Configure monitoring.

Metrics are long, single lines of text with the format:

origin:"p.mysql" eventType:ValueMetric timestamp:1496776477930669688 deployment:"service-instance_2b5a001f-2bf3-460c-aee6-fd2253f9fb0c" job:"mysql" index:"b09df494-b731-4d06-a4b0-c2985ceedf4c" ip:"10.0.8.4" valueMetric:<name:"/p.mysql/performance/open_files" value:24 unit:"file" >
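If you capture these lines from a syslog drain, the metric name and value can be extracted with sed. The following is a quick illustrative parser for this one format, not a supported tool:

```shell
# One metric line in the format shown above.
line='origin:"p.mysql" eventType:ValueMetric timestamp:1496776477930669688 deployment:"service-instance_2b5a001f-2bf3-460c-aee6-fd2253f9fb0c" job:"mysql" index:"b09df494-b731-4d06-a4b0-c2985ceedf4c" ip:"10.0.8.4" valueMetric:<name:"/p.mysql/performance/open_files" value:24 unit:"file" >'

# Pull the quoted metric name out of the valueMetric field.
name=$(printf '%s\n' "$line" | sed -n 's/.*valueMetric:<name:"\([^"]*\)".*/\1/p')
# Pull the numeric value; " value:" (with a colon) occurs only once in the line.
value=$(printf '%s\n' "$line" | sed -n 's/.* value:\([0-9.]*\).*/\1/p')

echo "${name}=${value}"
```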

Access MySQL metrics


To access MySQL metrics:

Use Grafana. This method requires Healthwatch v2.x.

Use log cache

Use Grafana
You can use Grafana to visually view metrics for Tanzu for MySQL service instances. This requires
Healthwatch v2.x.

1. Install the Healthwatch tile in Tanzu Operations Manager. For information about installing
Healthwatch, see the Healthwatch for VMware Tanzu documentation.

2. View the Grafana dashboard by going to:

https://grafana.YOUR-SYSTEM-DOMAIN

For more information about using Grafana dashboards, see the Healthwatch for VMware Tanzu
documentation.


Use Log Cache


To access metrics for Tanzu for MySQL service instances, you can use the Loggregator Log Cache feature
with the Log Cache CLI plug-in. Log Cache is enabled by default.

To access metrics for on-demand service instances:

1. Install the cf CLI plug-in by running:

cf install-plugin -r CF-Community "log-cache"

2. To access metrics for a service instance, run:

cf tail SERVICE-INSTANCE-NAME

Where SERVICE-INSTANCE-NAME is the name of your service instance.

Two useful flags to append to this command are:

-f, --follow: Append metrics logs to stdout as they are generated, rather than returning a
fixed number of metrics or metrics over a fixed interval.

--json: Output metrics logs as envelopes in JSON format. For a complete list of cf tail
options, see the Log Cache CLI repository in GitHub.

For example:

$ cf tail -f my-instance | egrep 'connected|available|persistent_disk_used_percent'
2019-05-17T11:25:59.48-0700 [my-instance] GAUGE /p.mysql/available:1.000000 boolean
2019-05-17T11:26:29.49-0700 [my-instance] GAUGE /p.mysql/system/persistent_disk_used_percent:17.000000 percentage
2019-05-17T11:26:29.49-0700 [my-instance] GAUGE /p.mysql/performance/threads_connected:6.000000 connection
2019-05-17T11:26:59.50-0700 [my-instance] GAUGE /p.mysql/available:1.000000 boolean
2019-05-17T11:27:29.50-0700 [my-instance] GAUGE /p.mysql/system/persistent_disk_used_percent:17.000000 percentage
2019-05-17T11:27:29.50-0700 [my-instance] GAUGE /p.mysql/performance/threads_connected:7.000000 connection

For more information about the metrics output, see the Key Performance Indicators and Component
Metrics sections below.

For more information about how to enable Log Cache and about the cf tail command, see Enable Log
Cache.

KPIs for MySQL service instances


KPIs are metrics for MySQL service instances that you can monitor for two purposes:

To ensure high performance

To discover emerging issues


KPIs can be either raw component metrics or derived metrics generated by applying formulas to raw
metrics.

VMware provides the following KPIs as general alerting and response guidance for typical Tanzu for MySQL installations. VMware recommends that you adjust the alert thresholds by observing historical trends. You can also create your own KPIs specific to your environment using the available component metrics.

For a list of all the Tanzu for MySQL component metrics, see Component Metrics.

Server availability

/p.mysql/available

Description Whether the MySQL Server is responding to requests. This indicates if the component is available.

Use: If the server does not emit heartbeats, it is offline.

Origin: Doppler/Firehose
Type: Boolean
Frequency: 30 s

Recommended measurement: Average over last 5 minutes

Recommended alert thresholds: Yellow warning: N/A
Red critical: < 1

Recommended response: Check the MySQL Server logs for errors. You can find the instance by targeting your MySQL deployment with BOSH and inspecting logs for the instance. For more information, see Failing Jobs and Unhealthy Instances.

If your service plan is a highly available (HA) cluster, you can also run mysql-diag to check logs for errors.

Persistent Disk Used

/p.mysql/system/persistent_disk_used_percent

Description The percentage of disk used on the persistent file system.

Use: MySQL cannot function correctly if there is not sufficient free space on the file systems. Use these
metrics to ensure that you have disks large enough for your user base.

Origin: Doppler/Firehose
Type: Percent
Frequency: 30 s (default)

Recommended measurement: Maximum of persistent disk used of all nodes


Recommended alert thresholds:

Single Node and Leader-Follower:
Yellow warning: > 25%
Red critical: > 30%

Highly Available Cluster:
Yellow warning: > 80%
Red critical: > 90%

Recommended response: Upgrade the service instance to a plan with larger disk capacity.

For Tanzu SQL for VMs v2.9 and later, if you set the optimize_for_short_words parameter to true, then see Troubleshooting VMware Tanzu SQL with MySQL for VMs before upgrading the service.
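The recommended measurement, the maximum persistent_disk_used_percent across nodes checked against the single-node/leader-follower thresholds, can be sketched in shell. The per-node readings here are invented sample values:

```shell
# Hypothetical persistent_disk_used_percent readings, one per node.
nodes="17 23 31"

max=0
for v in $nodes; do
  [ "$v" -gt "$max" ] && max=$v
done

# Single Node and Leader-Follower thresholds from the table above.
if   [ "$max" -gt 30 ]; then echo "red critical: ${max}%"
elif [ "$max" -gt 25 ]; then echo "yellow warning: ${max}%"
else                         echo "ok: ${max}%"
fi
```

For an HA cluster, the same shape applies with the 80%/90% thresholds instead.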

Ephemeral Disk Used

/p.mysql/system/ephemeral_disk_used_percent

Description The percentage of disk used on the ephemeral file system.

Use: MySQL cannot function correctly if there is not sufficient free space on the file systems. Use
these metrics to ensure that you have disks large enough for your user base.

Origin: Doppler/Firehose
Type: Percent
Frequency: 30 s (default)

Recommended measurement: Maximum disk used of all nodes

Recommended alert thresholds: Yellow warning: > 80%
Red critical: > 95%

Recommended response: Upgrade the service instance to a plan with larger disk capacity.

CPU use percentage

/p.mysql/performance/cpu_utilization_percent

Description CPU time being consumed by the MySQL service.

Use: A node that experiences context switching or high CPU use becomes unresponsive. This also
affects the ability of the node to report metrics.

Origin: Doppler/Firehose
Type: Percent
Frequency: 30 s (default)


Recommended measurement: Average over last 10 minutes

Recommended alert thresholds: Yellow warning: > 80
Red critical: > 90

Recommended response: Discover what is using so much CPU. If it is from normal processes, update the service instance to use a plan with larger CPU capacity.

Connections

/p.mysql/variables/max_connections

/p.mysql/net/max_used_connections

Description The maximum number of connections used over the maximum permitted number of simultaneous client
connections.

Use: If the number of connections drastically changes or if apps are unable to connect, there might be a
network or app issue.

Origin: Doppler/Firehose
Type: count
Frequency: 30 s

Recommended measurement: max_used_connections / max_connections

Recommended alert thresholds: Yellow warning: > 80%
Red critical: > 90%

Recommended response: If this measurement meets or exceeds 80% with exponential growth, monitor app use to ensure that everything is working.

When approaching 100% of maximum connections, apps might not always connect to the database. The connections/second for a service instance vary based on app instances and app use.
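The derived measurement max_used_connections / max_connections is a simple integer percentage; the counts below are hypothetical:

```shell
max_connections=750        # /p.mysql/variables/max_connections
max_used_connections=640   # /p.mysql/net/max_used_connections

# Integer percentage of the connection limit that has been reached.
pct=$((max_used_connections * 100 / max_connections))

if   [ "$pct" -gt 90 ]; then echo "red critical: ${pct}%"
elif [ "$pct" -gt 80 ]; then echo "yellow warning: ${pct}%"
else                         echo "ok: ${pct}%"
fi
```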

Queries Delta

/p.mysql/performance/queries_delta

Description The number of statements executed by the server over the last 30 seconds.

Use: The server always processes queries. If the server does not process queries, the server is
non-functional.

Origin: Doppler/Firehose
Type: count
Frequency: 30 s


Recommended measurement: Average over last 2 minutes

Recommended alert thresholds: Red critical: 0

Recommended response: Investigate the MySQL server logs, such as the audit log, to understand why the query rate changed and decide on appropriate action.
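The alert condition is easy to reason about: queries_delta is the change in the cumulative queries counter between successive samples. A sketch with invented sample values:

```shell
# Two successive samples of /p.mysql/performance/queries (hypothetical).
prev=1048120
curr=1049560

delta=$((curr - prev))
echo "queries_delta=${delta}"

# A delta of 0 means the server processed no statements in the interval.
if [ "$delta" -eq 0 ]; then
  echo "red critical: no statements executed in the last interval"
fi
```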

Highly Available Cluster WSREP Ready

/p.mysql/galera/wsrep_ready

Description Shows whether each cluster node can accept queries. Returns only 0 or 1. When this metric is 0, almost all queries to that node fail with the error:
ERROR 1047 (08S01) Unknown Command

Use: Discover when nodes of a cluster were unable to communicate and accept transactions.

Origin: Doppler/Firehose
Type: Boolean
Frequency: 30 s (default)

Recommended measurement: Average of values of each cluster node, over the last 5 minutes

Recommended alert thresholds: Yellow warning: < 1
Red critical: 0 (cluster is down)

Recommended response: Run mysql-diag and check the MySQL Server logs for errors.

Ensure that no infrastructure event is affecting intra-cluster communication.

Ensure that wsrep_ready is not set to off by using the query SHOW STATUS LIKE 'wsrep_ready';

Highly Available Cluster WSREP Cluster Size

/p.mysql/galera/wsrep_cluster_size

Description The number of cluster nodes with which each node is communicating normally.

Use: When running in a multi-node configuration, this metric indicates if each member of the cluster
is communicating normally with all other nodes.

Origin: Doppler/Firehose
Type: count
Frequency: 30 s (default)


Recommended measurement: (Average of the values of each node / cluster size), over the last 5 minutes

Recommended alert thresholds: Yellow warning: < 3 (availability compromised)
Red critical: < 1 (cluster unavailable)

Recommended response: Run mysql-diag and check the MySQL Server logs for errors.

Highly Available Cluster WSREP Cluster Status

/p.mysql/galera/wsrep_cluster_status

Description Shows the primary status of the cluster component that the node is in.
Values are:
Primary: Node has a quorum.

Non-primary: Node has lost a quorum.

Disconnected: Node is unable to connect to other nodes.

Use: Any value other than Primary indicates that the node is part of a non-operational component. This
occurs in cases of multiple membership changes that cause a loss of quorum.

Origin: Doppler/Firehose
Type: integer (see above)
Frequency: 30 s (default)

Recommended measurement: Sum of each of the nodes, over the last 5 minutes

Recommended alert thresholds: Yellow warning: < 3
Red critical: < 1

Recommended response: Verify that all nodes are in working order and can receive write-sets.

Run mysql-diag and check the MySQL Server logs for errors.

Hours Since Last Successful Backup

/p.mysql/last_successful_backup

Description Using the configured backup schedule for the service instance as a threshold, this
metric shows how many hours have passed since the last successful backup.

Recommended measurement Hours elapsed since the last successful backup

Recommended alert thresholds Red critical: Metric exceeds your organization's policy for maximum backup interval.

Recommended response Check the adbr-agent logs for the service instance to determine why recent automated
backups are failing.
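The alert reduces to elapsed-hours arithmetic against your policy. The epoch timestamps and the 24-hour policy below are hypothetical placeholders:

```shell
# Hypothetical epoch timestamps, in seconds.
last_successful_backup=1708300800
now=1708430400
max_backup_interval_hours=24   # your organization's backup-interval policy

hours=$(( (now - last_successful_backup) / 3600 ))

if [ "$hours" -gt "$max_backup_interval_hours" ]; then
  echo "red critical: ${hours} hours since last successful backup"
else
  echo "ok: last successful backup ${hours} hours ago"
fi
```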


Component Metrics
In addition to the KPIs, the MySQL service emits the following metrics for monitoring and alerting:

MySQL metrics

Disk metrics

Leader-Follower metrics

Highly Available cluster metrics

MySQL metrics
The metrics that all Tanzu for MySQL service instances emit:

/p.mysql/available

Description Indicates if the local database server is available and responding.

Unit boolean

/p.mysql/variables/max_connections

Description The maximum permitted number of simultaneous client connections.

Unit count

/p.mysql/variables/open_files_limit

Description The number of files that the operating system permits mysqld to open.

Unit files

/p.mysql/variables/read_only

Description Whether the server is in read-only mode

Unit boolean

/p.mysql/performance/questions

Description The number of statements run by the server since the server started or the last FLUSH
STATUS. This includes only statements sent to the server by clients and not statements
run within stored programs, unlike the queries variable.

Unit count

/p.mysql/performance/queries

Description The number of statements run by the server, excluding COM_PING and
COM_STATISTICS. Differs from questions in that it also counts statements run within
stored programs. Not affected by FLUSH STATUS.

Unit count

/p.mysql/performance/queries_delta

Description The change in the /performance/queries metric since the last time it was emitted.

Unit integer greater than or equal to zero


/p.mysql/innodb/buffer_pool_pages_free

Description The amount of free space, measured in pages, in the InnoDB Buffer Pool.

Unit count

/p.mysql/innodb/buffer_pool_pages_total

Description The total size of the InnoDB Buffer Pool, measured in pages.

Unit count

/p.mysql/innodb/buffer_pool_pages_data

Description The number of pages in the InnoDB Buffer Pool containing data. The number includes
both dirty and clean pages.

Unit count

/p.mysql/innodb/row_lock_current_waits

Description The number of row locks currently being waited for by operations on InnoDB tables.

Unit count

/p.mysql/innodb/data_read

Description The amount of data read since the server started.

Unit bytes

/p.mysql/innodb/data_written

Description The amount of data written since the server started.

Unit bytes

/p.mysql/innodb/mutex_os_waits

Description The number of mutex OS waits.

Unit events/second

/p.mysql/innodb/mutex_spin_rounds

Description The number of mutex spin rounds.

Unit events/second

/p.mysql/innodb/mutex_spin_waits

Description The number of mutex spin waits.

Unit events/second

/p.mysql/innodb/os_log_fsyncs

Description The number of fsync() writes done to the InnoDB redo log files.

Unit count


/p.mysql/innodb/row_lock_time

Description Total time spent in acquiring row locks.

Unit milliseconds

/p.mysql/innodb/row_lock_waits

Description The number of times a row lock had to be waited for since the server started.

Unit count

/p.mysql/net/connections

Description The number of connection attempts, both successful and unsuccessful, to the MySQL server.

Unit count

/p.mysql/net/max_used_connections

Description The maximum number of connections that have been in use simultaneously since the
server started.

Unit count

/p.mysql/performance/com_delete

Description The number of delete commands since the server started or the last FLUSH STATUS.

Unit count

/p.mysql/performance/com_delete_multi

Description The number of delete-multi commands since the server started or the last FLUSH
STATUS. Applies to DELETE statements that use multiple-table syntax.

Unit count

/p.mysql/performance/com_insert

Description The number of insert commands since the server started or the last FLUSH STATUS.

Unit count

/p.mysql/performance/com_insert_select

Description The number of insert-select commands since the server started or the last FLUSH
STATUS.

Unit count

/p.mysql/performance/com_replace_select

Description The number of replace-select commands since the server started or the last FLUSH
STATUS.

Unit count


/p.mysql/performance/com_select

Description The number of select commands since the server started or the last FLUSH STATUS. If
a query result is returned from query cache, the server increments the qcache_hits
status variable, not com_select.

Unit count

/p.mysql/performance/com_update

Description The number of update commands since the server started or the last FLUSH STATUS.

Unit count

/p.mysql/performance/com_update_multi

Description The number of update-multi commands since the server started or the last FLUSH
STATUS. Applies to UPDATE statements that use multiple-table syntax.

Unit count

/p.mysql/performance/created_tmp_disk_tables

Description The number of internal on-disk temporary tables created by the server while executing
statements.

Unit count

/p.mysql/performance/created_tmp_files

Description The number of temporary files created by mysqld.

Unit count

/p.mysql/performance/created_tmp_tables

Description The number of internal temporary tables created by the server while executing
statements.

Unit count
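The two created_tmp_* counters are commonly read together: the fraction of internal temporary tables that spilled to disk is a rough signal that queries may need more memory or better indexes. A hedged sketch (the 0.25 threshold is an arbitrary assumption, not a documented limit):

```python
def on_disk_tmp_ratio(tmp_disk_tables: int, tmp_tables: int) -> float:
    """Fraction of internal temporary tables that spilled to disk.

    Inputs correspond to the created_tmp_disk_tables and
    created_tmp_tables counters, which only grow between restarts.
    """
    if tmp_tables == 0:
        return 0.0
    return tmp_disk_tables / tmp_tables

ratio = on_disk_tmp_ratio(50, 400)
# Hypothetical alert threshold; tune it for your workload.
needs_attention = ratio > 0.25
```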

/p.mysql/performance/cpu_utilization_percent

Description The percentage of CPU in use by all processes on the MySQL node.

Unit percent

/p.mysql/performance/open_files

Description The number of regular files currently open, which were opened by the server.

Unit count

/p.mysql/performance/open_tables

Description The number of tables that are currently open.

Unit count

/p.mysql/performance/opened_tables

Description The number of tables that have been opened.

Unit count

/p.mysql/performance/open_table_definitions

Description The number of currently cached table definitions or .frm files.

Unit count

/p.mysql/performance/opened_table_definitions

Description The number of .frm files that have been cached.

Unit count

/p.mysql/performance/qcache_hits

Description The number of query cache hits. The query cache and the qcache_hits metric are
deprecated as of MySQL v5.7.20.

Unit count

/p.mysql/performance/slow_queries

Description The number of queries that have taken more than long_query_time seconds.

Unit count
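Because slow_queries is a counter since startup (or the last FLUSH STATUS), a rate between two samples is usually more useful than the raw value. A minimal sketch of converting two samples into a per-minute rate (the sample values are invented):

```python
def slow_query_rate(prev_count: int, curr_count: int, interval_seconds: float) -> float:
    """Slow queries per minute between two samples of the counter.

    A negative delta means the counter was reset (server restart or
    FLUSH STATUS); in that case only the current sample is counted.
    """
    delta = curr_count - prev_count
    if delta < 0:
        delta = curr_count
    return delta / interval_seconds * 60.0

# Two samples taken 300 seconds apart.
rate = slow_query_rate(prev_count=120, curr_count=135, interval_seconds=300)
```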

/p.mysql/performance/table_locks_waited

Description The total number of times that a request for a table lock could not be granted
immediately and a wait was needed.

Unit count

/p.mysql/performance/threads_connected

Description The number of currently open connections.

Unit count

/p.mysql/performance/threads_running

Description The number of threads that are not sleeping.

Unit count

/p.mysql/rpl_semi_sync_master_tx_avg_wait_time

Description The average time the leader has waited for the follower to accept transactions.

Unit microseconds

/p.mysql/rpl_semi_sync_master_no_tx

Description The number of transactions committed without follower acknowledgement.

Unit count

/p.mysql/rpl_semi_sync_master_wait_sessions

Description The current number of connections waiting for a sync commit. For more information
about sync replication, see About Synchronous Replication.

Unit count

Disk metrics
The disk usage metrics that all Tanzu for MySQL services emit:

/p.mysql/system/persistent_disk_used_percent

Description The percentage of disk used on the persistent file system.

Unit percent

/p.mysql/system/persistent_disk_used

Description The amount of space used on the persistent disk.

Unit KB

/p.mysql/system/persistent_disk_free

Description The amount of space available on the persistent disk.

Unit KB

/p.mysql/system/persistent_disk_inodes_used_percent

Description The percentage of persistent disk inodes used by both the system and user
applications.

Unit percent

/p.mysql/system/persistent_disk_inodes_used

Description The number of inodes used on the persistent disk.

Unit count

/p.mysql/system/persistent_disk_inodes_free

Description The number of inodes available on the persistent disk.

Unit count
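The used/free pairs above let you recompute the percentage metrics yourself, which is handy for cross-checking a dashboard. A sketch (the sample KB values are hypothetical, and on a real file system used + free can be less than the raw device size because of reserved blocks, so this is an approximation):

```python
def disk_used_percent(used_kb: int, free_kb: int) -> float:
    """Percentage of disk used, derived from the *_disk_used and
    *_disk_free metrics (both reported in KB)."""
    total = used_kb + free_kb
    if total == 0:
        return 0.0
    return used_kb * 100.0 / total

# Hypothetical sample: 30 GB used, 70 GB free.
pct = disk_used_percent(used_kb=30 * 1024 * 1024, free_kb=70 * 1024 * 1024)
```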

/p.mysql/system/ephemeral_disk_used_percent

Description The percentage of disk used on the ephemeral file system.

Unit percent

/p.mysql/system/ephemeral_disk_used

Description The amount of space used on the ephemeral disk.

Unit KB

/p.mysql/system/ephemeral_disk_free

Description The amount of space available on the ephemeral disk.

Unit KB

/p.mysql/system/ephemeral_disk_inodes_used_percent

Description The percentage of ephemeral disk inodes used by both the system and user
applications.

Unit percent

/p.mysql/system/ephemeral_disk_inodes_used

Description The number of inodes used on the ephemeral disk.

Unit count

/p.mysql/system/ephemeral_disk_inodes_free

Description The number of inodes available on the ephemeral disk.

Unit count

Leader-Follower metrics
The metrics that leader-follower VMs emit:

/p.mysql/follower/is_follower

Description Shows whether a node is the follower VM.

Unit boolean

/p.mysql/follower/seconds_behind_master

Description The number of seconds the follower is behind in applying writes from the leader. For
example, a follower might have copied writes from the leader that are timestamped up to
11:00am, but has only applied writes up to 8:00am. Normal values for this metric
depend on your apps.

Unit seconds

/p.mysql/follower/seconds_since_leader_heartbeat

Description The number of seconds that elapse between the leader heartbeat and the replication of
the heartbeat on the follower. You can use this metric to determine how far behind the
follower is from the leader. Normal values for this metric depend on your apps.

Unit seconds
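Because "normal" lag depends on your apps, these two follower metrics are usually compared against thresholds you choose yourself. A hedged sketch of such a check (both default thresholds are arbitrary assumptions, not documented limits):

```python
def replication_lag_status(seconds_behind_master: float,
                           seconds_since_leader_heartbeat: float,
                           lag_threshold: float = 30.0,
                           heartbeat_threshold: float = 10.0) -> str:
    """Classify follower lag from the two metrics above.

    The default thresholds are hypothetical; choose values that match
    how much staleness your apps can tolerate.
    """
    if seconds_behind_master > lag_threshold:
        return "lagging"
    if seconds_since_leader_heartbeat > heartbeat_threshold:
        return "heartbeat-delayed"
    return "ok"

status = replication_lag_status(2.0, 1.5)
```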

/p.mysql/follower/relay_log_space

Description The total size of all existing relay log files.

Unit bytes

/p.mysql/follower/slave_io_running

Description Shows whether the I/O thread has started and has connected to the leader VM.

Unit boolean

/p.mysql/follower/slave_sql_running

Description Shows whether the SQL thread has started.

Unit boolean

Highly Available Cluster Metrics

The metrics that HA clusters emit:

/p.mysql/galera/wsrep_cluster_size

Description The current number of nodes in the HA cluster.

Unit count

/p.mysql/galera/wsrep_local_recv_queue

Description The current length of the local receive queue, in messages.

Unit count

/p.mysql/galera/wsrep_local_send_queue

Description The current length of the local send queue, in messages.

Unit count

/p.mysql/galera/wsrep_local_index

Description The zero-based index of this node in the cluster.

Unit count

/p.mysql/galera/wsrep_local_state

Description The local state of the node. Possible states include:


1 = JOINING

2 = DONOR/DESYNCED

3 = JOINED

4 = SYNCED

Unit integer
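The numeric state is easier to read after mapping it to its symbolic name; a small sketch of that mapping, using the values from the list above:

```python
WSREP_LOCAL_STATES = {
    1: "JOINING",
    2: "DONOR/DESYNCED",
    3: "JOINED",
    4: "SYNCED",
}

def local_state_name(state: int) -> str:
    """Translate a wsrep_local_state value into its symbolic name."""
    return WSREP_LOCAL_STATES.get(state, f"UNKNOWN({state})")

# Only state 4 (SYNCED) means the node is fully caught up with the cluster.
healthy = local_state_name(4) == "SYNCED"
```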

/p.mysql/galera/wsrep_ready

Description Shows whether the node can accept queries.

Unit boolean

/p.mysql/galera/wsrep_cluster_status

Description Shows the primary status of the cluster component that the node is in. Values are:
Primary: Node has a quorum.

Non-primary: Node has lost a quorum.

Disconnected: Node is unable to connect to other nodes.

Unit status code

/p.mysql/galera/wsrep_flow_control_paused

Description The proportion of time, as a unit interval (0 to 1), that replication was paused due to
flow control since the server started or the last FLUSH STATUS. This metric is a
measure of how much replication lag is slowing down the cluster.

Unit float

/p.mysql/galera/wsrep_flow_control_sent

Description The number of FC_PAUSE (flow control pause) events sent by this node. Unlike many
status variables, the counter for this metric does not reset every time you run the
query.

Unit count

/p.mysql/galera/wsrep_flow_control_recv

Description The number of FC_PAUSE (flow control pause) events received by this node. Unlike
many status variables, the counter for this metric does not reset every time you run
the query.

Unit count
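Since the flow-control counters never reset, trend them as deltas between scrapes, together with the paused fraction. A sketch (the sample numbers and the 10% rule of thumb are invented, not documented thresholds):

```python
def flow_control_summary(prev_sent: int, curr_sent: int,
                         paused_fraction: float) -> dict:
    """Summarize Galera flow-control pressure between two samples.

    prev_sent/curr_sent are successive readings of
    wsrep_flow_control_sent (a monotonically increasing counter);
    paused_fraction is wsrep_flow_control_paused (0 to 1).
    """
    new_pauses = max(0, curr_sent - prev_sent)
    return {
        "new_pause_events": new_pauses,
        "paused_percent": paused_fraction * 100.0,
        # Hypothetical rule of thumb: sustained pausing above 10%
        # suggests a node is struggling to apply write-sets.
        "under_pressure": paused_fraction > 0.10,
    }

summary = flow_control_summary(prev_sent=42, curr_sent=47, paused_fraction=0.02)
```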
