
Tanzu for Postgres on Cloud Foundry

Tanzu for Postgres on Cloud Foundry 10.1

You can find the most up-to-date technical documentation on the VMware by Broadcom website at:

https://techdocs.broadcom.com/

VMware by Broadcom
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com

Copyright © 2025 Broadcom. All Rights Reserved. The term “Broadcom” refers to Broadcom Inc. and/or its
subsidiaries. For more information, go to https://www.broadcom.com. All trademarks, trade names, service marks,
and logos referenced herein belong to their respective companies.


Contents

VMware Tanzu for Postgres on Cloud Foundry
    Product Snapshot
    About VMware Tanzu for Postgres
    About Tanzu for Postgres on Cloud Foundry
    For more information

Release Notes for VMware Tanzu for Postgres on Cloud Foundry
    v10.1.1
        Fixes
        Compatibility
    v10.1.0
        New features
        Fixes
        Compatibility
    v10.0.0
        New features
        Limitations
        Compatibility

Planning Guide for Tanzu for Postgres on Cloud Foundry
    Architecture
    Smoke Tests
    Availability Zones
        AZ Assignment for a Resilient HA Cluster

Service Gateway Access for Tanzu for Postgres on Cloud Foundry
    Architecture

About multi-site replication
    Overview
    Failover
    Restoration to initial state

Operator Guide for Tanzu for Postgres on Cloud Foundry
    Best Practices
    Errands
        Post-Deploy Errands
        Pre-Delete Errands

Install Tanzu for Postgres on Cloud Foundry
    Download and Install the Tile
    Assign AZs and Networks
        Assign AZs
        Select Networks
    Configure On-Demand Service Settings
    Configure On-Demand Plan Settings

Preparing for TLS
    Generating a Certificate

Postgres On-Demand Single Instance Plan Configuration
    Supported Configurations for Single Instance Plans
        TLS Enabled but HA Deactivated
        Both TLS and HA Deactivated

Postgres On-Demand HA Plan Configuration
    Supported Configurations for HA Plans
        Both HA and TLS Enabled

Postgres Release Acceptance Tests (PGATS)
    Get the Code
    Environment Setup
    Configuration
    Run Tests

Hooks
    Scripts to Run Custom Code
    Using Hooks to Replace run_on_every_startup Property

Supported Extensions

Enable Service Gateway Access for Postgres
    Enable TCP Router by Using the TAS for VMs Tile
    Enable Service Gateway by Using the TAS for VMs Tile
    Developer Workflow

Backup and Restore

Configuring automated backups
    Create a Custom policy and access key
    Configure backups in Ops Manager

Backup and Restore with Bosh Backup and Restore (Legacy)
    Backup
    Restore
        Restoring from S3 Backup Artifacts
    Structure of S3 Backup Artifacts
    Scheduled Backup

Backup and Restore with ADBR plugin
    Prerequisite: adbr plug-in
    Backup a service instance
    Restore a service instance (NON-HA and TLS NON-HA)
    Restore a service instance (HA)

Stream Incremental Backup and Restore
    Prerequisite
    Take incremental Backup
    Restore an incremental backup (NON-HA and TLS NON-HA)
    Restore an incremental backup (HA)

Rotating Postgres Server Certificates
    Tile version 1.1.3
    Tile versions 1.1.0, 1.1.1, and 1.1.2

Observability
    Metric Exports
        Loggregator Firehose Endpoint
        Metrics Polling Interval
        Prometheus Endpoint
    Logging
        Configure Syslog Forwarding

Upgrade Tanzu for Postgres on Cloud Foundry
    Upgrade from v1.2.x to v10.1.0
    Upgrade tile in Tanzu Operations Manager

Disaster Recovery using multi-site replication
    Prerequisites
    Enable Multi-Site Replication
    Creating Multi-Site Replication Standby Postgres Service
    Promoting Standby Service Instance to Primary in case of Failover
    Restoration to initial state

Gather credential and IP address information
    Procedure

Log in to the Tanzu Operations Manager VM with SSH
    AWS
    Azure
    GCP
    OpenStack
    vSphere

Application Developer Guide for Tanzu for Postgres on Cloud Foundry
    Prerequisites
    Next Steps

Use Tanzu for Postgres on Cloud Foundry in Your App
    Confirm Service Availability
    Create a Service Instance

Set up Your App to Consume Postgres Service with Single Instance
Set up Your App for Postgres HA Service with Multiple Instances
Bind a Service Instance to Your App
Create a Postgres Service Instance with Service Gateway Access

Delete Router Group Workaround
    Workaround before upgrade
    Workaround post upgrade
    Create Router Group
    Update service instance to use another router group

Setup UAA CLI


VMware Tanzu for Postgres on Cloud Foundry

This documentation provides information about how to install, configure, and use VMware Tanzu for
Postgres on Cloud Foundry.

VMware Tanzu for Postgres on Cloud Foundry (short name, Tanzu for Postgres on Cloud
Foundry) was formerly known as VMware Postgres for VMware Tanzu Application Service.

You can download the Tanzu for Postgres on Cloud Foundry tile from Broadcom Support.

This documentation:

Describes features and architecture of Tanzu for Postgres on Cloud Foundry.

Guides operators on how to install, configure, maintain, backup, and restore Tanzu for Postgres on
Cloud Foundry.

Guides app developers on how to choose a service plan, create and delete Postgres service
instances, and bind an app.

Product Snapshot
Element | Details
Version | v10.1.0
Release date | 15 May 2025
VMware Tanzu Postgres | v16.6
Tanzu Operations Manager | v3.0 and above
Stemcell | Ubuntu Jammy 1.824, FIPS Ubuntu Jammy v1.785
Tanzu Platform for Cloud Foundry | v6.0 and v10.0
IaaS support | AWS, Azure, GCP, OpenStack, vSphere
IPsec support | No

About VMware Tanzu for Postgres


Tanzu for Postgres is based on open-source PostgreSQL and a collection of additional open-source
software. It is a fully ACID-compliant relational database system known for reliability, feature robustness,
performance, scalability, and security. It is a popular database choice for web, enterprise, and real-time
applications.


About Tanzu for Postgres on Cloud Foundry


Tanzu for Postgres on Cloud Foundry empowers Cloud Foundry users to establish dedicated instances of
Tanzu for Postgres databases. Tanzu for Postgres on Cloud Foundry supports a distributed-consensus
based high-availability (HA) system, ensuring the continuous operation of its managed PostgreSQL
clusters, even without a PostgreSQL Operator.

For more information


Tanzu for Postgres documentation

Tanzu Platform for Cloud Foundry documentation


Release Notes for VMware Tanzu for Postgres on Cloud Foundry

This topic describes the changes in Tanzu for Postgres on Cloud Foundry v10.x.x releases.

Tanzu for Postgres on Cloud Foundry was formerly known as VMware Postgres for
VMware Tanzu Application Service.

v10.1.1
Release date: July 22, 2025

Fixes
1. Fixed an issue where the HA deployment failed when the network name in the BOSH cloud config
was not a valid DNS hostname string.

Compatibility
The following components are compatible with this release:

Element | Details
Version | 10.1.1
Release date | 22 July 2025
Software component version | VMware PostgreSQL 16.6
Compatible Ops Manager versions | 3.0 and above
Stemcell version | Ubuntu Jammy 1.866, FIPS Ubuntu Jammy 1.785
Compatible VMware Tanzu Application Service for VMs versions | 6.0, 10.0 and 10.2
IaaS support | AWS, Azure, GCP, OpenStack, vSphere
IPsec support | No

v10.1.0
Release date: May 15, 2025

New features
New features included in this release:


1. Support for in-place upgrade from Postgres Tile (1.2.x) to current release v10.1.0.

2. Support for disaster recovery of HA Postgres service instances deployed across TAS foundations
(multi-site) using Patroni for handling failover and switchover.

Fixes
1. Updated the source_id for service metrics emitted by Postgres service instances to use a
common value of postgres instead of each instance's service GUID. This enables consistent
filtering across all instances in observability dashboards (for example, Grafana).

2. Previously, HA service instances shared the same password for the pgadmin user. This has now
been fixed to ensure that each HA service instance has its own unique pgadmin user password.

3. Fixed an issue where WAL files were being archived even when no backup was enabled, leading to
unnecessary disk space usage.

Compatibility
The following components are compatible with this release:

Element | Details
Version | 10.1.0
Release date | 15 May 2025
Software component version | VMware PostgreSQL 16.6
Compatible Ops Manager versions | 3.0 and above
Stemcell version | Ubuntu Jammy 1.824, FIPS Ubuntu Jammy 1.785
Compatible VMware Tanzu Application Service for VMs versions | 6.0 and 10.0
IaaS support | AWS, Azure, GCP, OpenStack, vSphere
IPsec support | No

v10.0.0
Release date: Feb 03, 2025

New features
New features included in this release:

1. Replaced pg_auto_failover with Patroni to manage high availability (HA) for the PostgreSQL service

2. Replaced open-source PostgreSQL v15.8 with VMware Postgres v16.6

Limitations
1. Direct in-place upgrade from a previous Postgres tile (v1.2.x or v1.1.x) to release v10.0.0 is not
supported. The next release will support direct in-place upgrades from previous Postgres tiles
(v1.x).

Compatibility


The following components are compatible with this release:

Element | Details
Version | 10.0.0
Release date | 3 Feb 2025
Software component version | VMware PostgreSQL 16.6
Compatible Ops Manager versions | 3.0 and above
Stemcell version | Ubuntu Jammy 1.737, FIPS Ubuntu Jammy 1.719
Compatible VMware Tanzu Application Service for VMs versions | 6.0 and 10.0
IaaS support | AWS, Azure, GCP, OpenStack, vSphere
IPsec support | No


Planning Guide for Tanzu for Postgres on Cloud Foundry

This topic provides the information you need when planning a deployment of Tanzu for Postgres on Cloud
Foundry.

There is one service offering:

On-demand service provides a dedicated VM running a Postgres instance. App developers can
provision an instance for any of the on-demand plans and configure certain Postgres settings.

Architecture
The available plans are:

Single instance

Postgres HA

The following diagram shows the architecture for the single instance plan:

The following diagram shows the architecture for the Postgres HA plan:


Operators configure plans in VMware Tanzu Operations Manager.

Developers can create instances of each plan when needed, until a quota is reached, and bind their apps to
the instances.

The preceding diagrams show the postgres service broker creating a single postgres instance or a postgres
HA cluster when you run the cf create-service command, based on the service plan you have selected.
They also show the user app bound to the postgres service. Each instance has its own VM.
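For example, a developer can inspect the available plans and create an instance with the cf CLI. This is a minimal sketch; the plan name on-demand-postgres-db and instance name my-postgres-db are illustrative assumptions, not fixed names:

# List the plans that the postgres offering exposes in the Marketplace
cf marketplace -e postgres

# Create an instance of one of the plans (plan and instance names are examples)
cf create-service postgres on-demand-postgres-db my-postgres-db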

Smoke Tests
Smoke tests provide a simple check that your service instances can be created properly, without
affecting any already running service instances. Smoke tests run as an errand after tile installation. You
can deactivate smoke tests by clearing the smoke test option when reviewing changes.

Smoke tests run in the org system and the space postgres-smoke-test.

VMware recommends that you use the smallest plan for smoke tests, which reduces installation time.

If any errors occur while running a smoke test, the service instance persists so that you can retrieve its
logs. Remove these service instances from the postgres-smoke-test space when you are done, as shown
in the sketch below. Ideally, after a tile is installed, there are no service instances in the
postgres-smoke-test space.
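For example, a minimal cleanup sketch; LEFTOVER-INSTANCE-NAME is a placeholder for whatever instance a failed smoke test left behind:

# Target the smoke-test space, list any leftovers, and delete them
cf target -o system -s postgres-smoke-test
cf services
cf delete-service LEFTOVER-INSTANCE-NAME -f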

Postgres plans let you determine which plan is used for smoke tests through the Smoke Test
checkbox. If no plan has Smoke Test enabled, no smoke tests are run. If multiple plans have
Smoke Test enabled, only one plan is selected for testing.

Availability Zones
The Tanzu for Postgres on Cloud Foundry HA plan uses the pg_autofailover extension and supports
configuring multiple availability zones (AZs) to provide high availability and resiliency. The tile supports
selecting multiple AZs for both the Monitor instance VMs and the Postgres DB VMs. Both kinds of service
instances are distributed evenly among the selected AZs.

AZ Assignment for a Resilient HA Cluster


You can make the Tanzu for Postgres on Cloud Foundry HA plan resilient against the loss of an AZ. To
achieve this, the AZs for Monitor and Postgres instances must be separate.

For example, if your IaaS provider has 3 AZs (az0, az1, az2) then one AZ distribution for resiliency is:

Monitor AZ: az0

Postgres AZs: az1 and az2

In this assignment, the loss of az0, az1, or az2 does not reduce service availability. However, to maintain
resiliency after such a loss, the operator must back up the cluster's data and restore it within a new
resilient HA cluster. This requires re-binding the apps to the new service.

Service Gateway Access for Tanzu for Postgres on Cloud Foundry
Service-gateway access enables a Tanzu for Postgres on Cloud Foundry service instance to connect to
external components that are not on the same foundation as the service instance. These components might
be on another foundation or hosted outside of the foundation.

For related procedures, see:

Enable service-gateway access for Postgres

Create a Postgres service instance with service-gateway access

There are multiple use cases for service-gateway access, such as:

Accessing Postgres from apps deployed to VMware Tanzu Application Service for VMs (TAS for VMs)
in a different foundation.

Using Postgres as a service for apps that are not deployed to TAS for VMs.

Architecture
Service-gateway access to Tanzu for Postgres on Cloud Foundry instances leverages the TCP Router in
TAS for VMs.

Any Postgres requests that an app makes are forwarded through DNS to a load balancer that can route
traffic from outside to inside the foundation. This load balancer (the TCP Router) opens a range of ports that
are reserved for any TAS application traffic. This has to be configured in the TAS for VMs tile’s Tanzu
Operations Manager form.

When an app developer creates a service instance on a plan with service-gateway access enabled, they
have to specify the port in the request parameters from the configured port range in the aforementioned
TCP router. The load balancer then forwards the requests for this Tanzu for Postgres on Cloud Foundry
service instance to the TCP router.
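A sketch of what such a create request can look like from the cf CLI. The plan name and the port request-parameter key are hypothetical placeholders, not the tile's documented names:

# Create an instance on a service-gateway-enabled plan, requesting a port
# from the configured TCP router range ("port" is an assumed parameter key)
cf create-service postgres on-demand-postgres-gateway my-external-db -c '{"port": 1030}'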


About multi-site replication


This topic describes multi-site replication for disaster recovery in VMware Tanzu for Postgres on Cloud
Foundry.

This documentation uses the terms service instance and cluster interchangeably.

Overview
If a TAS foundation is running a Postgres service instance, you can use the Postgres for TAS Tile to
configure a Disaster Recovery (DR) solution. This is referred to as multi-site replication for disaster
recovery.

When this is configured, if the primary foundation goes down, a standby replica in another TAS foundation
must be updated to take over the functions of the primary cluster.

The foundation types are:

Primary Foundation: Deployed in your main data center, this foundation generally receives the
majority of app traffic. Tanzu for Postgres assumes that the primary service instance is deployed
on this foundation when it is healthy.

Secondary Foundation: Deployed in your failover data center, this foundation receives minimal or no
app traffic. Tanzu for Postgres assumes that the standby/secondary service instance is deployed
on this foundation unless a developer triggers a failover.


Failover
Failover is the process of switching from the primary foundation to the secondary foundation. It must be
triggered manually if the primary foundation becomes unavailable or unreachable.

During failover, there will be downtime due to the following:

The time required to detect the failure

The time required to promote the secondary/standby service instance to primary

Any data written to the old primary that hadn’t been replicated to the standby before the failover will be lost.
This usually occurs if the failover happens before the replication lag (WAL shipping) is caught up.

PostgreSQL replication in a Patroni cluster is based on Write-Ahead Log (WAL)
syncing between the cluster leader and its replicas. Replication might occasionally lag due to
networking issues, missing WAL segments (on rotation or recycle), high CPU usage on the
Patroni nodes, or hardware failure.
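Before deciding to fail over, you can gauge the lag from the current leader; a minimal sketch using psql, assuming you can reach the leader and have pgadmin credentials:

# On the leader: replay_lsn trailing sent_lsn indicates replication lag
psql "host=LEADER-HOST dbname=postgres user=pgadmin" -c "SELECT client_addr, state, sent_lsn, replay_lsn FROM pg_stat_replication;"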


For more information, see Disaster Recovery using multi-site replication.

Restoration to initial state


When the primary foundation is back online and you want to return to the initial state (with the old primary
foundation hosting the primary cluster again, and the new primary foundation acting as the standby), the
following method is supported, as described in the Patroni documentation:

Rebuild the standby cluster from scratch:

For the current v10.1.0 release, we suggest letting the newly promoted primary run as it is and
creating a new standby in the primary foundation.

For more information, see Restoration to initial state in Disaster Recovery using multi-site
replication.


Operator Guide for Tanzu for Postgres on Cloud Foundry

This topic for operators outlines best practices when deploying Tanzu for Postgres on Cloud Foundry.

Tanzu for Postgres on Cloud Foundry was formerly known as VMware Postgres for
VMware Tanzu Application Service.

Best Practices
VMware recommends that operators follow these guidelines:

Resource allocation: Work with app developers to anticipate memory requirements and to configure VM
sizes. App developers can choose from two different plans: Single Instance Plan or Postgres HA Plan, each
with its own VM size and quota.

Backing up data: Configure automatic backups so that data can be restored in an emergency. Validate the
backed-up data with a test restore.

Errands
The following sections list the errands included in Tanzu for Postgres on Cloud Foundry.

Post-Deploy Errands
Tanzu for Postgres on Cloud Foundry includes the following post-deploy errands.

Tanzu Operations Manager UI name | BOSH errand name | Description
Register on-demand broker | register-broker | Registers the on-demand Postgres broker with Tanzu Application Service for VMs to offer the postgres service (on-demand plans).
Upgrade all on-demand service instances | upgrade-all-service-instances | Upgrades on-demand service instances to use the latest plan configuration, service releases, and stemcell. This causes downtime to any service instances with available upgrades.

The following post-deploy errands do not run by default when Apply Changes is triggered. These errands
help operators to troubleshoot and maintain their service fleet.


Tanzu Operations Manager UI name | BOSH errand name | Description
Recreate all on-demand service instances | recreate-all-service-instances | Re-creates on-demand service instances one by one. This causes downtime for all service instances.
Find orphan on-demand service instances | orphan-deployments | Finds all orphan on-demand service instances. The cleanup of orphan on-demand service instances can be carried out manually.

Pre-Delete Errands
The following pre-delete errands are run by default when the Tanzu for Postgres on Cloud Foundry tile is
deleted:

Tanzu Operations Manager UI name | BOSH errand name | Description
Delete all on-demand service instances and deregister broker | delete-all-service-instances-and-deregister-broker | Deletes all on-demand instances and deregisters the on-demand Postgres broker.

Install Tanzu for Postgres on Cloud Foundry


This topic tells you how to install Tanzu for Postgres on Cloud Foundry by adding it to VMware Tanzu
Operations Manager.

Download and Install the Tile


To add Tanzu for Postgres on Cloud Foundry to VMware Tanzu Operations Manager, follow the procedure
for adding Tanzu Operations Manager tiles:

1. Download the Tanzu for Postgres on Cloud Foundry file from the Broadcom Customer Support
Portal. Select the latest release from the Releases drop-down menu.

2. In the Tanzu Operations Manager Installation Dashboard, click Import a Product to upload the
Tanzu for Postgres on Cloud Foundry file.

3. Click the + sign next to the uploaded product description to add the tile to your staging area.

4. To configure Tanzu for Postgres on Cloud Foundry, click the newly added tile. See configuration
instructions in the following sections.

5. Click Apply Changes.

Assign AZs and Networks


To assign AZs and networks, click the Assign AZs and Networks settings tab.


Assign AZs
You can assign multiple availability zones (AZs) to Postgres jobs. However, this does not ensure high
availability. You must select AZs that are in the service network you configured in your BOSH Director. To
assign AZs:

1. Select Assign AZs and Networks.

2. Click Save.

Select Networks
To use the Tanzu for Postgres on Cloud Foundry on-demand service, you must select a network in
which the service instances are created.

1. In the Assign AZs and Networks tab, select a network. VMware recommends that each type of
service run in its own network. Usually the service broker network and service instance networks
are the same.

Configure On-Demand Service Settings


To configure settings that apply across the whole on-demand service offering:

1. In the Tanzu for Postgres on Cloud Foundry tile, select On-Demand Service Settings.

2. Enter the Maximum service instances across all on-demand plans. The maximum number of
instances you set for all your on-demand plans combined cannot exceed this number.

3. Select the Allow outbound internet access from service instances check box. You must select
this check box to allow external log forwarding, to send backup artifacts to external destinations, and
to communicate with an external BOSH blobstore.

Outbound network traffic rules also depend on your IaaS settings. Consult your
network or IaaS admin to ensure that your IaaS allows outbound traffic to the
external networks you need.

4. (Optional) Use the Maximum Parallel Upgrades text box to configure the maximum number of
Postgres service instances that can be upgraded at the same time.

When you click Apply Changes, the on-demand broker upgrades all service instances. By default,
each instance is upgraded serially. Allowing parallel upgrades reduces the time taken to apply
changes.

5. (Optional) Use the Number of Canaries to run before proceeding with upgrade field and the
Specify Org and Space that Canaries will be selected from? options to specify settings for
upgrade canaries. Canaries are service instances that are upgraded first. The upgrade fails if any
canaries fail to upgrade.


You can limit canaries by number and by org and space. To use all service instances in an org and
space as canaries, set the number of canaries to zero. This upgrades all service instances in the
selected org and space first.

If you specify that canaries should be limited to an org and space that has no service instances,
the upgrade fails.

Canary upgrades comply with the Maximum Parallel Upgrades settings. If you
specify three canaries and a Maximum Parallel Upgrades of two, then two canaries
upgrade, followed by the third.

Configure On-Demand Plan Settings


You can configure multiple on-demand plans with memory and disk sizes suited to different use cases. The
configuration of resources varies depending on your IaaS. To add and configure each on-demand service
plan:

1. In the Tanzu for Postgres on Cloud Foundry tile, select On-Demand Plans.

2. Click Add to add an on-demand plan.


3. Configure the settings in the following table for your on-demand plans and then click Save.

Field | Default | Description
Plan name | on-demand-postgres-db | The name that you choose for the plan. This is displayed in the Marketplace. VMware recommends that you give your plans descriptive names based on their configuration.
Plan ID | Empty | Provide a random name.
Plan quota | 20 | The maximum number of instances of this plan that app developers can create.
AZs to deploy postgres instances of this plan | None selected | The AZs in which to deploy the Postgres instances from the plan. These must be AZs of the service network, which are configured in the BOSH Director tile. If you select multiple AZs, instances are distributed randomly among them.
Server VM type | Varies depending on IaaS | VMware recommends that the persistent disk is at least 2.5x the VM memory for on-demand service instances.
Server disk type | Varies depending on IaaS | VMware recommends that the persistent disk is at least 2.5x the VM memory for on-demand service instances.
Postgres client timeout | 3600 | The server timeout for an idle client, specified in seconds. Adjust this setting as needed.
Postgres TCP keepalive | 60 | The interval in seconds at which TCP ACKs are sent to clients. Adjust this setting as needed.
Max clients | 1000 | The maximum number of clients that can be connected at any one time. Adjust this setting as needed.

Preparing for TLS


When you enable Transport Layer Security (TLS), Tanzu for Postgres on Cloud Foundry
service instances are provisioned with a Certificate Authority (CA) certificate. Apps can then establish
encrypted connections with the Tanzu for Postgres on Cloud Foundry service instance.

Generating a Certificate
The certificate deployed on the Tanzu for Postgres on Cloud Foundry service instance is a server
certificate. Either the operator provides the certificate to CredHub or CredHub generates it. CredHub
generates the server certificate by using a CA certificate. CredHub is a component designed for centralized
credential management in Tanzu Application Service for VMs. It is deployed on the same VM as the BOSH
Director.

Apps can use a Postgres database password or the CA certificate to authenticate with Tanzu for Postgres
on Cloud Foundry service instances. Apps that communicate with Tanzu for Postgres on Cloud Foundry
must have access to the CA certificate in order to validate that the server certificate can be trusted.
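For example, a client can open a verified TLS connection with psql; a sketch, assuming the CA certificate is saved locally as ca.pem and the host, database, and user come from the service binding:

# Verify the server certificate against the CA before connecting
psql "host=INSTANCE-HOST port=5432 dbname=DB-NAME user=USER sslmode=verify-full sslrootcert=ca.pem"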

For more information about providing or generating a certificate, see Preparing for Transport Layer Security
(TLS).

Postgres On-Demand Single Instance Plan Configuration


This topic explains the possible combinations of HA and TLS options when creating on-demand single
instance plans with Tanzu for Postgres on Cloud Foundry.


Supported Configurations for Single Instance Plans


Possible configurations TLS HA

TLS enabled but HA deactivated ✅ ❌

Both HA and TLS deactivated ❌ ❌

TLS Enabled but HA Deactivated


To create an on-demand plan with TLS enabled and HA deactivated:
Select Enable TLS

Deselect Enable HA

Both TLS and HA Deactivated


To create an on-demand plan with both TLS and HA deactivated:

Deselect Enable TLS

Deselect Enable HA



Postgres On-Demand HA Plan Configuration


This topic explains the possible combinations of HA and TLS options when creating on-demand HA
plans with Tanzu for Postgres on Cloud Foundry.


Supported Configurations for HA Plans


Possible Configurations TLS HA

Both HA and TLS enabled ✅ ✅

Both HA and TLS Enabled


To create an on-demand plan with both HA and TLS enabled:
Select Enable TLS.

Select Enable HA.

Tanzu for Postgres on Cloud Foundry does not support HA plans with TLS disabled.

Postgres Release Acceptance Tests (PGATS)


The postgres-release acceptance tests run several deployments of the postgres-release in order to exercise
a variety of scenarios:

Verify that customizable configurations are properly reflected in the PostgreSQL server:

    Roles

    Databases

    Database extensions

    Properties, for example, max_connections

Test supported upgrade paths from previous versions

Test SSL support, backup and restore, and hooks

Get the Code


$ go get github.com/cloudfoundry/postgres-release
$ cd $GOPATH/src/github.com/cloudfoundry/postgres-release

Environment Setup
Upload to the BOSH Director the latest stemcell and your dev postgres-release:

$ bosh upload-stemcell STEMCELL_URL_OR_PATH_TO_DOWNLOADED_STEMCELL
$ bosh create-release --force
$ bosh upload-release

The acceptance tests are written in Go. Make sure that:

Golang (>=1.7) is installed on the machine

the postgres-release is inside your $GOPATH

Some test cases make use of bbr. Make sure that it is available in your $PATH.

Go dependencies are managed by using dep. Make sure that it is installed.


If you are not using BOSH Lite according to the quick start documentation, note that:

PGATS must have access to the target BOSH director and to the postgres VM deployed from it.

The BOSH director must be configured with the cloud_config.yml.

The director must be configured with verifiable certificates because PGATS uses the bosh-cli
director package for programmatic access to the Director API.

Configuration
A config file for bosh-lite looks similar to this example:

$ cat > $GOPATH/src/github.com/cloudfoundry/postgres-release/pgats_config.yml << EOF
---
bosh:
  target: 192.168.50.6
  credentials:
    client: admin
    # bosh interpolate creds.yml --path /admin_password
    client_secret: admin
    # insert CA cert, e.g. from creds.yml
    # bosh interpolate creds.yml --path /director_ssl/ca
    ca_cert: |+
      -----BEGIN CERTIFICATE-----
      -----END CERTIFICATE-----
  use_uaa: true
cloud_configs:
  default_azs: [z1]
  default_networks:
  - name: default
  default_persistent_disk_type: 10GB
  default_vm_type: small
EOF

The full set of config parameters is explained below.

bosh parameters are used to connect to the BOSH director that hosts the test environment:

bosh.target (required) Public BOSH director IP address.

bosh.use_uaa (required) Set to true if the BOSH director is configured to delegate user
management to the UAA server.

bosh.credentials.client (required) User name for the BOSH director login.

bosh.credentials.client_secret (required) Password for the BOSH director login.

bosh.credentials.ca_cert (required) BOSH director CA Certificate.

cloud_config parameters are used to generate a BOSH v2 manifest that matches your IaaS configuration:

cloud_config.default_azs List of availability zones. It defaults to [z1].

cloud_config.default_networks List of networks. It defaults to [{name: default}].

cloud_config.default_persistent_disk_type Persistent disk type. It defaults to 10GB.

cloud_config.default_vm_type VM type. It defaults to small.


Other parameters:

postgres_release_version The postgres-release version to test. If not specified, the latest
version uploaded to the director is used.

postgresql_version The PostgreSQL version expected to be deployed. You only need to specify
it if your changes include a PostgreSQL version upgrade. If not specified, the version in the latest
published postgres-release is expected to be deployed.

Run Tests
Run all the tests with:

$ export PGATS_CONFIG=$GOPATH/src/github.com/cloudfoundry/postgres-release/pgats_config.yml
$ $GOPATH/src/github.com/cloudfoundry/postgres-release/src/acceptance-tests/scripts/test

Run a specific set of tests with:

$ export PGATS_CONFIG=$GOPATH/src/github.com/cloudfoundry/postgres-release/pgats_config.yml
$ $GOPATH/src/github.com/cloudfoundry/postgres-release/src/acceptance-tests/scripts/test <some test packages>

The PGATS_CONFIG environment variable must point to the absolute path of the configuration file.

Hooks
This topic discusses how you can run custom code with Tanzu for Postgres on Cloud Foundry.

Scripts to Run Custom Code


The postgres job has two monit processes that you can use to run custom code:

postgres runs the databases.hooks scripts before or after PostgreSQL starts or stops.

pg_janitor periodically runs the janitor.script, and it also takes care of creating roles and
databases.

If you plan to use these scripts to run custom code, consider the following:

The return code of the databases.hooks scripts is not propagated to the monit control job.

The output of the hook scripts is logged into:

/var/vcap/sys/log/postgres/hooks.std{out,err}.log

/var/vcap/sys/log/postgres/janitor{,.err}.log

The time spent in databases.hooks.pre-start delays the start of PostgreSQL. In the same way,
the time spent in databases.hooks.pre-stop delays the stop of PostgreSQL. This can delay a
deployment. For this reason, VMware recommends avoiding long-running actions in the
hooks and leveraging the databases.hooks.timeout property to prevent unexpected delays
(see the sketch after this list).


The following environment variables are available in the scripts:

DATA_DIR: the PostgreSQL data directory. For example,


/var/vcap/store/postgres/postgres-x.x.x.

PORT: the PostgreSQL port. For example, 5432.

PACKAGE_DIR: the PostgreSQL binaries directory. For example,


/var/vcap/packages/postgres-x.x.x.

For example, to use psql in your hook, you can specify: ${PACKAGE_DIR}/bin/psql -p
${PORT} -U vcap postgres -c "\l"

During the startup sequence, databases.hooks.post-start and pg_janitor may run
concurrently. This means that databases.hooks.post-start may or may not run before pg_janitor
actually creates the roles and databases. If you need a database or a role there, wait until it has
been created before using it.

Since monit starts and stops postgres based on the PostgreSQL process ID, the
databases.hooks.post-stop script may run concurrently with the restart of PostgreSQL.
Likewise, the databases.hooks.post-start script may run concurrently with the stop of
PostgreSQL.
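The following manifest sketch illustrates the timeout consideration above: a short pre-start hook with the databases.hooks.timeout property set. The 30-second value and the hook body are illustrative assumptions:

databases:
  hooks:
    timeout: 30        # seconds; illustrative value
    pre_start: |
      #!/bin/bash
      # Keep pre-start work short: its runtime delays PostgreSQL startup
      echo "running pre-start checks"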

Using Hooks to Replace run_on_every_startup Property


The run_on_every_startup property ran a list of SQL commands at each PostgreSQL start,
against a given database, as the vcap user. This property was removed in postgres-release v29. You can
migrate from this property to hooks instead.

Replace:

properties:
databases:
databases:
- name: sandbox
run_on_every_startup:
- "SQL-QUERY1"
- "SQL-QUERY2"

with:

databases:
  hooks:
    post_start: |
      #!/bin/bash
      for i in {10..0}; do
        result=$(${PACKAGE_DIR}/bin/psql -p ${PORT} -U vcap postgres -t -P format=unaligned -c "SELECT 1 from pg_database WHERE datname='sandbox'")
        if [ "$result" == "1" ]; then
          break
        fi
        echo "Database sandbox does not exist yet; trying $i more times"
        sleep 1
      done
      if [ "$i" == "0" ]; then
        echo "Time out waiting for the database to be created"
        exit 1
      fi
      ${PACKAGE_DIR}/bin/psql -p ${PORT} -U vcap sandbox -c "SQL-QUERY1"
      ${PACKAGE_DIR}/bin/psql -p ${PORT} -U vcap sandbox -c "SQL-QUERY2"

Supported Extensions
This topic provides the list of supported extensions in Tanzu for Postgres on Cloud Foundry.

Name | Version | Description
citext | 1.6 | Data type for case-insensitive character strings
fuzzystrmatch | 1.1 | Determine similarities and distance between strings
hstore | 1.8 | Data type for storing sets of (key,value) pairs
pg_stat_statements | 1.10 | Track planning and execution statistics of all SQL statements executed
pgcrypto | 1.3 | Cryptographic functions
plpgsql | 1.0 | PL/pgSQL procedural language
plr | 8.4.6 | Load R interpreter and execute R script from within a database
postgis | 3.4.0 | PostGIS geometry and geography spatial types and functions
postgis_tiger_geocoder | 3.4.0 | PostGIS tiger geocoder and reverse geocoder
postgis_topology | 3.4.0 | PostGIS topology spatial types and functions
uuid-ossp | 1.1 | Generate universally unique identifiers (UUIDs)
pgvector | 0.5.0 | Vector data type and ivfflat access method
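A supported extension still has to be created in the target database before use; a sketch with psql, assuming the connection details come from a service binding:

# Enable one of the supported extensions in the bound database
psql "host=INSTANCE-HOST dbname=DB-NAME user=USER" -c "CREATE EXTENSION IF NOT EXISTS pgcrypto;"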

Enable Service Gateway Access for Postgres


You can enable service gateway access for Tanzu for Postgres on Cloud Foundry. This topic tells you how.

Service gateway access enables a Tanzu for Postgres on Cloud Foundry on-demand instance to connect to
external components that are not on the same foundation as the service instance. These components might
be on another foundation or hosted outside of the foundation.

To enable service gateway access for an on-demand offering, you must enable TCP routing by using the
Tanzu Application Service for VMs tile.

Enable TCP Router by Using the TAS for VMs Tile


TCP routing is deactivated by default.

To enable TCP routing:

1. Go to the Networking tab on the sidebar of the TAS for VMs tile.

2. Under TCP routing, select Enable.

3. For TCP routing ports, enter a range of valid ports. For example, 1024-1123.


4. The ports you assign must not overlap with any other application or tile.

5. Apply your changes in Tanzu Operations Manager for the TAS for VMs tile to create the TCP router.

To enable service gateway access for an on-demand offering:

1. Enable Service Gateway Using the TAS for VMs Tile

Enable Service Gateway by Using the TAS for VMs Tile


Service gateway is deactivated by default.

To enable service gateway:

1. Go to the On-Demand Service Settings tab on the sidebar of the TAS for VMs tile.

2. Under Enable Services Gateway, select Yes and specify the FQDN for the TCP Router.


3. Define a port range within the TCP routing ports range defined in the previous section. Make sure
this range does not conflict or overlap with port ranges used by any other tiles.

Providing this port range is mandatory so that a free port can be assigned dynamically to a
service instance that is configured with service gateway enabled.

4. Apply your changes in Tanzu Operations Manager for the TAS for VMs tile to create the service
gateway.

Developer Workflow
For instructions for app developers, see Create a Service Instance with Service Gateway Access.

Backup and Restore


VMware Postgres for TAS supports full and incremental backup and restore.

You can perform one of the following manual full backups:

Backup and Restore with ADBR plugin

Backup and Restore with Bosh Backup and Restore (Legacy)

You can also do a manual incremental backup and restore using BOSH errands:

Stream Incremental Backup and Restore

You can also configure automated backups using full and incremental scheduled backups:


Configuring automated backups

Configuring automated backups


You can configure VMware Tanzu Postgres for TAS to automatically back up databases to external storage.

VMware Tanzu Postgres for TAS backs up your database on a schedule. You configure this schedule with a
cron expression.

Configuring a cron expression overrides the default schedule for your service instance.

Currently, we support backing up to Amazon S3.

To back up your database to Amazon S3:


Create a Custom policy and access key

Configure backups in Ops Manager

Create a Custom policy and access key


VMware Tanzu Postgres for TAS accesses your S3 bucket through a user account. VMware recommends
that this account be used only by VMware Tanzu Postgres for TAS. You must apply a minimal policy that
enables the user account to upload backups to your S3 bucket: give the policy permission to list the
bucket and upload objects to it.

The procedure in this section assumes that you are using an Amazon S3 bucket.

To create a policy and access key in Amazon Web Services (AWS):

1. Create a policy for your VMware Tanzu Postgres for TAS user account.

In AWS, create a new custom policy by following this procedure in the AWS documentation.

Paste in the following permissions:

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PostgresLBackupPolicy",
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:ListMultipartUploadParts",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::MY_BUCKET_NAME/*",
"arn:aws:s3:::MY_BUCKET_NAME"
]
}
]
}


2. Record the Access Key ID and Secret Access Key user credentials for a new user account by
following this procedure in the AWS documentation. Ensure you select Programmatic access and
Attach existing policies to user directly. You must attach the policy you created in the previous
step.
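The same setup can be scripted with the AWS CLI; a sketch, assuming the policy JSON above is saved as postgres-backup-policy.json, and using illustrative policy and user names:

# Create the minimal policy, a dedicated user, attach the policy, and issue keys
aws iam create-policy --policy-name PostgresBackupPolicy --policy-document file://postgres-backup-policy.json
aws iam create-user --user-name postgres-backup-user
aws iam attach-user-policy --user-name postgres-backup-user --policy-arn arn:aws:iam::ACCOUNT-ID:policy/PostgresBackupPolicy
aws iam create-access-key --user-name postgres-backup-user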

Configure backups in Ops Manager


Use Ops Manager to connect VMware Postgres for TAS to your S3 account.

Prerequisite: Before beginning this procedure, you must have an S3 bucket in which to store the backups.

1. In Ops Manager, open the VMware Tanzu for Postgres on Cloud Foundry tile Backups pane.

2. Select Upload backups to AWS S3.

Configure the fields as follows:

Field | Instructions
Access Key ID and Secret Access Key | Enter the S3 Access Key ID and Secret Access Key that you created in Create a Custom Policy and Access Key.
Endpoint URL | Enter the S3-compatible endpoint URL for uploading backups. The URL must start with http:// or https://.
Region | Enter the region where your bucket is located.
Bucket Name | Enter the name of your bucket. Do not include an s3:// prefix, a trailing /, or underscores. VMware recommends using the naming convention DEPLOYMENT-backups. For example, sandbox-backups.
Force path style access to bucket | The default behavior uses a virtual host-style URL. Select this check box if you use Amazon S3 and your bucket name is not compatible with virtual hosted-style URLs, or if you use an S3-compatible endpoint such as Minio that might require path-style URLs. If you are using a blobstore that uses a specific set of domains in its server certificate, add a new wildcard domain or use path-style URLs if supported by the blobstore. For general information about the deprecation of S3 path-style URLs, see the AWS blog posts Amazon S3 Path Deprecation Plan – The Rest of the Story and the subsequent Update to Amazon S3 Path Deprecation Plan.
Bucket Path | Enter the path in the bucket in which to store backups. You can use this to keep the backups from this foundation separate from those of other foundations that might also back up to this bucket. For example, Foundation-1. This field is only available as of v2.10.3.
Cron Schedule for full backup | Enter a cron expression using standard syntax. The cron expression sets the schedule for taking a full backup of each service instance. Test your cron expression using a website such as Crontab Guru.
Cron Schedule for incremental backup | Enter a cron expression using standard syntax. The cron expression sets the schedule for taking an incremental backup of each service instance. Test your cron expression using a website such as Crontab Guru.
Backup retention count | Enter the number of full backups to be retained. The default value is 2.
Backup timeout | The period of time to allow for a backup to complete, measured in minutes.
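For example, illustrative cron expressions (standard five fields: minute, hour, day of month, month, day of week):

# Full backup every day at 02:00
0 2 * * *
# Incremental backup at the top of every hour
0 * * * *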

Backup and Restore with Bosh Backup and Restore (Legacy)
This topic describes how to use BOSH Backup and Restore (BBR) to back up and restore a BOSH
deployment. If you have deployed a high availability (HA) deployment, only the instances other than the
“monitor” node can be backed up.

The ADBR plugin is the recommended method to perform backup and restore. The bbr
method is currently supported, but using ADBR is highly recommended.

Backup
Back up a BOSH deployment

1. Run the BBR pre-backup check to confirm that your BOSH Director is reachable and has a
deployment that you can back up with BBR:

bbr deployment -d postgres_service_instance_deployment pre-backup-check

2. If the pre-backup check command fails, run the command again adding the --debug flag to enable
debug logs:

bbr deployment -d postgres_service_instance_deployment pre-backup-check --debug

3. If the pre-backup check succeeds, run the BBR backup command to back up your BOSH
deployment:

bbr deployment -d postgres_service_instance_deployment backup --artifact-path <name-backup>

This creates 2 files: a backup artifact, and a metadata file that looks like this:

instances:
- name: postgres-instance
  index: "0"
  artifacts:
  - name: postgres-backup
    checksums:
      ./pgdump: ae407e4d9ada961f1e12f96e7e88c80b5f838f22067519eeea92c2eb6f3c948c
backup_activity:
  start_time: 2023/08/03 15:20:33 IST
  finish_time: 2023/08/03 15:20:39 IST

Both files must be saved together in the same directory for restore to use them.

Restore
You can only restore by using files from your local machine/jumpbox with access to the service instance’s
network. The backup files must be those you downloaded by running the bbr command.

1. If the service instance no longer exists, you must re-create it. For example (plan and instance
names are placeholders):

cf create-service postgres PLAN-NAME SERVICE-INSTANCE-NAME -w

After this, edit the metadata file’s “index” fields corresponding to each artifact. To get the index of
an instance, look at the “index” column in the output from the following command:

bosh -d <service instance deployment> instances -i


2. To restore a BOSH deployment, ensure that the BOSH deployment backup artifact is in the
directory from which you run BBR.

bbr deployment -d postgres_service_instance_deployment restore --artifact-path <PATH-TO-DEPLOYMENT-BACKUP>

Restoring from S3 Backup Artifacts


You can use the Amazon S3 backup files that originate from a non-HA service as a source for the bbr
restore, without any modification. Download the backup artifacts from the S3 bucket and run the bbr restore
command as described in the restore section.

To restore an HA service instance using an S3 scheduled backup artifacts folder, which should have
originated from another HA service instance, see Sample App for Postgres for TAS.

Structure of S3 Backup Artifacts


For example: Suppose you take a backup of your HA service deployment, service-instance_ffde758f-093d-4877-ba69-a5307a9884db, at time 20231018T16:29:17UTC, and the deployment has 3 etcd instances and 2 Postgres instances, one of which is the primary and the other a replication node. The data of both Postgres nodes is fully backed up, despite the duplication of data.

The reason for this choice is that the primary role can move from one instance VM to another before a restore. The backup data of the other instance VMs cannot be left empty or dummy, because one of the replication nodes might have become primary between the time of backup and the time of restore.

If you set the S3 prefix to be pg-test-2 at the time of Tile configuration, and if you open your S3 bucket
containing the backup artifacts, you can see the following structure:

Non-HA Backup:

```pre
pg-test-2
└── service-instance_ffde758f-093d-4877-ba69-a5307a9884db
    └── postgres-instance
        └── 7dd4cf9f-6fa0-44ef-a63d-4875e8690fba
            └── 20231018T16:29:17UTC
                ├── metadata
                └── postgres-instance-0-postgres-backup.tar
```

HA backup:

```pre
pg-test-2
└── service-instance_ffde758f-093d-4877-ba69-a5307a9884db
    └── postgres-instance
        ├── 7dd4cf9f-6fa0-44ef-a63d-4875e8690fba
        │   └── 20231018T16:29:17UTC
        │       ├── metadata
        │       └── postgres-instance-0-postgres-backup.tar
        └── 52be4e9b-28eb-42be-b295-40ee01ea5607
            └── 20231018T16:29:17UTC
                ├── metadata
                └── postgres-instance-1-postgres-backup.tar
```
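To inspect this layout without opening the S3 console, you can list the prefix with the AWS CLI. A minimal sketch, assuming the bucket and prefix names from the examples in this topic and AWS credentials already configured:

```
# Recursively list all backup artifacts under the configured prefix.
aws s3 ls --recursive s3://pg-tile-backup-test1/pg-test-2/
```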

Scheduled Backup
Scheduled backup is no longer supported after release 1.2.0.

Backup and Restore with ADBR plugin


This topic describes how to use the cf ADBR (Application Data Backup and Restore) plug-in to back up and restore a running Postgres service instance. The plug-in cannot list or restore backups created by a deleted service instance.

Prerequisite: adbr plug-in


1. Before you can manually back up or restore a service instance, you must have installed the
ApplicationDataBackupRestore (adbr) plug-in for the Cloud Foundry Command Line Interface (cf
CLI) tool.

To install the plug-in, run:

cf install-plugin -r CF-Community "ApplicationDataBackupRestore"

The procedures on this page assume that you are using the adbr plug-in v0.7.0 or later.

2. Make sure to configure the S3 backup values during tile configuration, in order to provide a backup
store.
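To confirm the plug-in is installed and check its version, you can list the installed cf CLI plug-ins; this assumes the cf CLI is on your PATH:

```
cf plugins | grep -i backup
```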

Backup a service instance


Back up a service instance using ADBR plugin

1. Make sure that the service instance to be backed up is created and running:

cf services | grep BACKUP_SERVICE_NAME

2. Run the backup command with adbr:

cf adbr backup BACKUP_SERVICE_NAME

Where BACKUP_SERVICE_NAME is the service instance you are backing up.

3. Check the status of your backup and confirm if it is successful:

cf adbr get-status BACKUP_SERVICE_NAME

For example:

$ cf adbr get-status test_backup_svc

Getting status of service instance test_backup_svc in org my-org / space my-space as user...
[Tue Sep 16 18:08:25 UTC 2024] Status: Backup was successful. Uploaded 3.2M
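Because the backup runs asynchronously, you may want to poll until it completes. The following is a minimal sketch (not part of the tile); the "failed" match is an assumption about the error wording, so adjust it to the status text you observe:

```
# Poll the adbr status until the backup succeeds or fails.
while true; do
  status="$(cf adbr get-status BACKUP_SERVICE_NAME | tail -1)"
  echo "${status}"
  case "${status}" in
    *successful*) break ;;    # e.g. "Status: Backup was successful. Uploaded 3.2M"
    *failed*)     exit 1 ;;   # assumed failure wording
  esac
  sleep 30
done
```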

Restore a service instance (NON-HA and TLS NON-HA)


When restoring a service instance, you create a new service instance and then restore the backup to it. Finally, you rebind and restage the apps that use the service instance.

1. Make sure you have completed the backup of service and the backup was successful.

2. Create a new service instance of the desired plan (except HA):

cf create-service postgres POSTGRES_PLAN RESTORE_SERVICE_NAME

Where POSTGRES_PLAN is the same plan as the backup service instance plan.

3. Retrieve the backup artifacts for your service by running the command:

cf adbr list-backups BACKUP_SERVICE_NAME

Where BACKUP_SERVICE_NAME is the backup service you created earlier during backup.

For example:

$ cf adbr list-backups test_backup_svc

Getting backups of service instance test_backup_svc in org my-org / space my-space as user...
Backup ID                                        Time of Backup
g51bf733-2338-4126-9e17-50cacn7o7c8d_158982532   Tue Sep 16 18:08:04 UTC 2024

Record the Backup ID from the output.

4. Perform restore by running the command:

cf adbr restore RESTORE_SERVICE_NAME BACKUP_ID


Restoring service instance RESTORE_SERVICE_NAME in org $ORG / space $SPACE as
$USER...
This action will overwrite all data in this service instance.
Really restore the service instance RESTORE_SERVICE_NAME? [yN]: y
OK

5. Check the status of restore and confirm it is successful:

cf adbr get-status RESTORE_SERVICE_NAME

For example:

$ cf adbr get-status test_restore_svc

Getting status of service instance test_restore_svc in org my-org / space my-space as user...
[Tue Sep 16 22:29:24 UTC 2024] Status: Restore was successful

6. Determine if your app is bound to a service instance by running:


cf services

For example:

$ cf services
Getting services in org my-org / space my-space as user...
OK
name               service    plan                    bound apps   last operation
test_backup_svc    postgres   on-demand-postgres-db   my-app       create succeeded
test_restore_svc   postgres   on-demand-postgres-db                create succeeded

7. Update your app to bind to the new service instance by running:

cf bind-service APP-NAME RESTORE_SERVICE_NAME

8. Restage your app by running:

cf restage APP-NAME

Restore a service instance (HA)


If you are restoring an HA service instance, create a new service instance of the TLS NON-HA plan type and update the plan to HA after restoring the data.

1. Create a new service instance of plan type TLS NON-HA:

cf create-service postgres TLS_NON_HA_PLAN_NAME RESTORE_SERVICE_NAME

2. Complete steps 3, 4, and 5 in the Restore a service instance (NON-HA and TLS NON-HA) section above.

3. Once the restore is successful, upgrade the restored service instance to the HA plan:

cf update-service RESTORE_SERVICE_NAME -p HA_PLAN_NAME

4. Make sure to bind the app, if any, as shown in steps 7 and 8 of the Restore a service instance (NON-HA and TLS NON-HA) section.
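Putting the HA flow together, a condensed sketch of the commands above (plan and service names are placeholders):

```
# Restore into a TLS NON-HA instance first.
cf create-service postgres TLS_NON_HA_PLAN_NAME RESTORE_SERVICE_NAME
cf adbr restore RESTORE_SERVICE_NAME BACKUP_ID
cf adbr get-status RESTORE_SERVICE_NAME     # wait for "Restore was successful"

# Then upgrade the restored instance to the HA plan.
cf update-service RESTORE_SERVICE_NAME -p HA_PLAN_NAME

# Finally, rebind and restage any apps that used the old instance.
cf bind-service APP-NAME RESTORE_SERVICE_NAME
cf restage APP-NAME
```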

Stream Incremental Backup and Restore


This topic describes how to manually perform incremental backup and restore for a running Postgres
service instance using BOSH errands.

Prerequisite: Make sure to configure the S3 backup values during tile configuration, to provide a backup store.

Take incremental Backup


1. Make sure that the service instance to be backed up is created and running:

cf services | grep BACKUP_SERVICE_NAME

2. Get the service instance deployment name by fetching its GUID:

cf service BACKUP_SERVICE_NAME --guid

Where BACKUP_SERVICE_NAME is the service instance you are backing up. The deployment name for the service instance is then service-instance_GUID_FETCHED. (A scripted version of steps 2 and 3 appears after this procedure.)

For example: service-instance_6053e3c4-0714-4982-ac01-d9c5d49418e0

3. Trigger the incremental backup using the postgres-stream-incremental-backup errand:

bosh -d service-instance_GUID run-errand postgres-stream-incremental-backup

Wait for the errand to succeed.

For example:

bosh -d service-instance_aa845d5d-a631-4a28-a69d-db400cff4212 run-errand postgres-stream-incremental-backup

Using environment '10.10.0.6' as client 'ops_manager'

Using deployment 'service-instance_aa845d5d-a631-4a28-a69d-db400cff4212'

Task 92

Task 92 | 11:24:44 | Preparing deployment: Preparing deployment


Task 92 | 11:24:45 | Warning: Executing errand on multiple instances in paralle
l. Use the `--instance` flag to run the errand on a single instance.
Task 92 | 11:24:45 | Preparing deployment: Preparing deployment (00:00:01)
Task 92 | 11:24:45 | Running errand: postgres-instance/272907cc-9660-41d8-8de8-
00ad754529c5 (0) (00:00:14)
Task 92 | 11:24:59 | Fetching logs for postgres-instance/272907cc-9660-41d8-8de
8-00ad754529c5 (0): Finding and packing log files (00:00:01)
Task 92 | 11:25:00 | Running errand: postgres-instance/fa2a1a9f-ea23-42f9-9d2d-
ed0c328a66f0 (1) (00:00:01)
Task 92 | 11:25:01 | Fetching logs for postgres-instance/fa2a1a9f-ea23-42f9-9d2
d-ed0c328a66f0 (1): Finding and packing log files (00:00:01)

Task 92 Started Wed Feb 5 11:24:44 UTC 2025


Task 92 Finished Wed Feb 5 11:25:02 UTC 2025
Task 92 Duration 00:00:18
Task 92 done

Instance postgres-instance/272907cc-9660-41d8-8de8-00ad754529c5
Exit Code 0
Stdout [20250205T11:24:45] Started postgres-stream-incremental-backup erran
d...
[20250205T11:24:45] Upload selector: S3 Backups
[20250205T11:24:45] In case of HA setup, backup will be taken only f
or the primary node
This is primary/leader node and in healthy state.
[20250205T11:24:45] Initiating incremental backup process on this pr
imary node of HA setup.

2025-02-05 11:24:45.936 P00 INFO: backup command begin 2.51: --con
fig=/var/vcap/store/pgbackrest/pgbackrest.conf --exec-id=9876-a8647522 --log-le
vel-console=info --log-level-file=info --log-path=/var/vcap/sys/log/pgbackrest
--pg1-path=/var/vcap/store/postgres/vmware-postgres --pg1-user=vcap --process-m
ax=2 --repo1-path=/test-shabir --repo1-retention-full=2 --repo1-s3-bucket=pg-ti
le-backup-test1 --repo1-s3-endpoint=s3.us-east-1.amazonaws.com --repo1-s3-key=<
redacted> --repo1-s3-key-secret=<redacted> --repo1-s3-region=us-east-1 --repo1-
s3-uri-style=host --repo1-type=s3 --stanza=service-instance_aa845d5d-a631-4a28-
a69d-db400cff4212 --start-fast
2025-02-05 11:24:47.396 P00 INFO: last backup label = 20250205-112
229F, version = 2.51
2025-02-05 11:24:47.396 P00 INFO: execute non-exclusive backup sta
rt: backup begins after the requested immediate checkpoint completes
2025-02-05 11:24:48.097 P00 INFO: backup start archive = 000000010
000000000000007, lsn = 0/7000028
2025-02-05 11:24:48.097 P00 INFO: check archive for prior segment
000000010000000000000006
2025-02-05 11:24:53.576 P00 INFO: execute non-exclusive backup sto
p and wait for all WAL segments to archive
2025-02-05 11:24:53.776 P00 INFO: backup stop archive = 0000000100
00000000000007, lsn = 0/7000138
2025-02-05 11:24:53.863 P00 INFO: check archive for segment(s) 000
000010000000000000007:000000010000000000000007
2025-02-05 11:24:56.547 P00 INFO: new backup label = 20250205-1122
29F_20250205-112447I
2025-02-05 11:24:58.021 P00 INFO: incr backup size = 8.3KB, file t
otal = 970
2025-02-05 11:24:58.021 P00 INFO: backup command end: completed su
ccessfully (12087ms)
2025-02-05 11:24:58.021 P00 INFO: expire command begin 2.51: --con
fig=/var/vcap/store/pgbackrest/pgbackrest.conf --exec-id=9876-a8647522 --log-le
vel-console=info --log-level-file=info --log-path=/var/vcap/sys/log/pgbackrest
--repo1-path=/test-shabir --repo1-retention-full=2 --repo1-s3-bucket=pg-tile-ba
ckup-test1 --repo1-s3-endpoint=s3.us-east-1.amazonaws.com --repo1-s3-key=<redac
ted> --repo1-s3-key-secret=<redacted> --repo1-s3-region=us-east-1 --repo1-s3-ur
i-style=host --repo1-type=s3 --stanza=service-instance_aa845d5d-a631-4a28-a69d-
db400cff4212
2025-02-05 11:24:58.577 P00 INFO: expire command end: completed su
ccessfully (556ms)
Incremental backup completed sucesfully for service instance: servic
e-instance_aa845d5d-a631-4a28-a69d-db400cff4212.

Stderr -

Instance postgres-instance/fa2a1a9f-ea23-42f9-9d2d-ed0c328a66f0
Exit Code 0
Stdout [20250205T11:25:00] Started postgres-stream-incremental-backup erran
d...
[20250205T11:25:00] Upload selector: S3 Backups
[20250205T11:25:00] In case of HA setup, backup will be taken only f
or the primary node
This node is secondary/replica and in healthy state.
[20250205T11:25:00] Skipping backup process on secondry node of HA s
etup. Exiting.

Stderr -

2 errand(s)


Succeeded
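As mentioned in step 2, the GUID lookup and the errand can be combined into a short script. This is a minimal sketch, assuming the cf and bosh CLIs are already logged in and targeting the correct environment:

```
# Derive the BOSH deployment name from the service instance GUID,
# then trigger the incremental backup errand.
GUID="$(cf service BACKUP_SERVICE_NAME --guid)"
bosh -d "service-instance_${GUID}" run-errand postgres-stream-incremental-backup
```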

Restore an incremental backup (NON-HA and TLS NON-HA)
When restoring a service instance, you create a new service instance and then restore the backup to it.
Finally, you rebind and restage apps that use the service instance.

1. Make sure you have successfully completed the backup of the service.

2. Create a new service instance of the same plan (except HA):

cf create-service postgres POSTGRES_PLAN RESTORE_SERVICE_NAME

Where POSTGRES_PLAN is the same plan as that of the backup service instance you are restoring.

3. SSH to the service instance VM by using the BOSH command-line interface (CLI). First, get the GUID of the instance you are restoring to:

cf service RESTORE_SERVICE_NAME --guid

Where RESTORE_SERVICE_NAME is the service instance you are restoring to. The deployment name for the service instance is then service-instance_RESTORE_SERVICE_GUID.

For example: service-instance_6053e3c4-0714-4982-ac01-d9c5d49418e0

bosh -d service-instance_RESTORE_SERVICE_GUID ssh

4. In the pgbackrest.conf file, replace the service-instance name with the deployment name of the backup service instance that you want to restore. (A condensed, scripted version of steps 4 through 8 appears after this procedure.)

Before updating the file, make a copy of it:

cp /var/vcap/store/pgbackrest/pgbackrest.conf /tmp/pgbackrest.conf

Then edit the copy:

sudo vim /tmp/pgbackrest.conf

[global]
#S3 configuration
repo1-type=s3
repo1-s3-key=AKIA2***********
repo1-s3-key-secret=EweMcMs2F1noh***********
repo1-s3-endpoint=s3.us-east-1.amazonaws.com
repo1-s3-region=us-east-1
repo1-s3-bucket=pg-tile-backup-test1
repo1-path=/pg-test-2
repo1-retention-full=2
repo1-s3-uri-style=host

#configure logs
log-level-console=info
log-level-file=debug
log-path=/var/vcap/sys/log/pgbackrest


#other settings
archive-async=y
spool-path=/var/vcap/store/pgbackrest/pgbackrest_spool_dir
start-fast=y
# configure parallelism
process-max=2

#Backup configuration for database cluster


[service-instance_BACKUP_SERVICE_GUID] ##Update this value
pg1-path=/var/vcap/store/postgres/vmware-postgres
pg1-user=vcap

5. Stop the postgres process before performing restore:

sudo monit stop postgres

6. Run the restore command, once postgres is stopped:

Switch to vcap user:

sudo su - vcap

Run the restore command:

export LD_LIBRARY_PATH="/var/vcap/packages/vmware-postgres/lib/x86_64-linux-gnu" && /var/vcap/packages/pgbackrest/bin/pgbackrest restore --delta --stanza="service-instance_BACKUP_SERVICE_GUID" --config="/tmp/pgbackrest.conf"

7. Restart the postgres service:

Exit as the vcap user and run:

sudo monit start postgres

8. Upgrade the stanza:

Switch back to vcap user:

sudo su - vcap

Run the upgrade stanza command:

export LD_LIBRARY_PATH="/var/vcap/packages/vmware-postgres/lib/x86_64-linux-gnu" && /var/vcap/packages/pgbackrest/bin/pgbackrest stanza-upgrade --stanza="service-instance_RESTORE_SERVICE_GUID" --config="/var/vcap/store/pgbackrest/pgbackrest.conf"

9. Determine if your app is bound to a service instance by running:

cf services

For example:

$ cf services
Getting services in org my-org / space my-space as user...

OK
name               service    plan                    bound apps   last operation
test_backup_svc    postgres   on-demand-postgres-db   my-app       create succeeded
test_restore_svc   postgres   on-demand-postgres-db                create succeeded

10. Update your app to bind to the new service instance by running:

cf bind-service APP-NAME RESTORE_SERVICE_NAME

11. Restage your app by running:

cf restage APP-NAME
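For reference, steps 4 through 8 condense into the following on-VM sketch. Run it inside the bosh ssh session; the sed pattern assumes the stanza section currently contains the restore deployment name, and both GUIDs are placeholders for the values described above:

```
# Step 4: work on a copy of pgbackrest.conf and point its stanza
# section at the backup deployment that you want to restore.
cp /var/vcap/store/pgbackrest/pgbackrest.conf /tmp/pgbackrest.conf
sudo sed -i 's/service-instance_RESTORE_SERVICE_GUID/service-instance_BACKUP_SERVICE_GUID/' /tmp/pgbackrest.conf

# Step 5: stop Postgres before restoring.
sudo monit stop postgres

# Step 6: restore as the vcap user.
sudo su - vcap -c 'export LD_LIBRARY_PATH="/var/vcap/packages/vmware-postgres/lib/x86_64-linux-gnu" && /var/vcap/packages/pgbackrest/bin/pgbackrest restore --delta --stanza="service-instance_BACKUP_SERVICE_GUID" --config="/tmp/pgbackrest.conf"'

# Step 7: restart Postgres.
sudo monit start postgres

# Step 8: upgrade the stanza to the restore deployment name.
sudo su - vcap -c 'export LD_LIBRARY_PATH="/var/vcap/packages/vmware-postgres/lib/x86_64-linux-gnu" && /var/vcap/packages/pgbackrest/bin/pgbackrest stanza-upgrade --stanza="service-instance_RESTORE_SERVICE_GUID" --config="/var/vcap/store/pgbackrest/pgbackrest.conf"'
```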

Restore an incremental backup (HA)


If you are restoring an HA service instance, create a new service instance of the TLS NON-HA plan type and update the plan to HA after restoring the data.

1. Create a new service instance of plan type TLS NON-HA:

cf create-service postgres TLS_NON_HA_PLAN RESTORE_SERVICE_NAME

2. Complete steps 3 to 10 in Restore an incremental backup (NON-HA and TLS NON-HA) above.

3. Once the restore is successful, upgrade the restored service instance to the HA plan:

cf update-service RESTORE_SERVICE_NAME -p HA_PLAN_NAME

4. Bind and restage the app, if any, as shown in steps 10 and 11 in Restore an incremental backup (NON-HA and TLS NON-HA).

Rotating Postgres Server Certificates

Tile version 1.1.3


This topic tells you how to access BOSH CredHub, check expiration dates, and rotate certificates when
using Tanzu for Postgres on Cloud Foundry.

To rotate the Services TLS CA and its leaf certificates, use one of the following procedures:

Tanzu Operations Manager v3.0: See Rotate the Services TLS CA and its leaf certificates.

Tanzu Operations Manager v2.10: See Rotate the Services TLS CA and its leaf certificates.

Tanzu for Postgres on Cloud Foundry v1.1.3 and later is compatible with CredHub Maestro.

Tile version 1.1.0, 1.1.1, and 1.1.2


Tanzu for Postgres on Cloud Foundry v1.1.0, v1.1.1, and v1.1.2 do not support the recommended certificate
rotation procedure for Tanzu Platform for Cloud Foundry tiles (which uses Credhub Maestro).


There is, however, a workaround to rotate your deployment's certificates using Maestro. This workaround adds a step to the recommended procedure that uses CredHub Maestro.

Step 1: No change.

Step 2: No change.

Step 3: Workaround step: Get the newly generated CA certificate from CredHub into a local file by running:

credhub get --output-json -n /services/tls_ca --versions=2 | jq --raw-output .versions[0].value.ca > new_cert

In the Ops Manager UI, add the contents of this file to the Trusted Certificates text box in the BOSH Director tile's Security section.

From this point, all the remaining steps are identical to the recommended steps using CredHub Maestro.

The remaining steps start from Step 3 in the recommended procedure page: Redeploy the affected services: All services that use certs signed by /services/tls_ca.

Observability
This section provides information about how to configure and consume observability features of VMware
Tanzu for Postgres on Cloud Foundry Tile.

Metric Exports
This topic describes how to consume Postgres metrics exported by Tanzu for Postgres on Cloud Foundry
component VMs.

Loggregator Firehose Endpoint


Postgres metrics are available through the Loggregator Firehose endpoint. To view them, you can install the Firehose plug-in for the cf CLI. Then run the following command to stream the metrics to your console output:

cf nozzle --debug --filter ValueMetric

Metrics are provided in the following format:

origin:"postgres" eventType:ValueMetric timestamp:1699174329855507352


deployment:"service-instance_7a10aff2-e603-44ec-abd2-9a71adb0ac3d" job:"postgres-
instance" index:"bc5be5d8-3ab5-4f6c-8fa0-a4b3be1f646d" ip:"10.0.8.5" tags:
<key:"instance_id" value:"bc5be5d8-3ab5-4f6c-8fa0-a4b3be1f646d" > tags:<key:"source_id"
value:"7a10aff2-e603-44ec-abd2-9a71adb0ac3d" > valueMetric:<name:"pg_up" value:1
unit:"metric" >

Metrics Polling Interval


The metrics polling interval defaults to 30 seconds. You can change this by navigating to the Metrics configuration page in Tanzu Operations Manager and entering a new value in the Metrics polling interval text box (minimum: 10 seconds).

Prometheus Endpoint
VMware recommends using the prometheus-boshrelease release on GitHub. The default metrics endpoint is http://prometheus.<sys sub-domain set in your Cloud Foundry installation>:9187/metrics. This is configurable in the deployment manifest of prometheus-boshrelease. If you do not explicitly provide the secret vars needed by the BOSH release, prometheus-boshrelease creates the file ./tmp/deployment-vars.yml, which contains the secrets alertmanager_password, grafana_password, grafana_secret_key, postgres_grafana_password, and prometheus_password.

If an existing Prometheus instance needs to collect metrics from the Postgres component VMs, configure it to scrape port 9187 on each component VM. If the Prometheus instance is outside the CF network of the Postgres deployment, create a CF route to this port for each VM you want to collect metrics from.
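As a quick reachability check, you can query a component VM's metrics port directly. This is a minimal sketch, assuming network access to the VM (the IP below is taken from the Firehose sample output earlier in this topic):

```
curl -s http://10.0.8.5:9187/metrics | grep '^pg_up'
```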

| Metric | Type | Description |
| --- | --- | --- |
| pg_database_size_bytes | gauge | Disk space used by the database. |
| pg_exporter_last_scrape_duration_seconds | gauge | Duration of the last scrape of metrics from PostgreSQL. |
| pg_exporter_last_scrape_error | gauge | Whether the last scrape of metrics from PostgreSQL resulted in an error (1 for error, 0 for success). |
| pg_exporter_scrapes_total | counter | Total number of times PostgreSQL was scraped for metrics. |
| pg_locks_count | gauge | Number of locks. |
| pg_replication_is_replica | gauge | Indicates whether the server is a replica. |
| pg_replication_lag_seconds | gauge | Replication lag behind master in seconds. |
| pg_scrape_collector_duration_seconds | gauge | postgres_exporter: Duration of a collector scrape. |
| pg_scrape_collector_success | gauge | postgres_exporter: Whether a collector succeeded. |
| pg_settings_allow_in_place_tablespaces | gauge | Server Parameter: allow_in_place_tablespaces |
| pg_settings_allow_system_table_mods | gauge | Server Parameter: allow_system_table_mods |
| pg_settings_archive_timeout_seconds | gauge | Server Parameter: archive_timeout [Units converted to seconds.] |
| pg_settings_array_nulls | gauge | Server Parameter: array_nulls |
| pg_settings_authentication_timeout_seconds | gauge | Server Parameter: authentication_timeout [Units converted to seconds.] |
| pg_settings_autovacuum | gauge | Server Parameter: autovacuum |
| pg_settings_autovacuum_analyze_scale_factor | gauge | Server Parameter: autovacuum_analyze_scale_factor |
| pg_settings_autovacuum_analyze_threshold | gauge | Server Parameter: autovacuum_analyze_threshold |
| pg_settings_autovacuum_freeze_max_age | gauge | Server Parameter: autovacuum_freeze_max_age |
| pg_settings_autovacuum_max_workers | gauge | Server Parameter: autovacuum_max_workers |
| pg_settings_autovacuum_multixact_freeze_max_age | gauge | Server Parameter: autovacuum_multixact_freeze_max_age |
| pg_settings_autovacuum_naptime_seconds | gauge | Server Parameter: autovacuum_naptime [Units converted to seconds.] |
| pg_settings_autovacuum_vacuum_cost_delay_seconds | gauge | Server Parameter: autovacuum_vacuum_cost_delay [Units converted to seconds.] |
| pg_settings_autovacuum_vacuum_cost_limit | gauge | Server Parameter: autovacuum_vacuum_cost_limit |
| pg_settings_autovacuum_vacuum_insert_scale_factor | gauge | Server Parameter: autovacuum_vacuum_insert_scale_factor |
| pg_settings_autovacuum_vacuum_insert_threshold | gauge | Server Parameter: autovacuum_vacuum_insert_threshold |
| pg_settings_autovacuum_vacuum_scale_factor | gauge | Server Parameter: autovacuum_vacuum_scale_factor |
| pg_settings_autovacuum_vacuum_threshold | gauge | Server Parameter: autovacuum_vacuum_threshold |
| pg_settings_autovacuum_work_mem_bytes | gauge | Server Parameter: autovacuum_work_mem [Units converted to bytes.] |
| pg_settings_backend_flush_after_bytes | gauge | Server Parameter: backend_flush_after [Units converted to bytes.] |
| pg_settings_bgwriter_delay_seconds | gauge | Server Parameter: bgwriter_delay [Units converted to seconds.] |
| pg_settings_bgwriter_flush_after_bytes | gauge | Server Parameter: bgwriter_flush_after [Units converted to bytes.] |
| pg_settings_bgwriter_lru_maxpages | gauge | Server Parameter: bgwriter_lru_maxpages |
| pg_settings_bgwriter_lru_multiplier | gauge | Server Parameter: bgwriter_lru_multiplier |
| pg_settings_block_size | gauge | Server Parameter: block_size |
| pg_settings_bonjour | gauge | Server Parameter: bonjour |
| pg_settings_check_function_bodies | gauge | Server Parameter: check_function_bodies |
| pg_settings_checkpoint_completion_target | gauge | Server Parameter: checkpoint_completion_target |
| pg_settings_checkpoint_flush_after_bytes | gauge | Server Parameter: checkpoint_flush_after [Units converted to bytes.] |
| pg_settings_checkpoint_timeout_seconds | gauge | Server Parameter: checkpoint_timeout [Units converted to seconds.] |
| pg_settings_checkpoint_warning_seconds | gauge | Server Parameter: checkpoint_warning [Units converted to seconds.] |
| pg_settings_client_connection_check_interval_seconds | gauge | Server Parameter: client_connection_check_interval [Units converted to seconds.] |
| pg_settings_commit_delay | gauge | Server Parameter: commit_delay |
| pg_settings_commit_siblings | gauge | Server Parameter: commit_siblings |
| pg_settings_cpu_index_tuple_cost | gauge | Server Parameter: cpu_index_tuple_cost |
| pg_settings_cpu_operator_cost | gauge | Server Parameter: cpu_operator_cost |
| pg_settings_cpu_tuple_cost | gauge | Server Parameter: cpu_tuple_cost |
| pg_settings_cursor_tuple_fraction | gauge | Server Parameter: cursor_tuple_fraction |
| pg_settings_data_checksums | gauge | Server Parameter: data_checksums |
| pg_settings_data_directory_mode | gauge | Server Parameter: data_directory_mode |
| pg_settings_data_sync_retry | gauge | Server Parameter: data_sync_retry |
| pg_settings_db_user_namespace | gauge | Server Parameter: db_user_namespace |
| pg_settings_deadlock_timeout_seconds | gauge | Server Parameter: deadlock_timeout [Units converted to seconds.] |
| pg_settings_debug_assertions | gauge | Server Parameter: debug_assertions |
| pg_settings_debug_discard_caches | gauge | Server Parameter: debug_discard_caches |
| pg_settings_debug_pretty_print | gauge | Server Parameter: debug_pretty_print |
| pg_settings_debug_print_parse | gauge | Server Parameter: debug_print_parse |
| pg_settings_debug_print_plan | gauge | Server Parameter: debug_print_plan |
| pg_settings_debug_print_rewritten | gauge | Server Parameter: debug_print_rewritten |
| pg_settings_default_statistics_target | gauge | Server Parameter: default_statistics_target |
| pg_settings_default_transaction_deferrable | gauge | Server Parameter: default_transaction_deferrable |
| pg_settings_default_transaction_read_only | gauge | Server Parameter: default_transaction_read_only |
| pg_settings_effective_cache_size_bytes | gauge | Server Parameter: effective_cache_size [Units converted to bytes.] |
| pg_settings_effective_io_concurrency | gauge | Server Parameter: effective_io_concurrency |
| pg_settings_enable_async_append | gauge | Server Parameter: enable_async_append |
| pg_settings_enable_bitmapscan | gauge | Server Parameter: enable_bitmapscan |
| pg_settings_enable_gathermerge | gauge | Server Parameter: enable_gathermerge |
| pg_settings_enable_hashagg | gauge | Server Parameter: enable_hashagg |
| pg_settings_enable_hashjoin | gauge | Server Parameter: enable_hashjoin |
| pg_settings_enable_incremental_sort | gauge | Server Parameter: enable_incremental_sort |
| pg_settings_enable_indexonlyscan | gauge | Server Parameter: enable_indexonlyscan |
| pg_settings_enable_indexscan | gauge | Server Parameter: enable_indexscan |
| pg_settings_enable_material | gauge | Server Parameter: enable_material |
| pg_settings_enable_memoize | gauge | Server Parameter: enable_memoize |
| pg_settings_enable_mergejoin | gauge | Server Parameter: enable_mergejoin |
| pg_settings_enable_nestloop | gauge | Server Parameter: enable_nestloop |
| pg_settings_enable_parallel_append | gauge | Server Parameter: enable_parallel_append |
| pg_settings_enable_parallel_hash | gauge | Server Parameter: enable_parallel_hash |
| pg_settings_enable_partition_pruning | gauge | Server Parameter: enable_partition_pruning |
| pg_settings_enable_partitionwise_aggregate | gauge | Server Parameter: enable_partitionwise_aggregate |
| pg_settings_enable_partitionwise_join | gauge | Server Parameter: enable_partitionwise_join |
| pg_settings_enable_seqscan | gauge | Server Parameter: enable_seqscan |
| pg_settings_enable_sort | gauge | Server Parameter: enable_sort |
| pg_settings_enable_tidscan | gauge | Server Parameter: enable_tidscan |
| pg_settings_escape_string_warning | gauge | Server Parameter: escape_string_warning |
| pg_settings_exit_on_error | gauge | Server Parameter: exit_on_error |
| pg_settings_extra_float_digits | gauge | Server Parameter: extra_float_digits |
| pg_settings_from_collapse_limit | gauge | Server Parameter: from_collapse_limit |
| pg_settings_fsync | gauge | Server Parameter: fsync |
| pg_settings_full_page_writes | gauge | Server Parameter: full_page_writes |
| pg_settings_geqo | gauge | Server Parameter: geqo |
| pg_settings_geqo_effort | gauge | Server Parameter: geqo_effort |
| pg_settings_geqo_generations | gauge | Server Parameter: geqo_generations |
| pg_settings_geqo_pool_size | gauge | Server Parameter: geqo_pool_size |
| pg_settings_geqo_seed | gauge | Server Parameter: geqo_seed |
| pg_settings_geqo_selection_bias | gauge | Server Parameter: geqo_selection_bias |
| pg_settings_geqo_threshold | gauge | Server Parameter: geqo_threshold |
| pg_settings_gin_fuzzy_search_limit | gauge | Server Parameter: gin_fuzzy_search_limit |
| pg_settings_gin_pending_list_limit_bytes | gauge | Server Parameter: gin_pending_list_limit [Units converted to bytes.] |
| pg_settings_hash_mem_multiplier | gauge | Server Parameter: hash_mem_multiplier |
| pg_settings_hot_standby | gauge | Server Parameter: hot_standby |
| pg_settings_hot_standby_feedback | gauge | Server Parameter: hot_standby_feedback |
| pg_settings_huge_page_size_bytes | gauge | Server Parameter: huge_page_size [Units converted to bytes.] |
| pg_settings_idle_in_transaction_session_timeout_seconds | gauge | Server Parameter: idle_in_transaction_session_timeout [Units converted to seconds.] |
| pg_settings_idle_session_timeout_seconds | gauge | Server Parameter: idle_session_timeout [Units converted to seconds.] |
| pg_settings_ignore_checksum_failure | gauge | Server Parameter: ignore_checksum_failure |
| pg_settings_ignore_invalid_pages | gauge | Server Parameter: ignore_invalid_pages |
| pg_settings_ignore_system_indexes | gauge | Server Parameter: ignore_system_indexes |
| pg_settings_in_hot_standby | gauge | Server Parameter: in_hot_standby |
| pg_settings_integer_datetimes | gauge | Server Parameter: integer_datetimes |
| pg_settings_jit | gauge | Server Parameter: jit |
| pg_settings_jit_above_cost | gauge | Server Parameter: jit_above_cost |
| pg_settings_jit_debugging_support | gauge | Server Parameter: jit_debugging_support |
| pg_settings_jit_dump_bitcode | gauge | Server Parameter: jit_dump_bitcode |
| pg_settings_jit_expressions | gauge | Server Parameter: jit_expressions |
| pg_settings_jit_inline_above_cost | gauge | Server Parameter: jit_inline_above_cost |
| pg_settings_jit_optimize_above_cost | gauge | Server Parameter: jit_optimize_above_cost |
| pg_settings_jit_profiling_support | gauge | Server Parameter: jit_profiling_support |
| pg_settings_jit_tuple_deforming | gauge | Server Parameter: jit_tuple_deforming |
| pg_settings_join_collapse_limit | gauge | Server Parameter: join_collapse_limit |
| pg_settings_krb_caseins_users | gauge | Server Parameter: krb_caseins_users |
| pg_settings_lo_compat_privileges | gauge | Server Parameter: lo_compat_privileges |
| pg_settings_lock_timeout_seconds | gauge | Server Parameter: lock_timeout [Units converted to seconds.] |
| pg_settings_log_autovacuum_min_duration_seconds | gauge | Server Parameter: log_autovacuum_min_duration [Units converted to seconds.] |
| pg_settings_log_checkpoints | gauge | Server Parameter: log_checkpoints |
| pg_settings_log_connections | gauge | Server Parameter: log_connections |
| pg_settings_log_disconnections | gauge | Server Parameter: log_disconnections |
| pg_settings_log_duration | gauge | Server Parameter: log_duration |
| pg_settings_log_executor_stats | gauge | Server Parameter: log_executor_stats |
| pg_settings_log_file_mode | gauge | Server Parameter: log_file_mode |
| pg_settings_log_hostname | gauge | Server Parameter: log_hostname |
| pg_settings_log_lock_waits | gauge | Server Parameter: log_lock_waits |
| pg_settings_log_min_duration_sample_seconds | gauge | Server Parameter: log_min_duration_sample [Units converted to seconds.] |
| pg_settings_log_min_duration_statement_seconds | gauge | Server Parameter: log_min_duration_statement [Units converted to seconds.] |
| pg_settings_log_parameter_max_length_bytes | gauge | Server Parameter: log_parameter_max_length [Units converted to bytes.] |
| pg_settings_log_parameter_max_length_on_error_bytes | gauge | Server Parameter: log_parameter_max_length_on_error [Units converted to bytes.] |
| pg_settings_log_parser_stats | gauge | Server Parameter: log_parser_stats |
| pg_settings_log_planner_stats | gauge | Server Parameter: log_planner_stats |
| pg_settings_log_recovery_conflict_waits | gauge | Server Parameter: log_recovery_conflict_waits |
| pg_settings_log_replication_commands | gauge | Server Parameter: log_replication_commands |
| pg_settings_log_rotation_age_seconds | gauge | Server Parameter: log_rotation_age [Units converted to seconds.] |
| pg_settings_log_rotation_size_bytes | gauge | Server Parameter: log_rotation_size [Units converted to bytes.] |
| pg_settings_log_startup_progress_interval_seconds | gauge | Server Parameter: log_startup_progress_interval [Units converted to seconds.] |
| pg_settings_log_statement_sample_rate | gauge | Server Parameter: log_statement_sample_rate |
| pg_settings_log_statement_stats | gauge | Server Parameter: log_statement_stats |
| pg_settings_log_temp_files_bytes | gauge | Server Parameter: log_temp_files [Units converted to bytes.] |
| pg_settings_log_transaction_sample_rate | gauge | Server Parameter: log_transaction_sample_rate |
| pg_settings_log_truncate_on_rotation | gauge | Server Parameter: log_truncate_on_rotation |
| pg_settings_logging_collector | gauge | Server Parameter: logging_collector |
| pg_settings_logical_decoding_work_mem_bytes | gauge | Server Parameter: logical_decoding_work_mem [Units converted to bytes.] |
| pg_settings_maintenance_io_concurrency | gauge | Server Parameter: maintenance_io_concurrency |
| pg_settings_maintenance_work_mem_bytes | gauge | Server Parameter: maintenance_work_mem [Units converted to bytes.] |
| pg_settings_max_connections | gauge | Server Parameter: max_connections |
| pg_settings_max_files_per_process | gauge | Server Parameter: max_files_per_process |
| pg_settings_max_function_args | gauge | Server Parameter: max_function_args |
| pg_settings_max_identifier_length | gauge | Server Parameter: max_identifier_length |
| pg_settings_max_index_keys | gauge | Server Parameter: max_index_keys |
| pg_settings_max_locks_per_transaction | gauge | Server Parameter: max_locks_per_transaction |
| pg_settings_max_logical_replication_workers | gauge | Server Parameter: max_logical_replication_workers |
| pg_settings_max_parallel_maintenance_workers | gauge | Server Parameter: max_parallel_maintenance_workers |
| pg_settings_max_parallel_workers | gauge | Server Parameter: max_parallel_workers |
| pg_settings_max_parallel_workers_per_gather | gauge | Server Parameter: max_parallel_workers_per_gather |
| pg_settings_max_pred_locks_per_page | gauge | Server Parameter: max_pred_locks_per_page |
| pg_settings_max_pred_locks_per_relation | gauge | Server Parameter: max_pred_locks_per_relation |
| pg_settings_max_pred_locks_per_transaction | gauge | Server Parameter: max_pred_locks_per_transaction |
| pg_settings_max_prepared_transactions | gauge | Server Parameter: max_prepared_transactions |
| pg_settings_max_replication_slots | gauge | Server Parameter: max_replication_slots |
| pg_settings_max_slot_wal_keep_size_bytes | gauge | Server Parameter: max_slot_wal_keep_size [Units converted to bytes.] |
| pg_settings_max_stack_depth_bytes | gauge | Server Parameter: max_stack_depth [Units converted to bytes.] |
| pg_settings_max_standby_archive_delay_seconds | gauge | Server Parameter: max_standby_archive_delay [Units converted to seconds.] |
| pg_settings_max_standby_streaming_delay_seconds | gauge | Server Parameter: max_standby_streaming_delay [Units converted to seconds.] |
| pg_settings_max_sync_workers_per_subscription | gauge | Server Parameter: max_sync_workers_per_subscription |
| pg_settings_max_wal_senders | gauge | Server Parameter: max_wal_senders |
| pg_settings_max_wal_size_bytes | gauge | Server Parameter: max_wal_size [Units converted to bytes.] |
| pg_settings_max_worker_processes | gauge | Server Parameter: max_worker_processes |
| pg_settings_min_dynamic_shared_memory_bytes | gauge | Server Parameter: min_dynamic_shared_memory [Units converted to bytes.] |
| pg_settings_min_parallel_index_scan_size_bytes | gauge | Server Parameter: min_parallel_index_scan_size [Units converted to bytes.] |
| pg_settings_min_parallel_table_scan_size_bytes | gauge | Server Parameter: min_parallel_table_scan_size [Units converted to bytes.] |
| pg_settings_min_wal_size_bytes | gauge | Server Parameter: min_wal_size [Units converted to bytes.] |
| pg_settings_old_snapshot_threshold_seconds | gauge | Server Parameter: old_snapshot_threshold [Units converted to seconds.] |
| pg_settings_parallel_leader_participation | gauge | Server Parameter: parallel_leader_participation |
| pg_settings_parallel_setup_cost | gauge | Server Parameter: parallel_setup_cost |
| pg_settings_parallel_tuple_cost | gauge | Server Parameter: parallel_tuple_cost |
| pg_settings_pg_stat_statements_max | gauge | Server Parameter: pg_stat_statements.max |
| pg_settings_pg_stat_statements_save | gauge | Server Parameter: pg_stat_statements.save |
| pg_settings_pg_stat_statements_track_planning | gauge | Server Parameter: pg_stat_statements.track_planning |
| pg_settings_pg_stat_statements_track_utility | gauge | Server Parameter: pg_stat_statements.track_utility |
| pg_settings_port | gauge | Server Parameter: port |
| pg_settings_post_auth_delay_seconds | gauge | Server Parameter: post_auth_delay [Units converted to seconds.] |
| pg_settings_pre_auth_delay_seconds | gauge | Server Parameter: pre_auth_delay [Units converted to seconds.] |
| pg_settings_quote_all_identifiers | gauge | Server Parameter: quote_all_identifiers |
| pg_settings_random_page_cost | gauge | Server Parameter: random_page_cost |
| pg_settings_recovery_min_apply_delay_seconds | gauge | Server Parameter: recovery_min_apply_delay [Units converted to seconds.] |
| pg_settings_recovery_target_inclusive | gauge | Server Parameter: recovery_target_inclusive |
| pg_settings_recursive_worktable_factor | gauge | Server Parameter: recursive_worktable_factor |
| pg_settings_remove_temp_files_after_crash | gauge | Server Parameter: remove_temp_files_after_crash |
| pg_settings_restart_after_crash | gauge | Server Parameter: restart_after_crash |
| pg_settings_row_security | gauge | Server Parameter: row_security |
| pg_settings_segment_size_bytes | gauge | Server Parameter: segment_size [Units converted to bytes.] |
| pg_settings_seq_page_cost | gauge | Server Parameter: seq_page_cost |
| pg_settings_server_version_num | gauge | Server Parameter: server_version_num |
| pg_settings_shared_buffers_bytes | gauge | Server Parameter: shared_buffers [Units converted to bytes.] |
| pg_settings_shared_memory_size_bytes | gauge | Server Parameter: shared_memory_size [Units converted to bytes.] |
| pg_settings_shared_memory_size_in_huge_pages | gauge | Server Parameter: shared_memory_size_in_huge_pages |
| pg_settings_ssl | gauge | Server Parameter: ssl |
| pg_settings_ssl_passphrase_command_supports_reload | gauge | Server Parameter: ssl_passphrase_command_supports_reload |
| pg_settings_ssl_prefer_server_ciphers | gauge | Server Parameter: ssl_prefer_server_ciphers |
| pg_settings_standard_conforming_strings | gauge | Server Parameter: standard_conforming_strings |
| pg_settings_statement_timeout_seconds | gauge | Server Parameter: statement_timeout [Units converted to seconds.] |
| pg_settings_superuser_reserved_connections | gauge | Server Parameter: superuser_reserved_connections |
| pg_settings_synchronize_seqscans | gauge | Server Parameter: synchronize_seqscans |
| pg_settings_syslog_sequence_numbers | gauge | Server Parameter: syslog_sequence_numbers |
| pg_settings_syslog_split_messages | gauge | Server Parameter: syslog_split_messages |
| pg_settings_tcp_keepalives_count | gauge | Server Parameter: tcp_keepalives_count |
| pg_settings_tcp_keepalives_idle_seconds | gauge | Server Parameter: tcp_keepalives_idle [Units converted to seconds.] |
| pg_settings_tcp_keepalives_interval_seconds | gauge | Server Parameter: tcp_keepalives_interval [Units converted to seconds.] |
| pg_settings_tcp_user_timeout_seconds | gauge | Server Parameter: tcp_user_timeout [Units converted to seconds.] |
| pg_settings_temp_buffers_bytes | gauge | Server Parameter: temp_buffers [Units converted to bytes.] |
| pg_settings_temp_file_limit_bytes | gauge | Server Parameter: temp_file_limit [Units converted to bytes.] |
| pg_settings_trace_notify | gauge | Server Parameter: trace_notify |
| pg_settings_trace_sort | gauge | Server Parameter: trace_sort |
| pg_settings_track_activities | gauge | Server Parameter: track_activities |
| pg_settings_track_activity_query_size_bytes | gauge | Server Parameter: track_activity_query_size [Units converted to bytes.] |
| pg_settings_track_commit_timestamp | gauge | Server Parameter: track_commit_timestamp |
| pg_settings_track_counts | gauge | Server Parameter: track_counts |
| pg_settings_track_io_timing | gauge | Server Parameter: track_io_timing |
| pg_settings_track_wal_io_timing | gauge | Server Parameter: track_wal_io_timing |
| pg_settings_transaction_deferrable | gauge | Server Parameter: transaction_deferrable |
| pg_settings_transaction_read_only | gauge | Server Parameter: transaction_read_only |
| pg_settings_transform_null_equals | gauge | Server Parameter: transform_null_equals |
| pg_settings_unix_socket_permissions | gauge | Server Parameter: unix_socket_permissions |
| pg_settings_update_process_title | gauge | Server Parameter: update_process_title |
| pg_settings_vacuum_cost_delay_seconds | gauge | Server Parameter: vacuum_cost_delay [Units converted to seconds.] |
| pg_settings_vacuum_cost_limit | gauge | Server Parameter: vacuum_cost_limit |
| pg_settings_vacuum_cost_page_dirty | gauge | Server Parameter: vacuum_cost_page_dirty |
| pg_settings_vacuum_cost_page_hit | gauge | Server Parameter: vacuum_cost_page_hit |
| pg_settings_vacuum_cost_page_miss | gauge | Server Parameter: vacuum_cost_page_miss |
| pg_settings_vacuum_defer_cleanup_age | gauge | Server Parameter: vacuum_defer_cleanup_age |
| pg_settings_vacuum_failsafe_age | gauge | Server Parameter: vacuum_failsafe_age |
| pg_settings_vacuum_freeze_min_age | gauge | Server Parameter: vacuum_freeze_min_age |
| pg_settings_vacuum_freeze_table_age | gauge | Server Parameter: vacuum_freeze_table_age |
| pg_settings_vacuum_multixact_failsafe_age | gauge | Server Parameter: vacuum_multixact_failsafe_age |
| pg_settings_vacuum_multixact_freeze_min_age | gauge | Server Parameter: vacuum_multixact_freeze_min_age |
| pg_settings_vacuum_multixact_freeze_table_age | gauge | Server Parameter: vacuum_multixact_freeze_table_age |
| pg_settings_wal_block_size | gauge | Server Parameter: wal_block_size |
| pg_settings_wal_buffers_bytes | gauge | Server Parameter: wal_buffers [Units converted to bytes.] |
| pg_settings_wal_decode_buffer_size_bytes | gauge | Server Parameter: wal_decode_buffer_size [Units converted to bytes.] |
| pg_settings_wal_init_zero | gauge | Server Parameter: wal_init_zero |
| pg_settings_wal_keep_size_bytes | gauge | Server Parameter: wal_keep_size [Units converted to bytes.] |
| pg_settings_wal_log_hints | gauge | Server Parameter: wal_log_hints |
| pg_settings_wal_receiver_create_temp_slot | gauge | Server Parameter: wal_receiver_create_temp_slot |
| pg_settings_wal_receiver_status_interval_seconds | gauge | Server Parameter: wal_receiver_status_interval [Units converted to seconds.] |
| pg_settings_wal_receiver_timeout_seconds | gauge | Server Parameter: wal_receiver_timeout [Units converted to seconds.] |
| pg_settings_wal_recycle | gauge | Server Parameter: wal_recycle |
| pg_settings_wal_retrieve_retry_interval_seconds | gauge | Server Parameter: wal_retrieve_retry_interval [Units converted to seconds.] |
| pg_settings_wal_segment_size_bytes | gauge | Server Parameter: wal_segment_size [Units converted to bytes.] |
| pg_settings_wal_sender_timeout_seconds | gauge | Server Parameter: wal_sender_timeout [Units converted to seconds.] |
| pg_settings_wal_skip_threshold_bytes | gauge | Server Parameter: wal_skip_threshold [Units converted to bytes.] |
| pg_settings_wal_writer_delay_seconds | gauge | Server Parameter: wal_writer_delay [Units converted to seconds.] |
| pg_settings_wal_writer_flush_after_bytes | gauge | Server Parameter: wal_writer_flush_after [Units converted to bytes.] |
| pg_settings_work_mem_bytes | gauge | Server Parameter: work_mem [Units converted to bytes.] |
| pg_settings_zero_damaged_pages | gauge | Server Parameter: zero_damaged_pages |
| pg_stat_activity_count | gauge | Number of connections in this state. |
| pg_stat_activity_max_tx_duration | gauge | Max duration in seconds any active transaction has been running. |
| pg_stat_archiver_archived_count | counter | Number of WAL files that have been successfully archived. |
| pg_stat_archiver_failed_count | counter | Number of failed attempts for archiving WAL files. |
| pg_stat_archiver_last_archive_age | gauge | Time in seconds since last WAL segment was successfully archived. |
| pg_stat_bgwriter_buffers_alloc_total | counter | Number of buffers allocated. |
| pg_stat_bgwriter_buffers_backend_fsync_total | counter | Number of times a back end had to execute its own fsync call. Normally the background writer handles those even when the back end does its own write. |
| pg_stat_bgwriter_buffers_backend_total | counter | Number of buffers written directly by a back end. |
| pg_stat_bgwriter_buffers_checkpoint_total | counter | Number of buffers written during checkpoints. |
| pg_stat_bgwriter_buffers_clean_total | counter | Number of buffers written by the background writer. |
| pg_stat_bgwriter_checkpoint_sync_time_total | counter | Total amount of time that has been spent in the portion of checkpoint processing where files are synchronized to disk, in milliseconds. |
| pg_stat_bgwriter_checkpoint_write_time_total | counter | Total amount of time that has been spent in the portion of checkpoint processing where files are written to disk, in milliseconds. |
| pg_stat_bgwriter_checkpoints_req_total | counter | Number of requested checkpoints that have been performed. |
| pg_stat_bgwriter_checkpoints_timed_total | counter | Number of scheduled checkpoints that have been performed. |
| pg_stat_bgwriter_maxwritten_clean_total | counter | Number of times the background writer stopped a cleaning scan because it had written too many buffers. |
| pg_stat_bgwriter_stats_reset_total | counter | Time at which these statistics were last reset. |
| pg_stat_database_blk_read_time | counter | Time spent reading data file blocks by back ends in this database, in milliseconds. |
| pg_stat_database_blk_write_time | counter | Time spent writing data file blocks by back ends in this database, in milliseconds. |
| pg_stat_database_blks_hit | counter | Number of times disk blocks were found already in the buffer cache, so that a read was not necessary. This only includes hits in the PostgreSQL buffer cache, not the operating system's file system cache. |
| pg_stat_database_blks_read | counter | Number of disk blocks read in this database. |
| pg_stat_database_conflicts | counter | Number of queries canceled due to conflicts with recovery in this database. (Conflicts occur only on standby servers; see pg_stat_database_conflicts for details.) |
| pg_stat_database_conflicts_confl_bufferpin | counter | Number of queries in this database that have been canceled due to pinned buffers. |
| pg_stat_database_conflicts_confl_deadlock | counter | Number of queries in this database that have been canceled due to deadlocks. |
| pg_stat_database_conflicts_confl_lock | counter | Number of queries in this database that have been canceled due to lock timeouts. |
| pg_stat_database_conflicts_confl_snapshot | counter | Number of queries in this database that have been canceled due to old snapshots. |
| pg_stat_database_conflicts_confl_tablespace | counter | Number of queries in this database that have been canceled due to dropped tablespaces. |
| pg_stat_database_deadlocks | counter | Number of deadlocks detected in this database. |
| pg_stat_database_numbackends | gauge | Number of back ends currently connected to this database. This is the only column in this view that returns a value reflecting current state. All other columns return the accumulated values since the last reset. |
| pg_stat_database_stats_reset | counter | Time at which these statistics were last reset. |
| pg_stat_database_temp_bytes | counter | Total amount of data written to temporary files by queries in this database. All temporary files are counted, regardless of why the temporary file was created, and regardless of the log_temp_files setting. |
| pg_stat_database_temp_files | counter | Number of temporary files created by queries in this database. All temporary files are counted, regardless of why the temporary file was created (for example, sorting or hashing), and regardless of the log_temp_files setting. |
| pg_stat_database_tup_deleted | counter | Number of rows deleted by queries in this database. |
| pg_stat_database_tup_fetched | counter | Number of rows fetched by queries in this database. |
| pg_stat_database_tup_inserted | counter | Number of rows inserted by queries in this database. |
| pg_stat_database_tup_returned | counter | Number of rows returned by queries in this database. |
| pg_stat_database_tup_updated | counter | Number of rows updated by queries in this database. |
| pg_stat_database_xact_commit | counter | Number of transactions in this database that have been committed. |
| pg_stat_database_xact_rollback | counter | Number of transactions in this database that have been rolled back. |
| pg_stat_user_tables_analyze_count | counter | Number of times this table has been manually analyzed. |
| pg_stat_user_tables_autoanalyze_count | counter | Number of times this table has been analyzed by the autovacuum daemon. |
| pg_stat_user_tables_autovacuum_count | counter | Number of times this table has been vacuumed by the autovacuum daemon. |
| pg_stat_user_tables_idx_scan | counter | Number of index scans initiated on this table. |
| pg_stat_user_tables_idx_tup_fetch | counter | Number of live rows fetched by index scans. |
| pg_stat_user_tables_last_analyze | gauge | Last time at which this table was manually analyzed. |
| pg_stat_user_tables_last_autoanalyze | gauge | Last time at which this table was analyzed by the autovacuum daemon. |
| pg_stat_user_tables_last_autovacuum | gauge | Last time at which this table was vacuumed by the autovacuum daemon. |
| pg_stat_user_tables_last_vacuum | gauge | Last time at which this table was manually vacuumed, not counting VACUUM FULL. |
| pg_stat_user_tables_n_dead_tup | gauge | Estimated number of dead rows. |
| pg_stat_user_tables_n_live_tup | gauge | Estimated number of live rows. |
| pg_stat_user_tables_n_mod_since_analyze | gauge | Estimated number of rows changed since last analyze. |
| pg_stat_user_tables_n_tup_del | counter | Number of rows deleted. |
| pg_stat_user_tables_n_tup_hot_upd | counter | Number of rows HOT updated, with no separate index update required. |
| pg_stat_user_tables_n_tup_ins | counter | Number of rows inserted. |
| pg_stat_user_tables_n_tup_upd | counter | Number of rows updated. |
| pg_stat_user_tables_seq_scan | counter | Number of sequential scans initiated on this table. |
| pg_stat_user_tables_seq_tup_read | counter | Number of live rows fetched by sequential scans. |
| pg_stat_user_tables_vacuum_count | counter | Number of times this table has been manually vacuumed, not counting VACUUM FULL. |
| pg_static | unknown | Version string as reported by postgres. |
| pg_statio_user_tables_heap_blocks_hit | counter | Number of buffer hits in this table. |
| pg_statio_user_tables_heap_blocks_read | counter | Number of disk blocks read from this table. |
| pg_statio_user_tables_idx_blocks_hit | counter | Number of buffer hits in all indexes on this table. |
| pg_statio_user_tables_idx_blocks_read | counter | Number of disk blocks read from all indexes on this table. |
| pg_statio_user_tables_tidx_blocks_hit | counter | Number of buffer hits in this table's TOAST table indexes, if any. |
| pg_statio_user_tables_tidx_blocks_read | counter | Number of disk blocks read from this table's TOAST table indexes, if any. |
| pg_statio_user_tables_toast_blocks_hit | counter | Number of buffer hits in this table's TOAST table, if any. |
| pg_statio_user_tables_toast_blocks_read | counter | Number of disk blocks read from this table's TOAST table, if any. |
| pg_up | gauge | Whether the last scrape of metrics from PostgreSQL was able to connect to the server (1 for yes, 0 for no). |
| postgres_exporter_build_info | gauge | A metric with a constant '1' value labeled by version, revision, branch, goversion from which postgres_exporter was built, and the goos and goarch for the build. |

Logging
The Postgres service instance and broker VMs log everything under their respective /var/vcap/sys/log/ directories. If syslog is configured, all of these logs are forwarded to it.

Configure Syslog Forwarding


Syslog forwarding is enabled by default. VMware recommends keeping this default setting because it is
good operational practice. However, you can opt out by selecting “No” for “Do you want to configure
syslog?” in the Ops Manager Settings tab.

To enable monitoring for VMware Tanzu for Postgres on Cloud Foundry, operators must designate an
external syslog endpoint for log entries. The endpoint serves as the input to a monitoring platform such as
Datadog, Papertrail, or SumoLogic.

To specify the destination for VMware Tanzu for Postgres on Cloud Foundry log entries:

1. From the Ops Manager Installation Dashboard, click the VMware Tanzu for Postgres on Cloud
Foundry tile.

2. In the tile, click the Settings tab.

3. Click Syslog.

4. Configure the fields on the Syslog pane as follows:

| Field | Description |
| --- | --- |
| Syslog Address | Enter the IP or DNS address of the syslog server. |
| Syslog Port | Enter the port of the syslog server. |
| Transport Protocol | Select the transport protocol of the syslog server. The options are TLS, UDP, or RELP. |
| Enable TLS | Enable TLS to the syslog server. |
| Permitted Peer | If there are several peer servers that can respond to remote syslog connections, enter a wildcard in the domain, such as *.example.com. |
| SSL Certificate | If the server certificate is not signed by a known authority, such as an internal syslog server, enter the CA certificate of the log management service endpoint. |
| Queue Size | The number of log entries the buffer holds before dropping messages. A larger buffer size might overload the system. The default is 100000. |
| Forward Debug Logs | Some components produce very long debug logs. This option prevents them from being forwarded. These logs are still written to local disk. |
| Custom Rules | The custom rsyslog rules are written in RainerScript and are inserted before the rule that forwards logs. |

Upgrade Tanzu for Postgres on Cloud Foundry


This topic describes how to upgrade Tanzu for Postgres on Cloud Foundry.

A direct upgrade from v1.2.x to v10.1.0 is supported.

Upgrade from v10.0.0 to v10.1.0 is not supported.

Upgrade from v1.2.x to v10.1.0


This major upgrade also changes the Postgres major version from v15.x to v16.x. Consider the following
points and limitations before upgrading:

It is strongly recommended to back up the data from v1.2.x services before upgrading. Follow the instructions in the Backup topic.

Backups created using adbr (pgBackRest) in v1.2.x cannot be restored in v10.1.0 because of the major Postgres version change. This is a known limitation of pgBackRest.

The pg_upgrade tool is used to upgrade the major version of Postgres. It requires both the old and
new Postgres data directories, so additional disk space is needed—approximately equal to the size
of the old data directory. If there isn’t enough space on the persistent disk, the upgrade will fail. To
fix this, increase the disk size using Ops Manager and reapply the changes.

If there are failed services in version 1.2.x, the upgrade will fail for those services. To resolve this
issue, delete the failed services and reapply the changes in Ops Manager.
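Before applying the upgrade, you can check the free space on each instance's persistent disk. A minimal sketch, assuming the bosh CLI is targeting your environment and the deployment name is derived from the service instance GUID as shown in the backup topics:

```
# pg_upgrade needs roughly the size of the old data directory as headroom.
bosh -d service-instance_GUID ssh postgres-instance -c 'df -h /var/vcap/store'
```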

Upgrade tile in Tanzu Operations Manager


Follow these steps to upgrade Tanzu for Postgres on Cloud Foundry:

1. Download the new version of the tile from Broadcom Support.


2. Upload the product to Tanzu Operations Manager.


3. Click Add (+) next to the uploaded product.

4. Click on the Tanzu for Postgres on Cloud Foundry tile and configure the upgrade options.

Under the Errands section, choose the Default (On) value for the Upgrade All Service
Instances post-deploy errand. Save the change.

5. Click Review Pending Changes. For more information, see Reviewing your pending product
changes in Tanzu Operations Manager in the Tanzu Operations Manager documentation.

6. Click Apply Changes.

Disaster Recovery using multi-site replication


This topic describes how to set up a multi-site replication cluster for disaster recovery in VMware Tanzu for
Postgres on Cloud Foundry. This setup allows you to have a primary cluster and a secondary cluster that
can take over in case of a failure in the primary cluster.

For more information about multi-site architecture, see Multi-Site Architecture Guide.


Prerequisites
1. Two Tanzu Platform for Cloud Foundry foundations with VMware Tanzu for Postgres on Cloud
Foundry tile v10.1.0 or later installed.

2. The primary foundation must have service gateway enabled so that the secondary foundation can access it. See Enable Service Gateway Access for instructions.

3. Multi-site replication works only with the HA-TLS plan. Service gateway access is not supported for the single instance plan.

4. Service gateway access must be enabled for the primary service instance in the primary foundation.

Enable Multi-Site Replication


To enable TLS communication, you must exchange the TLS CA certificates between the two foundations.

The following procedure involves restarting all of the VMs in your deployment to apply a CA
certificate. The operation can take a long time to complete.

1. Start with the primary foundation and get the CredHub credentials in Tanzu Operations Manager:

1. In the Tanzu Ops Manager Installation Dashboard, click the BOSH Director tile.

2. Click the Credentials tab.

3. In the BOSH Director section, click the link to the BOSH Commandline Credentials.

4. Copy the BOSH_CLIENT and BOSH_CLIENT_SECRET credentials to a text file for future reference. Example of the credentials page:

`{"credential":"BOSH_CLIENT=ops_manager BOSH_CLIENT_SECRET=abCdE1FgHIjkL2m3n-3PqrsT4EUVwXy5 BOSH_CA_CERT=/var/tempest/workspaces/default/root_ca_certificate BOSH_ENVIRONMENT=10.0.0.5 bosh "}`

Where
BOSH_CLIENT is the BOSH CredHub client name
BOSH_CLIENT_SECRET is the BOSH CredHub client secret

2. Record the information needed to log in to the BOSH Director VM by following the procedure in
Gather Credential and IP Address Information.

3. Log in to the Tanzu Operations Manager VM by following the procedure in Log in to the Tanzu
Operations Manager VM with SSH.

4. Set the API target of the CredHub CLI as your CredHub server by running:

credhub api https://BOSH-DIRECTOR-IP:8844 \
--ca-cert=/var/tempest/workspaces/default/root_ca_certificate


Where BOSH-DIRECTOR-IP is the IP address of the BOSH Director VM.

For example:

credhub api \
https://10.0.0.5:8844 \
--ca-cert=/var/tempest/workspaces/default/root_ca_certificate

5. Log in to the CredHub CLI by running:

credhub login \
--client-name=CREDHUB-CLIENT-NAME \
--client-secret=CREDHUB-CLIENT-SECRET

Where

CREDHUB-CLIENT-NAME is the BOSH_CLIENT value you recorded in step 1.

CREDHUB-CLIENT-SECRET is the BOSH_CLIENT_SECRET value you recorded in step 1.

For example:

credhub login \
--client-name=ops_manager \
--client-secret=abCdE1FgHIjkL2m3n-3PqrsT4EUVwXy5

6. Get the CA certificate from the primary foundation by running:

credhub get \
--name=/services/tls_ca \
-k ca
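Optionally, save the certificate to a file so that you can paste it into the secondary foundation's Trusted Certificates field in the following steps; the file name is illustrative:

# Illustrative: write only the ca field of the credential to a local file
credhub get --name=/services/tls_ca -k ca > primary_services_ca.pem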

7. Go to Tanzu Ops Manager Installation Dashboard > BOSH Director > Security in your secondary
foundation.

8. Append the contents of the CA certificate you recorded in step 6 into Trusted Certificates.

9. Click Save.

10. Repeat Steps 1-9 for the secondary foundation to add the CA certificate to the primary foundation.

Creating Multi-Site Replication Standby Postgres Service


1. In your primary foundation, create a new service instance with the HA-TLS plan and service gateway enabled, as described in Create Service Instance.

2. Once the service instance is created, gather the primary host, the service gateway port, and the service gateway TCP domain IP.

3. To get these details, create a new service key, if one does not already exist, using the following command:

cf create-service-key <SERVICE_INSTANCE_NAME> <SERVICE_KEY_NAME>

Where:

SERVICE_INSTANCE_NAME is the name of the service instance you created in step 1

SERVICE_KEY_NAME is the name of the service key you want to create.


For example:

cf create-service-key pg-ha-tls pg-ha-tls-key

4. Get the service key details using the following command:

cf service-key <SERVICE_INSTANCE_NAME> <SERVICE_KEY_NAME>

Where:

SERVICE_INSTANCE_NAME is the name of the service instance you created in step 1.

SERVICE_KEY_NAME is the name of the service key you created in step 3.

For example:

cf service-key pg-ha-tls pg-ha-tls-key

{
  "credentials": {
    "db": "postgres",
    "hosts": [
      "q-s0.postgres-instance.network.service-instance-329c3459-c323-498f-85f8-55ac62d5ac68.bosh"
    ],
    "jdbcUrl": "jdbc:postgresql://q-s0.postgres-instance.network.service-instance-329c3459-c323-498f-85f8-55ac62d5ac68.bosh:5432/postgres?targetServerType=primary&user=pgadmin&password=admin",
    "password": "admin",
    "port": 5432,
    "primary_host": "329c3459-c323-498f-85f8-55ac62d5ac68.postgres.service.internal",
    "service_gateway": {
      "host": "tcp.tas.z9d10d4f1.shepherd.lease",
      "jdbcUrl": "jdbc:postgresql://tcp.tas.z9d10d4f1.shepherd.lease:1024/postgres?targetServerType=primary&user=pgadmin&password=admin",
      "port": 1024,
      "uri": "postgresql://pgadmin:admin@tcp.tas.z9d10d4f1.shepherd.lease:1024/postgres"
    },
    "uri": "postgresql://pgadmin:admin@q-s0.postgres-instance.network.service-instance-329c3459-c323-498f-85f8-55ac62d5ac68.bosh:5432/postgres",
    "user": "pgadmin"
  }
}

5. Create the standby service instance using the information from the service key of the primary service instance:

cf create-service postgres POSTGRES_HA_PLAN SERVICE_NAME \
-c '{"remote_primary_host":"<PRIMARY_HOST>","remote_primary_port":<PORT>,"remote_primary_tcp_domain_ip":"<SERVICE_GATEWAY_TCP_DOMAIN_IP>"}'

Where

POSTGRES_HA_PLAN is the HA plan name.

SERVICE_NAME is the name of the standby service instance you want to create.

remote_primary_host is the primary_host value from the service key of the primary service instance.

remote_primary_port is the port listed under the service_gateway section of the primary service instance's service key.

remote_primary_tcp_domain_ip is the IP address resolved from the service_gateway.host value in the primary service instance's service key.

You can use a tool like nslookup to resolve the IP address of the service gateway host.

For example:

nslookup tcp.tas.z9d10d4f1.shepherd.lease

The output shows the IP address of the service gateway host.

For example:

Server: 192.19.189.30
Address: 192.19.189.30#53

Non-authoritative answer:
Name: tcp.tas.z9d10d4f1.shepherd.lease
Address: 34.60.52.133
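Alternatively, dig +short prints only the resolved address:

# Prints just the IP address of the service gateway host
dig +short tcp.tas.z9d10d4f1.shepherd.lease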

Finally, the command to create the standby service instance looks like this:

cf create-service postgres ha-tls pg-ha-tls-standby \
-c '{"remote_primary_host":"329c3459-c323-498f-85f8-55ac62d5ac68.postgres.service.internal", "remote_primary_port":1024, "remote_primary_tcp_domain_ip":"34.60.52.133"}'

Promoting Standby Service Instance to Primary in Case of Failover

If the primary service instance goes down, you can promote the standby service instance to primary using the following procedure:

1. The operator must manually detect when the primary foundation goes down, and then trigger the update-service command to promote the standby service to the new primary.

2. To update the current Standby HA service to primary, run the following command:

cf update-service <SERVICE_INSTANCE_NAME> \
-c '{"promote_to_primary": true}'

Where

SERVICE_INSTANCE_NAME is the name of the standby service instance you created in step 5 of Creating Multi-Site Replication Standby Postgres Service.

For example:


cf update-service pg-ha-tls-standby \
-c '{"promote_to_primary": true}'

This command promotes the standby service instance to primary and enables write operations to your database (previously a read-only replica).

The command can take some time to complete, depending on how long it takes to detect the failure and for the update-service operation to finish promoting the standby service to primary.
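You can poll the last operation until the promotion finishes. For example (the interval and instance name are illustrative):

# Re-run `cf service` every 30 seconds until the update succeeds
watch -n 30 "cf service pg-ha-tls-standby"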

Restoration to initial state


When the primary foundation is up and running again, and you want to return to the initial state (make the old primary the primary again and the current primary a standby), the supported approach, as also described in the Patroni documentation, is:

1. Rebuild the standby cluster from scratch. For the current release, VMware suggests letting the newly promoted primary run as is and creating a new standby in the old primary foundation (the one used to create the primary service instance before failover) or in a new foundation altogether.

2. If you did not enable the service gateway when creating the initial standby service, you must enable it now, after promoting this standby to the new primary, because the gateway is used to create a new standby service in the old primary foundation or in a new foundation.

To enable the services gateway, run the following command:

cf update-service <SERVICE_INSTANCE_NAME> \
-c '{"enable_service_gateway":true}'

Where SERVICE_INSTANCE_NAME is the name of the newly promoted primary service instance.

For example:

cf update-service pg-ha-tls-standby \
-c '{"enable_service_gateway":true}'

This command enables the service gateway for the service instance, allowing you to create a new standby service in the old primary foundation or in a new foundation.
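After the gateway is enabled, creating the replacement standby follows the same pattern as in Creating Multi-Site Replication Standby Postgres Service. A sketch, where the placeholder values come from the service key of the newly promoted primary:

# Illustrative: create a replacement standby pointing at the newly promoted primary
cf create-service postgres ha-tls pg-ha-tls-standby-2 \
-c '{"remote_primary_host":"<NEW_PRIMARY_HOST>", "remote_primary_port":<NEW_PRIMARY_GATEWAY_PORT>, "remote_primary_tcp_domain_ip":"<NEW_PRIMARY_TCP_DOMAIN_IP>"}'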

Gather credential and IP address information


This topic describes how to collect information from the Tanzu Operations Manager interface.

Procedure
Follow these instructions to collect the information you need from the Tanzu Operations Manager interface:

1. Open the Tanzu Operations Manager interface by going to the Tanzu Operations Manager fully
qualified domain name (FQDN) in a web browser.


2. Click the BOSH Director tile and click the Status tab.

3. Record the IP address for the Director job. This is the IP address of the VM where the BOSH
Director runs.

4. Click the Credentials tab.

5. Go to Director Credentials and click Link to Credential. Record these credentials.

6. Return to the Tanzu Operations Manager Installation Dashboard.

7. (Optional) To prepare to troubleshoot the job VM for any other product, click the product tile and
repeat the previous procedure to record the IP address and VM credentials for that job VM.

8. Log out of Tanzu Operations Manager.

Ensure that there are no Tanzu Operations Manager installations or updates in progress while using
the BOSH CLI.

Log in to the Tanzu Operations Manager VM with SSH


This topic describes how to log in to the Tanzu Operations Manager VM with SSH.

Use SSH to connect to the Tanzu Operations Manager VM. To log in to the Tanzu Operations Manager VM,
go to the procedure for your IaaS:

AWS

Azure

GCP

OpenStack

vSphere

AWS


To log in to the Tanzu Operations Manager VM with SSH in AWS, you need the key pair you used when you
created the Tanzu Operations Manager VM. To see the name of the key pair, click the Tanzu Operations
Manager VM and locate the key pair name in properties.

To log in to the Tanzu Operations Manager VM with SSH in AWS:

1. Locate the Tanzu Operations Manager FQDN on the AWS EC2 instances page.

2. Change the permissions on the .pem file to be more restrictive by running:

chmod 600 ops_mgr.pem

3. Log in to the Tanzu Operations Manager VM with SSH.

ssh -i ops_mgr.pem ubuntu@FQDN

Where FQDN is the fully qualified domain name of Tanzu Operations Manager.

For example:

ssh -i ops_mgr.pem [email protected]

Azure
To log in to the Tanzu Operations Manager VM with SSH in Azure, you need the key pair you used when
creating the Tanzu Operations Manager VM. If you need to reset the SSH key, locate the Tanzu Operations
Manager VM in the Azure portal and click Reset Password.

To log in to the Tanzu Operations Manager VM with SSH in Azure:

1. From the Azure portal, locate the Tanzu Operations Manager FQDN by selecting the VM.

2. Change the permissions for your SSH private key by running the following command:

chmod 600 PRIVATE-KEY

Where PRIVATE-KEY is the name of your SSH private key.

3. SSH into the Tanzu Operations Manager VM and run:

ssh -i PRIVATE-KEY ubuntu@FQDN

Where

FQDN is the fully qualified domain name of Tanzu Operations Manager.

PRIVATE-KEY is the name of your SSH private key.

GCP
To log in to the Tanzu Operations Manager VM with SSH in GCP:

1. Confirm that you have installed the Google Cloud SDK and CLI. For more information, see the
Google Cloud Platform documentation.


2. Initialize Google Cloud CLI, using a user account with Owner, Editor, or Viewer permissions to
access the project. Ensure that the Google Cloud CLI can log in to the project by running the
command gcloud auth login.

3. From the GCP web console, go to Compute Engine.

4. Locate the Tanzu Operations Manager VM in the VM Instances list.

5. Under Remote access, click the SSH drop-down menu and click View gcloud command.

6. Copy the SSH command that appears in the pop-up window.

7. Paste the command into your terminal window to SSH to the VM. For example:

$ gcloud compute ssh "YOUR-VM" --zone "YOUR-ZONE-ID"

8. Run sudo su - ubuntu to switch to the ubuntu user.

OpenStack
To log in to the Tanzu Operations Manager VM with SSH in OpenStack, you need the key pair that you
created in Configure Security in Deploying Tanzu Operations Manager on OpenStack. If you must reset the
SSH key, locate the Tanzu Operations Manager VM in the OpenStack console and boot it in recovery mode
to generate a new key pair.

To log in to the Tanzu Operations Manager VM with SSH in OpenStack:

1. Locate the Tanzu Operations Manager FQDN on the Access & Security page.

2. Change the permissions on the .pem file to be more restrictive by running:

chmod 600 ops_mgr.pem

3. Log in to the Tanzu Operations Manager VM with SSH.

ssh -i ops_mgr.pem ubuntu@FQDN

Where FQDN is the fully qualified domain name of Tanzu Operations Manager.

For example:

$ ssh -i ops_mgr.pem [email protected]

vSphere
To log in to the Tanzu Operations Manager VM with SSH in vSphere, you must have the public SSH key that you provided when importing the Tanzu Operations Manager .ova or .ovf file into your virtualization system.

You set the public SSH key in the Public SSH Key text box of the Customize template screen when you
deployed Tanzu Operations Manager. For more information, see Deploy Tanzu Operations Manager in
Deploying Tanzu Operations Manager on vSphere.

If you lose your SSH key, you must shut down the Tanzu Operations Manager VM in the vSphere UI and
then reset the public SSH key. For more information, see Edit vApp Settings in the vSphere documentation.


To log in to the Tanzu Operations Manager VM with SSH in vSphere:

1. Run the following command:

ssh ubuntu@FQDN

Where FQDN is the fully qualified domain name of Tanzu Operations Manager.

For example:

$ ssh [email protected]

2. When you are prompted, enter the public SSH key.


Application Developer Guide for Tanzu for Postgres on Cloud Foundry

This topic tells you how to begin using Tanzu for Postgres on Cloud Foundry.

Tanzu for Postgres on Cloud Foundry was formerly known as VMware Postgres for
VMware Tanzu Application Service.

Prerequisites
You must have:

1. An Ops Manager installation with Tanzu for Postgres on Cloud Foundry installed and listed in the
Marketplace.

2. A Space Developer or Admin account on the VMware Tanzu Application Service for VMs
installation.

3. A local machine with the following installed:


A browser

A shell

The Cloud Foundry Command-Line Interface (cf CLI)

Then you must log in to the org and space containing your app.

Next Steps
Use Tanzu for Postgres on Cloud Foundry in your app

Set up your app for single instance Postgres service

Set up your app for Postgres HA service with multiple instances

Bind a service instance to your app

Use Tanzu for Postgres on Cloud Foundry in Your App


This topic shows how to use Tanzu for Postgres on Cloud Foundry in your app.

Every app and service is scoped to a space. To use a service, an app must exist in the same space as an
instance of the service. To use Tanzu for Postgres on Cloud Foundry in an app:

1. Use the cf CLI to log in to the org and space that contains the app.


2. Make sure a Tanzu for Postgres on Cloud Foundry instance exists in the same space.
If the space does not already have a Tanzu for Postgres on Cloud Foundry instance, create
one.

If the space already has a Tanzu for Postgres on Cloud Foundry instance, you can bind
your app to the existing instance.

3. Bind the app to the Tanzu for Postgres on Cloud Foundry instance to enable the app to use
Postgres.

Confirm Service Availability


For an app to use a service, the following must be true:

1. The service must be available in the Marketplace.

To find out if it is available, run cf marketplace or cf m.

If the output lists postgres in the service column, on-demand Tanzu for Postgres on Cloud
Foundry is available. If it is not available, ask your operator to install it.

For example:
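The output is similar to the following; the plan names and columns vary by cf CLI version and operator configuration, so this sample is illustrative:

$ cf marketplace
Getting all service offerings from marketplace...

offering   plans                           description
postgres   on-demand-postgres-db, ha-tls   Postgres service for on-demand dedicated instances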

2. An instance of the service must exist in its space.

To confirm that a Tanzu for Postgres on Cloud Foundry instance is running in the space,
run cf services.

Any postgres listings in the service column are service instances of on-demand Tanzu for
Postgres on Cloud Foundry in the space.

For example:
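The output is similar to the following; the names shown are illustrative:

$ cf services
Getting service instances in org my-org / space my-space...

name                service    plan                    bound apps   last operation
postgres-instance   postgres   on-demand-postgres-db   pg-app       create succeeded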

Create a Service Instance


On-demand plans are listed under the postgres service in the Marketplace. To create a service instance of
the Tanzu for Postgres on Cloud Foundry on-demand plan, run:

cf create-service postgres POSTGRES_PLAN SERVICE_NAME

Where:

POSTGRES_PLAN is one of the plans configured by the operator.

SERVICE_NAME is a name for your service.

For example:
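A sketch of the command, assuming an illustrative plan name of on-demand-postgres-db:

cf create-service postgres on-demand-postgres-db my-postgres-instance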


Set up Your App to Consume Postgres Service with Single Instance

This topic shows how to set up your app to consume Tanzu for Postgres on Cloud Foundry with a single instance.

The VCAP_APPLICATION and VCAP_SERVICES variables are provided in the container environment. These variables become available to the application when the service is bound to the application.

The following shows a sample of environment variables VCAP_SERVICES and VCAP_APPLICATION for
Postgres Service with single instance:

VCAP_SERVICES: {
  "postgres": [
    {
      "binding_guid": "75e8053f-57ac-90cc-b7af-4bc40d714c2e",
      "binding_name": null,
      "credentials": {
        "db": "my-db",
        "hosts": [
          "q-s0.postgres-instance.shamrockgreen-services-subnet.service-instance-f2398a52-5430-2b17-50sd-b50f0c7c150c.bosh"
        ],
        "jdbcUrl": "jdbc:postgresql://q-s0.postgres-instance.shamrockgreen-services-subnet.service-instance-f2398a52-5430-2b17-50sd-b50f0c7c150c.bosh:5432/postgres?user=pgadmin&password=1a4E820y19XgFmH4143",
        "password": "1a4E820y19XgFmH4143",
        "port": 5432,
        "service_gateway_access_port": 0,
        "service_gateway_enabled": false,
        "user": "myuser"
      },
      "instance_guid": "22a1998f-e30f-3032-b360-f7a21de7a461",
      "instance_name": "postgres-instance",
      "label": "postgres",
      "name": "postgres-instance",
      "plan": "on-demand-postgres-db",
      "provider": null,
      "syslog_drain_url": null,
      "tags": [
        "postgres",
        "pivotal",
        "on-demand"
      ],
      "volume_mounts": []
    }
  ]
}

VCAP_APPLICATION: {
  "application_id": "9646bed4-d52d-4a6c-b781-6d589e3873f0",
  "application_name": "sample-app",
  "application_uris": [
    "my-app.example.com"
  ],
  "cf_api": "https://api.example.com",
  "limits": {
    "fds": 16384
  },
  "name": "pg-app-ci-2",
  "organization_id": "eb4d1234-0w34-2e21-912c-f4ae36501845",
  "organization_name": "my-org",
  "space_id": "a32e9046-0167-4e64-95c3-abe04d99c2bd",
  "space_name": "my-space",
  "uris": [
    "my-app.example.com"
  ],
  "users": null
}

The application developer can use the following environment variables from VCAP_SERVICES to create a
Postgres connection URI:

hosts (hosts is an array.)

port

user

password

db
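For example, a minimal sketch of assembling these values into a connection URI and testing it with psql, using the sample values above; the connection only works from a machine that can reach the service network, such as an app container:

# Illustrative: build the URI as postgresql://USER:PASSWORD@HOST:PORT/DB
psql "postgresql://myuser:1a4E820y19XgFmH4143@q-s0.postgres-instance.shamrockgreen-services-subnet.service-instance-f2398a52-5430-2b17-50sd-b50f0c7c150c.bosh:5432/my-db"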

To connect an application using JDBC drivers to the Postgres service, you can use the environment variable jdbcUrl. For a single instance Postgres service, the jdbcUrl looks like this:

jdbc:postgresql://q-s0.postgres-instance.shamrockgreen-services-subnet.service-instance-f2398a52-5430-2b17-50sd-b50f0c7c150c.bosh:5432/postgres?user=pgadmin&password=1a4E820y19XgFmH4143

The following is sample Java code that forms a connection string for a JDBC driver for a single instance Postgres service. This is only an example; you can use the jdbcUrl directly instead.

@Configuration
@Profile("cloud")
public class DataSourceConfiguration {

    @Bean
    public Cloud cloud() {
        return new CloudFactory().getCloud();
    }

    Logger logger = LoggerFactory.getLogger(DataSourceConfiguration.class);

    @Value("${VCAP_SERVICES}")
    private String vsJson;

    @Value("${SSL_MODE}")
    private String sslMode;

    private static Gson gson = new Gson();

    @Bean
    public DataSource dataSource() {
        VcapServices vcapServices = gson.fromJson(vsJson, VcapServices.class);

        // Build a host:port list from the hosts array in VCAP_SERVICES
        List<String> hosts = vcapServices.getPostgres().get(0).getCredentials().getHosts();
        Long port = vcapServices.getPostgres().get(0).getCredentials().getPort();
        StringBuilder jdbcUri = new StringBuilder("jdbc:postgresql://");
        Stream<String> withPort = hosts.stream().map(s -> String.format("%s:%d", s, port));
        jdbcUri.append(withPort.collect(Collectors.joining(",")));
        jdbcUri.append("/")
               .append(vcapServices.getPostgres().get(0).getCredentials().getDb());

        if (!StringUtils.isEmpty(sslMode)) {
            // sslmode is the first query parameter here, so it starts with "?"
            jdbcUri.append("?sslmode=" + sslMode);
            jdbcUri.append("&sslfactory=org.postgresql.ssl.DefaultJavaSSLFactory");
        }

        logger.info("-------------POSTGRES URL-----------" + jdbcUri);

        HikariConfig hikariConfig = new HikariConfig();
        hikariConfig.setJdbcUrl(jdbcUri.toString());
        hikariConfig.setUsername(vcapServices.getPostgres().get(0).getCredentials().getUser());
        hikariConfig.setPassword(vcapServices.getPostgres().get(0).getCredentials().getPassword());
        hikariConfig.setInitializationFailTimeout(300000); // 5 minutes
        return new HikariDataSource(hikariConfig);
    }
}

Set up Your App for Postgres HA Service with Multiple Instances

This topic shows how to set up your app to consume the Tanzu for Postgres on Cloud Foundry high availability (HA) service with multiple instances.

The VCAP_APPLICATION and VCAP_SERVICES variables are provided in the container environment. These variables become available to the application when the service is bound to the application.

The following shows a sample of environment variables VCAP_SERVICES and VCAP_APPLICATION for
Postgres HA service:

VCAP_SERVICES: {
  "postgres": [
    {
      "binding_guid": "75e8053f-57ac-90cc-b7af-4bc40d714c2e",
      "binding_name": null,
      "credentials": {
        "db": "my-db",
        "hosts": [
          "q-s0.postgres-instance.shamrockgreen-services-subnet.service-instance-f2398a52-5430-2b17-50sd-b50f0c7c150c.bosh"
        ],
        "jdbcUrl": "jdbc:postgresql://q-s0.postgres-instance.shamrockgreen-services-subnet.service-instance-f2398a52-5430-2b17-50sd-b50f0c7c150c.bosh:5432/postgres?targetServerType=primary&user=pgadmin&password=1a4E820y19XgFmH4143",
        "password": "1a4E820y19XgFmH4143",
        "port": 5432,
        "service_gateway_access_port": 0,
        "service_gateway_enabled": false,
        "user": "myuser"
      },
      "instance_guid": "22a1998f-e30f-3032-b360-f7a21de7a461",
      "instance_name": "postgres-instance",
      "label": "postgres",
      "name": "postgres-instance",
      "plan": "on-demand-postgres-db",
      "provider": null,
      "syslog_drain_url": null,
      "tags": [
        "postgres",
        "pivotal",
        "on-demand"
      ],
      "volume_mounts": []
    }
  ]
}

VCAP_APPLICATION: {
  "application_id": "9646bed4-d52d-4a6c-b781-6d589e3873f0",
  "application_name": "sample-app",
  "application_uris": [
    "my-app.example.com"
  ],
  "cf_api": "https://api.example.com",
  "limits": {
    "fds": 16384
  },
  "name": "pg-app-ci-2",
  "organization_id": "eb4d1234-0w34-2e21-912c-f4ae36501845",
  "organization_name": "my-org",
  "space_id": "a32e9046-0167-4e64-95c3-abe04d99c2bd",
  "space_name": "my-space",
  "uris": [
    "my-app.example.com"
  ],
  "users": null
}

The application developer can use the following environment variables from VCAP_SERVICES to create a
Postgres connection URI:

hosts (hosts is an array.)

port

user

password

db


To connect an application using JDBC drivers to the Postgres HA service, you can use the environment variable jdbcUrl. For a high availability Postgres service, the jdbcUrl looks like this:

jdbc:postgresql://q-s0.postgres-instance.shamrockgreen-services-subnet.service-instance-f2398a52-5430-2b17-50sd-b50f0c7c150c.bosh:5432/postgres?targetServerType=primary&user=pgadmin&password=75YEk68fX291b0j3zdT4

You can use targetServerType=primary to specify that the JDBC driver only connects to the primary
node in the HA cluster.
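The pgjdbc driver accepts other targetServerType values as well; for example, a read-mostly workload could prefer replicas. The following URL is an illustrative sketch with placeholder hosts, not a value taken from a service key:

jdbc:postgresql://host-1:5432,host-2:5432,host-3:5432/postgres?targetServerType=preferSecondary&loadBalanceHosts=true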

The following is sample Java code that forms a connection string for a JDBC driver for a Postgres HA service. This is only an example; you can use the jdbcUrl directly instead.

@Configuration
@Profile("cloud")
public class DataSourceConfiguration {

    @Bean
    public Cloud cloud() {
        return new CloudFactory().getCloud();
    }

    Logger logger = LoggerFactory.getLogger(DataSourceConfiguration.class);

    @Value("${VCAP_SERVICES}")
    private String vsJson;

    @Value("${SSL_MODE}")
    private String sslMode;

    private static Gson gson = new Gson();

    @Bean
    public DataSource dataSource() {
        VcapServices vcapServices = gson.fromJson(vsJson, VcapServices.class);

        // Build a host:port list from the hosts array in VCAP_SERVICES
        List<String> hosts = vcapServices.getPostgres().get(0).getCredentials().getHosts();
        Long port = vcapServices.getPostgres().get(0).getCredentials().getPort();
        StringBuilder jdbcUri = new StringBuilder("jdbc:postgresql://");
        Stream<String> withPort = hosts.stream().map(s -> String.format("%s:%d", s, port));
        jdbcUri.append(withPort.collect(Collectors.joining(",")));
        jdbcUri.append("/")
               .append(vcapServices.getPostgres().get(0).getCredentials().getDb())
               .append("?targetServerType=primary");

        if (!StringUtils.isEmpty(sslMode)) {
            jdbcUri.append("&sslmode=" + sslMode);
            jdbcUri.append("&sslfactory=org.postgresql.ssl.DefaultJavaSSLFactory");
        }

        logger.info("-------------POSTGRES URL-----------" + jdbcUri);

        HikariConfig hikariConfig = new HikariConfig();
        hikariConfig.setJdbcUrl(jdbcUri.toString());
        hikariConfig.setUsername(vcapServices.getPostgres().get(0).getCredentials().getUser());
        hikariConfig.setPassword(vcapServices.getPostgres().get(0).getCredentials().getPassword());
        hikariConfig.setInitializationFailTimeout(300000); // 5 minutes
        return new HikariDataSource(hikariConfig);
    }
}

Bind a Service Instance to Your App


This topic shows how to bind an app to a Tanzu for Postgres on Cloud Foundry instance.

1. Bind the service instance to your app by running:

cf bind-service APP-NAME SERVICE-INSTANCE

Where:

APP-NAME is the app you want to use the Postgres service instance with.

SERVICE-INSTANCE is the name you supplied when you ran cf create-service.

For example:

cf bind-service pg-app postgres

Create a Postgres Service Instance with Service Gateway Access
You can create a Tanzu for Postgres on Cloud Foundry service instance with service-gateway access.
Service-gateway access enables a Tanzu for Postgres on Cloud Foundry on-demand service instance to
connect to external components that are not on the same foundation as the service instance. These
components might be on another foundation or hosted outside of the foundation.

The following information assumes that you meet the prerequisites for using on-demand Tanzu for Postgres
on Cloud Foundry. For more information, see Prerequisites.

If you have enabled a service-gateway plan, you can create a service instance that can connect to
components outside your foundation. Contact your operator if you are unsure which plans are enabled for
service-gateway access. For information about the architecture and use cases, see Service-Gateway
access.

As of release v1.2, you do not need to pass the external_port and router_group parameters. The port is assigned automatically from the configurable range defined in the Reservable ports range for TCP field in Ops Manager when the service gateway is enabled, as described in Enable Service Gateway Using the TAS for VMs Tile.

To create a service instance that enables service-gateway access:

1. Create a service instance with the service-gateway by running:


cf create-service postgres POSTGRES_PLAN SERVICE-INSTANCE-NAME -c '{"svc_gw_enable": true}' -w

2. Obtain credentials by creating a service key:

cf create-service-key SERVICE-INSTANCE-NAME SERVICE-KEY-NAME

The service key looks similar to the following:

{
  "credentials": {
    "db": "postgres",
    "hosts": [
      "q-s0.postgres-instance.tilt-541058-services-subnet.service-instance-97e48698-93fa-4f5c-bdb5-874019238737.bosh"
    ],
    "jdbcUrl": "jdbc:postgresql://q-s0.postgres-instance.tilt-541058-services-subnet.service-instance-97e48698-93fa-4f5c-bdb5-874019238737.bosh:5432/postgres?targetServerType=primary&user=pgadmin&password=9qX204t6v178kBAE35xC",
    "password": "9qX204t6v178kBAE35xC",
    "port": 5432,
    "service_gateway": {
      "host": "tcp.tilt-541058.cf-app.com",
      "jdbcUrl": "jdbc:postgresql://tcp.tilt-541058.cf-app.com:1082/postgres?targetServerType=primary&user=pgadmin&password=9qX204t6v178kBAE35xC",
      "port": 1082,
      "uri": "postgresql://pgadmin:9qX204t6v178kBAE35xC@tcp.tilt-541058.cf-app.com:1082/postgres"
    },
    "uri": "postgresql://pgadmin:9qX204t6v178kBAE35xC@q-s0.postgres-instance.tilt-541058-services-subnet.service-instance-97e48698-93fa-4f5c-bdb5-874019238737.bosh:5432/postgres",
    "user": "pgadmin"
  }
}

The port field in service_gateway shows the port that was reserved for the created service instance. You can use the service_gateway URI to connect to Postgres from outside your foundation.
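For example, a minimal sketch of connecting with psql from outside the foundation, using the service_gateway values from the sample service key above:

# Illustrative: connect through the service gateway from an external machine
psql "postgresql://pgadmin:9qX204t6v178kBAE35xC@tcp.tilt-541058.cf-app.com:1082/postgres"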

Delete Router Group Workaround


This topic describes a workaround you can use when a running service fails after you upgrade to Tanzu for
Postgres on Cloud Foundry v1.2.0.

After you upgrade to Tanzu for Postgres on Cloud Foundry v1.2.0, if you delete a service instance, other running services might fail while connecting to their bound apps or while being accessed by tools such as psql on exposed service gateway ports.

After you upgrade to Tanzu for Postgres on Cloud Foundry v1.2.0, when you delete a service instance,
other services fail if both of these conditions are met:

The service was created before you upgraded to Tanzu for Postgres on Cloud Foundry v1.2.0.

The service was created with the same router group name as the one being deleted.


Prior to Tanzu for Postgres on Cloud Foundry v1.2.x, you created a service instance with service gateway enabled by passing the request parameters. For example:

cf create-service postgres POSTGRES_PLAN SERVICE-INSTANCE-NAME -c '{"svc_gw_enable": true, "router_group": "default-tcp", "external_port": 1031}'

The router_group parameter requires the name of an existing router group; for example, default-tcp is the default router group that already exists. Users might create multiple service instances using the default-tcp router group or another router group that they create.

When you delete an existing service after upgrading to Tanzu for Postgres on Cloud Foundry v1.2, the router group that was used to create the service is also deleted. This causes all service instances to fail to connect over the exposed service gateway port. This happens because the "delete-router-group" pre-delete errand that was added in Tanzu for Postgres on Cloud Foundry v1.2.x is invoked when you delete the service instance.

Workaround before upgrade


Prior to upgrade, you can change the service instances to use different router groups as follows.

1. Create Router Group.

2. Update service instance to use another router group.

Workaround post upgrade


If you have already upgraded to Tanzu for Postgres on Cloud Foundry v1.2.0 and have an existing service
you want to delete without affecting other existing services, then you must manually re-create the
associated router group after every deletion. To achieve this, follow the steps in Create Router Group.

Create Router Group


Prerequisites:

1. Before you run the Routing API, you must install and set up the UAA CLI as described in Setup UAA CLI.

2. Get the UAA token value and the FQDN of your TAS environment from that procedure.

Procedure:

Use the Cloud Foundry Routing API to create a router group:

1. Run this curl command:

curl -vvv -H "Authorization: bearer UAA-TOKEN" http://api.system-domain.com/routing/v1/router_groups -X POST -d '{"name": "NAME", "type": "TYPE", "reservable_ports": "RESERVABLE-PORTS"}'

Where

UAA-TOKEN is the bearer token from the UAA CLI setup step.

NAME is the name of the router group.

TYPE is tcp.


RESERVABLE-PORTS is a comma-delimited list of reservable ports or port ranges. These ports must fall between 1024 and 65535 (inclusive).

Example:

curl -vvv -H "Authorization: bearer eyJqa3UiOiJodHRwcz...." https://api.sys.tas.z9027a87e.shepherd.lease/routing/v1/router_groups -X POST -d '{"name": "default-tcp", "type": "tcp", "reservable_ports": "1024-32567"}'

2. List the created router groups to verify:

curl -vvv -H "Authorization: bearer UAA-TOKEN" http://api.system-domain.com/routing/v1/router_groups

Example:

curl -vvv -H "Authorization: bearer UAA-TOKEN" https://api.sys.tas.z9027a87e.shepherd.lease/routing/v1/router_groups

[
  {
    "guid": "966b0dea-8ee1-4ffd-7891-15f924b20074",
    "name": "default-tcp",
    "type": "tcp",
    "reservable_ports": "1024-32567"
  }
]

Update service instance to use another router group


Update the service instance to use another existing router group with the following command:

cf update-service SERVICE_NAME -c '{"router_group": "ROUTER_GROUP_NAME"}'

Where

SERVICE_NAME is the name of the service instance to be updated.

ROUTER_GROUP_NAME is the name of an existing router group (or the one created in the previous step).

After the service is updated to use the new router group name, deleting the service no longer affects other services.

Setup UAA CLI


To use the Routing API, you must set up the UAA (User Account and Authentication) CLI to create a new client with scopes to perform router group write operations, and use its token to authenticate the API calls.

1. Download the uaa-cli tar.gz file from https://github.com/cloudfoundry/uaa-cli/releases.

2. Run these commands to install uaa-cli:

tar xvf uaa-*
rm uaa-*.tar.gz
cp uaa-linux-amd64-0.14.0 /usr/local/bin (copy to an executable folder appropriate for your OS)

3. Check that uaa-cli is installed:

uaa version

4. Get the FQDN of the environment where TAS is deployed and save it in a shell variable so that you can set the uaa target:

export uaa_fqdn="https://<fqdn_for_tas_env>"

Example:

export uaa_fqdn="https://uaa.sys.tas.z9027a87e.shepherd.lease"

And then target this domain:

uaa target "${uaa_fqdn}" --skip-ssl-validation

5. Get the UAA Admin Client credentials from the Ops Manager UI:

1. Click Small Footprint VMware Tanzu Application Service.

2. Go to the Credentials tab and navigate down to the UAA section.

3. Find the Link to Credential under the "Admin Client Credentials" header.

4. Copy the password; you will use it in the next steps.

6. Create a new UAA client with the following scopes to be able to create a new router group:

uaa create-client CLIENT-NAME \
--client_secret "UAA-ADMIN-CLIENT-PASSWORD" \
--authorized_grant_types client_credentials \
--authorities routing.router_groups.read,routing.router_groups.write,routing.routes.read,routing.routes.write

Where

CLIENT-NAME is the name you want to give this new client.

UAA-ADMIN-CLIENT-PASSWORD is the password obtained from Step 5.4 above.

Example:

uaa create-client routing-client \
--client_secret "uekwHHGTwjcYkjwdenmfdhf" \
--authorized_grant_types client_credentials \
--authorities routing.router_groups.read,routing.router_groups.write,routing.routes.read,routing.routes.write

This generates a new client with the permissions required to call the Routing API and create a new router group later.

7. Once the client is created, get the client bearer token to authorize Routing API calls:

uaa get-client-credentials-token CLIENT-NAME -s "UAA-ADMIN-CLIENT-PASSWORD"

Access token successfully fetched and added to context.

8. To display the token so you can use it subsequently when running the Routing API:

uaa context

This prints your token in JSON format, as below:

{
  "client_id": "routing-client",
  "grant_type": "client_credentials",
  "username": "",
  "Token": {
    "access_token": "eyJqa3UiOiJodHRwczovL3VhYSIkqx6M-2GsIJYS0nQ...(truncated)",
    "token_type": "bearer",
    "expiry": "2024-11-21T00:02:07.493058+05:30"
  }
}

This token is used during the API call for creating a router group; it has a default expiry of 24 hours.
