
Platform 3 Controllers Module – Architecture ↔ Implementation Mapping
Version: 1.0
Prepared for: Platform 3 Team
Date: Generated automatically

Executive Summary
This document reconstructs the design and ties it directly to the implementation recovered
from the provided screenshots of the GitLab project ‘Platform 3 Controllers Module’ (file:
cluster.py). It provides an end-to-end view of the module’s responsibilities, external
integrations, and data models, then maps each architectural capability to the exact
functions, inputs, outputs, and notable behaviors.

1. Project Overview
The Platform 3 Controllers Module is a backend service layer for lifecycle and governance of
Anthos/GKE-on-prem clusters. It orchestrates cluster inventory reservation/binding,
secrets management via Vault, diagnostics/status via Anthos admin & user kubeconfigs, and
GitLab-backed configuration repositories for declarative cluster state (cluster.yaml, hosts).

Core Capabilities
• Secrets Management (Vault KV v2): list namespaces and keys; CRUD operations with CAS and patch semantics
• Inventory Orchestration: reserve, bind, unbind, and release machines and VIP addresses
• Cluster Status/Diagnostics: infer state by inspecting the latest controller jobs and the Anthos admin/user clusters
• Networking Helpers: flatten VIPs, detect dual-stack, determine external load balancer intent (managed/unmanaged/none)
• DNS Helpers: compute DNS record and domain, determine defaults, check presence
• Config Repository (GitLab): fetch/validate cluster.yaml and hosts; commit generated payloads with audit tags
• Payload Generation: build a full cluster payload from models, labels, address pools (v4/v6), groups, and options
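
As an orientation aid before the detailed mapping, the following is a minimal, hypothetical skeleton of how these capabilities surface on the controller in cluster.py. The method names come from the function index in Section 7; the constructor, attributes, and ellipsized bodies are illustrative assumptions only.

    class ClusterController:
        """Hypothetical surface of the Platform 3 controllers module (sketch only)."""

        def __init__(self, conf, document):
            self.conf = conf            # external endpoints (Vault, GitLab, Inventory)
            self.document = document    # cluster document, see Section 2

        # Secrets Management (Vault KV v2)
        def get_cluster_secrets_client(self, requires_read, requires_write): ...
        def get_namespace_secrets(self, namespace): ...
        def create_namespace_secrets(self, namespace, secret, body): ...

        # Inventory Orchestration
        def reserve_and_bind_inventory(self, control_plane_machines, worker_machines,
                                       lb_vips_qty, eg_vips_qty, needs_ingress_vip,
                                       **kwargs): ...
        def unbind_and_release_inventory(self, assets): ...

        # Status, config repository, payload generation
        def get_cluster_status(self): ...
        def get_config_repo(self): ...
        def generate_cluster_payload(self, exclude_list, ignore_v1_address_pools=False): ...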

External Systems & Facades


• HashiCorp Vault (KV v2): secrets storage for cluster and namespace credentials/values. Referenced in: get_cluster_secrets_client, get_namespace_secrets, create/patch/update/delete_*.
• Inventory Service: reservations and bindings for machines and VIP addresses. Referenced in: reserve_and_bind_inventory, unbind_and_release_inventory.
• GitLab: storage of declarative config (cluster.yaml, hosts) and payload commits. Referenced in: get_config_repo, get_cluster_yaml_from_config_repo, get_hosts_from_config_repo, commit_payload_to_config_repo.
• Anthos Admin/User Kubeconfig: diagnostics and status derivation. Referenced in: get_admin_cluster_kube_config, get_kube_config, get_cluster_status.
• Azure AD (Groups): resolve owner/viewer group IDs for payloads. Referenced in: generate_cluster_payload.
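
For orientation, a minimal sketch of the settings these facades would need. VAULT_URL, VAULT_NAMESPACE, and config_repo_project_id appear elsewhere in this document; the remaining attribute names are assumptions added for illustration.

    from dataclasses import dataclass

    @dataclass
    class ControllerSettings:
        # Vault (KV v2): referenced as self.conf.VAULT_URL / VAULT_NAMESPACE in Section 3.1.
        VAULT_URL: str
        VAULT_NAMESPACE: str
        # GitLab config repository: project id used by get_config_repo (Section 3.5).
        config_repo_project_id: int
        # Assumed additional endpoints/identifiers for the other facades.
        GITLAB_URL: str = ""
        INVENTORY_SERVICE_URL: str = ""
        AZURE_AD_TENANT_ID: str = ""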

2. Data Model Overview


Key entities referenced by the controller and visible in screenshots:
• Cluster (document): name, gcp_project, environment, network labels, control_plane_vip, ingress/egress VIPs, machines, storage_provider, logging_monitoring, dns_* fields.
• Machine: id, name, labels, type (control-plane or worker).
• NetworkAddress: id, address (IPv4/IPv6), use_types (cp_vip, ig_vip, lb_vip, eg_vip), metadata.
• Network hierarchy: data_center, environment, network_space, network_zone, CIDR block(s).
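
A minimal sketch of these entities as Python dataclasses. The field names follow the list above; the types, defaults, and the collapsing of the dns_* fields are assumptions.

    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class Machine:
        id: str
        name: str
        labels: Dict[str, str]
        type: str  # "control-plane" or "worker"

    @dataclass
    class NetworkAddress:
        id: str
        address: str                      # IPv4 or IPv6 literal
        use_types: List[str]              # e.g. ["cp_vip"], ["ig_vip"], ["lb_vip"], ["eg_vip"]
        metadata: Dict[str, str] = field(default_factory=dict)

    @dataclass
    class Cluster:
        name: str
        gcp_project: str
        environment: str
        network_labels: Dict[str, str]
        control_plane_vip: Optional[NetworkAddress] = None
        ingress_vips: List[NetworkAddress] = field(default_factory=list)
        egress_vips: List[NetworkAddress] = field(default_factory=list)
        machines: List[Machine] = field(default_factory=list)
        storage_provider: Optional[str] = None
        logging_monitoring: Optional[dict] = None
        dns_record: Optional[str] = None   # dns_* fields collapsed for brevity
        dns_domain: Optional[str] = None
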
3. Architecture → Implementation Mapping

3.1 Secrets Management (Vault KV v2)


• get_cluster_secrets_client(requires_read, requires_write)
  Responsibility: Initialize Vault KV v2 read/write clients using AppRole per environment and cluster network; validate auth.
  Key Inputs: requires_read: bool, requires_write: bool; self.conf.VAULT_URL, VAULT_NAMESPACE, mount points and paths.
  Key Outputs / Side Effects: dict {'READ': hvac.Client?, 'WRITE': hvac.Client?}; raises ClusterSecretsConnectionInformationError if the path is missing.

• get_namespace_secrets(namespace)
  Responsibility: List secret keys under a namespace path.
  Key Inputs: namespace: str; mount_point=f"kv/{env}/{cluster}".
  Key Outputs / Side Effects: dict {'keys': [...]}.

• get_cluster_secrets()
  Responsibility: List all namespaces in a cluster and their secret keys.
  Key Inputs: none (uses network/env and the mount point).
  Key Outputs / Side Effects: dict {namespace: [secret1, ...]}.

• create_namespace_secrets(namespace, secret, body)
  Responsibility: Create a new secret; fails if it already exists (cas=0).
  Key Inputs: namespace, secret, body (dict).
  Key Outputs / Side Effects: {namespace: secret} or ClusterSecretsCreateError.

• patch_namespace_secrets(namespace, secret, body)
  Responsibility: JSON Merge Patch the secret (Vault KV v2 patch).
  Key Inputs: namespace, secret, body.
  Key Outputs / Side Effects: {namespace: secret} or ClusterSecretsPatchError.

• update_namespace_secrets(namespace, secret, body)
  Responsibility: Read the latest version, then update with CAS for concurrency.
  Key Inputs: namespace, secret, body.
  Key Outputs / Side Effects: {namespace: secret} or ClusterSecretsUpdateError.

• delete_namespace_secrets(namespace, secret)
  Responsibility: Delete the secret's metadata and all versions.
  Key Inputs: namespace, secret.
  Key Outputs / Side Effects: {namespace: secret} or ClusterSecretsDeleteError.
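
A minimal sketch of the Vault interactions described above using the hvac library's KV v2 API. The mount point and path layout follow the table (kv/{env}/{cluster}); the AppRole credentials, helper names, and error handling are simplified assumptions rather than a transcription of cluster.py.

    import hvac

    def make_kv2_client(vault_url, vault_namespace, role_id, secret_id):
        # AppRole login, as described for get_cluster_secrets_client (sketch only).
        client = hvac.Client(url=vault_url, namespace=vault_namespace)
        client.auth.approle.login(role_id=role_id, secret_id=secret_id)
        return client

    def list_namespace_secrets(client, mount_point, namespace):
        # Returns {'keys': [...]} under the namespace path (mirrors get_namespace_secrets).
        return client.secrets.kv.v2.list_secrets(path=namespace, mount_point=mount_point)["data"]

    def create_secret(client, mount_point, namespace, secret, body):
        # cas=0 means "create only if it does not exist yet" (mirrors create_namespace_secrets).
        return client.secrets.kv.v2.create_or_update_secret(
            path=f"{namespace}/{secret}", secret=body, cas=0, mount_point=mount_point)

    def update_secret(client, mount_point, namespace, secret, body):
        # Read the current version first, then write with CAS so concurrent writers
        # conflict instead of silently overwriting (mirrors update_namespace_secrets).
        meta = client.secrets.kv.v2.read_secret_metadata(
            path=f"{namespace}/{secret}", mount_point=mount_point)
        current = meta["data"]["current_version"]
        return client.secrets.kv.v2.create_or_update_secret(
            path=f"{namespace}/{secret}", secret=body, cas=current, mount_point=mount_point)

    def patch_secret(client, mount_point, namespace, secret, body):
        # Merge the supplied keys into the existing secret (mirrors patch_namespace_secrets).
        return client.secrets.kv.v2.patch(
            path=f"{namespace}/{secret}", secret=body, mount_point=mount_point)

    def delete_secret(client, mount_point, namespace, secret):
        # Removes metadata and every version (mirrors delete_namespace_secrets).
        return client.secrets.kv.v2.delete_metadata_and_all_versions(
            path=f"{namespace}/{secret}", mount_point=mount_point)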

3.2 Inventory Orchestration


• reserve_and_bind_inventory(control_plane_machines, worker_machines, lb_vips_qty, eg_vips_qty, needs_ingress_vip, ...)
  Responsibility: Compose the payload, reserve machines and VIPs, and bind the reserved assets to the cluster document.
  Key Inputs: counts or explicit machine arrays, label filters, network labels, data center/env/space/zone, cidr_block.
  Key Outputs / Side Effects: result payload; document mutated with machine and VIP bindings.

• unbind_and_release_inventory(assets)
  Responsibility: Unbind assets (machines or VIPs) from the cluster and release them back to inventory.
  Key Inputs: assets: List[NetworkAddress|Machine].
  Key Outputs / Side Effects: document updated; inventory service release/delete invoked.
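
A minimal, hypothetical sketch of the reserve-then-bind flow. The inventory object and its reserve_machines/reserve_vips/release methods are assumptions made for illustration, not the actual Inventory Service API.

    def reserve_and_bind_inventory(inventory, document, control_plane_qty, worker_qty,
                                   lb_vips_qty, eg_vips_qty, needs_ingress_vip,
                                   label_filters, cidr_block):
        """Sketch: reserve assets from a hypothetical inventory client, then bind
        them onto the cluster document so payload generation can see them."""
        machines = inventory.reserve_machines(
            control_plane=control_plane_qty, workers=worker_qty, labels=label_filters)
        vips = inventory.reserve_vips(
            lb=lb_vips_qty, eg=eg_vips_qty, ingress=needs_ingress_vip, cidr_block=cidr_block)

        # Binding = mutating the document, which is the documented side effect.
        document["machines"] = machines
        document["vips"] = vips
        return {"machines": machines, "vips": vips}

    def unbind_and_release_inventory(inventory, document, assets):
        """Sketch: detach each asset from the document and release it back to inventory."""
        asset_ids = {a["id"] for a in assets}
        document["machines"] = [m for m in document.get("machines", []) if m["id"] not in asset_ids]
        document["vips"] = [v for v in document.get("vips", []) if v["id"] not in asset_ids]
        for asset_id in asset_ids:
            inventory.release(asset_id)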

3.3 Cluster Status & Diagnostics


• get_admin_cluster_kube_config() / get_kube_config()
  Responsibility: Fetch kubeconfigs (admin/user) from Vault paths; return the kubeconfig material as a dict.
  Key Inputs: Vault mount points and per-DC/cluster paths.
  Key Outputs / Side Effects: kubeconfig dict or None.

• get_cluster_status()
  Responsibility: Infer status from the latest jobs and Anthos diagnostics.
  Key Inputs: jobs (Queued/Running/Completed), admin/user cluster lists and nodes.
  Key Outputs / Side Effects: "PROVISIONING", "UPDATING", "UPGRADING", "DEPROVISIONING", "OPERATIONAL", "NONOPERATIONAL", "ORPHANED", "UNKNOWN".

3.4 Helper Properties (Networking & DNS)


• control_plane_machines / worker_machines
  Responsibility: Return machine names from the document.
  Key Inputs: none (reads the document).
  Key Outputs / Side Effects: List[str].

• load_balancer_vips / ingress_vips / egress_vips
  Responsibility: Flatten NetworkAddress objects to IP list(s).
  Key Inputs: none (reads the document).
  Key Outputs / Side Effects: List[str].

• is_dual_stack
  Responsibility: Check the network controller capability and the 'dual_stack' label.
  Key Inputs: network_controller, labels.
  Key Outputs / Side Effects: bool.

• is_managed_external_load_balancer / is_unmanaged_external_load_balancer / is_none_external_load_balancer / desired_external_load_balancer_type
  Responsibility: Determine external load balancer intent via labels.
  Key Inputs: network_labels.
  Key Outputs / Side Effects: bool or 'managed'|'unmanaged'|'none'.

• dns_record / dns_domain / is_default_dns_domain / has_dns
  Responsibility: Derive the DNS key and domain from fields or network defaults.
  Key Inputs: document.dns_* and control_plane_vip.
  Key Outputs / Side Effects: str|bool.
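
A minimal sketch of a few of these helpers as Python properties on the controller. The attribute layout (self.document, self.network_controller, self.labels, self.network_labels) and the label values are assumptions used for illustration.

    class ClusterHelpers:
        """Sketch of networking/DNS helper properties (assumed attribute layout)."""

        @property
        def load_balancer_vips(self):
            # Flatten NetworkAddress objects tagged lb_vip into a plain list of IPs.
            return [a["address"] for a in self.document.get("vips", [])
                    if "lb_vip" in a.get("use_types", [])]

        @property
        def is_dual_stack(self):
            # True only if the network controller supports it and the label opts in.
            return bool(self.network_controller.get("dual_stack_capable")
                        and self.labels.get("dual_stack") == "true")

        @property
        def desired_external_load_balancer_type(self):
            # 'managed' | 'unmanaged' | 'none', driven purely by network labels.
            return self.network_labels.get("external_load_balancer", "none")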

3.5 Config Repository (GitLab)


• get_config_repo()
  Responsibility: Resolve the GitLab project by its configured ID.
  Key Inputs: config_repo_project_id (int).
  Key Outputs / Side Effects: gitlab project object or LookupError.

• get_cluster_yaml_from_config_repo()
  Responsibility: Load and parse inventory/group_vars/GKEv2/cluster.yaml.
  Key Inputs: GitLab file path and branch.
  Key Outputs / Side Effects: dict of parsed YAML; raises if missing.

• get_hosts_from_config_repo()
  Responsibility: Load and parse inventory/hosts (ini).
  Key Inputs: GitLab file path and branch.
  Key Outputs / Side Effects: dict {section: {k: v}}; raises if missing.

• commit_payload_to_config_repo(commit_message, commit_tag, user_defined_cluster_payload, updated_load_balancer)
  Responsibility: Commit generated payload changes back to the config repo.
  Key Inputs: payload dict, tag, strategy.
  Key Outputs / Side Effects: commit summary dict.

• validate_cluster_yaml(cluster_yaml)
  Responsibility: Schema validation for cluster.yaml using a pydantic-like model.
  Key Inputs: cluster_yaml dict.
  Key Outputs / Side Effects: normalized JSON dict or validation errors.
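
A minimal sketch of the GitLab interactions using the python-gitlab library. The file path follows the table above; the URL, token handling, branch name, and single-action commit layout are assumptions.

    import gitlab
    import yaml

    def get_config_repo(gitlab_url, token, config_repo_project_id):
        # Resolve the GitLab project by its configured numeric ID.
        gl = gitlab.Gitlab(gitlab_url, private_token=token)
        return gl.projects.get(config_repo_project_id)

    def get_cluster_yaml_from_config_repo(project, branch="main"):
        # Fetch and parse inventory/group_vars/GKEv2/cluster.yaml from the given branch.
        f = project.files.get(file_path="inventory/group_vars/GKEv2/cluster.yaml", ref=branch)
        return yaml.safe_load(f.decode())

    def commit_payload_to_config_repo(project, commit_message, payload_yaml, branch="main"):
        # Commit the regenerated cluster.yaml back to the config repo in one commit.
        return project.commits.create({
            "branch": branch,
            "commit_message": commit_message,
            "actions": [{
                "action": "update",
                "file_path": "inventory/group_vars/GKEv2/cluster.yaml",
                "content": payload_yaml,
            }],
        })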

3.6 Cluster Payload Generation


• generate_cluster_payload(exclude_list, ignore_v1_address_pools=False)
  Responsibility: Produce the full declarative payload from the document, groups, v4/v6 pools, ELB intent, DNS, logging, storage, and labels.
  Key Inputs: exclude_list, Azure AD group lookups, address pools, network hierarchy.
  Key Outputs / Side Effects: dict payload used for commits and downstream automation.
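
A minimal sketch of payload assembly under assumed document field names and payload keys; resolve_group_id stands in for the Azure AD group lookup, and the exact effect of ignore_v1_address_pools is not reconstructed here.

    def generate_cluster_payload(document, resolve_group_id, exclude_list=(),
                                 ignore_v1_address_pools=False):
        """Sketch: assemble a declarative cluster payload from the document."""
        payload = {
            "name": document["name"],
            "gcp_project": document["gcp_project"],
            "environment": document["environment"],
            "machines": document.get("machines", []),
            "address_pools": {
                "v4": document.get("address_pools_v4", []),
                "v6": document.get("address_pools_v6", []),
            },
            "groups": {
                "owners": resolve_group_id(document["owner_group"]),
                "viewers": resolve_group_id(document["viewer_group"]),
            },
            "dns": {"record": document.get("dns_record"),
                    "domain": document.get("dns_domain")},
            "labels": document.get("labels", {}),
        }
        # Drop any keys the caller asked to exclude before returning.
        for key in exclude_list:
            payload.pop(key, None)
        return payload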

4. Primary Flows
Provisioning (happy path; sketched in code at the end of this section):

1) The user defines cluster metadata and requests inventory reservation (controller.reserve_and_bind_inventory).

2) The controller composes the payload and calls the Inventory Service; pending jobs are logged, otherwise the bindings are applied to the document.

3) Secrets are created for namespaces and components as needed via create_namespace_secrets.

4) The config repo is updated: generate_cluster_payload → validate_cluster_yaml → commit_payload_to_config_repo.

5) Status reflects job activity (PROVISIONING) and transitions to OPERATIONAL once Anthos reports cluster nodes for the user cluster.

Deprovisioning:

1) The controller unbinds and releases assets (unbind_and_release_inventory).

2) Secrets are deleted (delete_namespace_secrets) if policy requires.

3) Config repo changes are committed with a decommission tag.

4) Status transitions via job tracking to DEPROVISIONING, then DELETED.
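
A minimal end-to-end sketch of the provisioning happy path, chaining the controller calls in the order listed above. The argument values and the shape of namespace_secrets are placeholders.

    def provision_cluster(controller, reservation_kwargs, namespace_secrets):
        """Sketch: provisioning happy path wired end to end (placeholder arguments)."""
        # 1-2) Reserve inventory and bind it onto the cluster document.
        controller.reserve_and_bind_inventory(**reservation_kwargs)

        # 3) Create any secrets the components need.
        for namespace, secrets in namespace_secrets.items():
            for secret, body in secrets.items():
                controller.create_namespace_secrets(namespace, secret, body)

        # 4) Regenerate, validate, and commit the declarative config.
        payload = controller.generate_cluster_payload(exclude_list=[])
        controller.validate_cluster_yaml(payload)
        controller.commit_payload_to_config_repo(
            commit_message="provision cluster",
            commit_tag="provision",
            user_defined_cluster_payload=payload,
            updated_load_balancer=None,
        )

        # 5) Status should report PROVISIONING, then OPERATIONAL once Anthos sees nodes.
        return controller.get_cluster_status()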

5. Identified Gaps vs. Current Scope


• Authorization granularity: decorators indicate READ/WRITE/ADMIN checks, but role mapping for fine-grained secret CRUD and inventory actions is not centrally expressed.

• Error taxonomy: custom exceptions exist, but many code paths raise generic Exception; standardize and surface error codes for UI/automation.

• Observability: extensive log_info/warning calls, but no structured tracing/metrics; add correlation IDs and OpenTelemetry spans across external calls.

• Idempotency: update/patch paths rely on last-version CAS; provide idempotency keys for inventory reservations and GitLab commits.

• Validation coverage: the schema covers cluster.yaml; add validation for hosts.ini, secret bodies (per component contracts), and IP pool overlaps.

• Testing: add unit tests with fake Vault/GitLab/Inventory plus contract tests for payload generation.

• Configuration: hard-coded repo paths (inventory/group_vars/GKEv2/…); parameterize via settings and support per-env branches.

• Bulk operations: listing cluster secrets is O(N namespaces); add paged/filtered queries and export/import routines.

6. Proposed Scope (Phase-wise)


Phase 1 – Stabilization
– Introduce structured error types and consistent returns

– Wrap Vault/GitLab/Inventory calls with retries and timeouts (see the sketch at the end of this section)

– Add telemetry (metrics + traces) and correlation IDs

– Finalize schema for cluster.yaml and secrets; add validators

Phase 2 – UX & Productivity

– Generate diff previews for GitLab commits

– Bulk secrets import/export with validation

– Role mapping UI for permissions; audit trails

Phase 3 – Scale & Resilience

– Idempotency keys for reservations/commits

– Async job runner with backoff & dead-letter

– Pagination and filtering for large secret sets
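
As an illustration of the retry-and-timeout wrapping proposed in Phase 1, a minimal decorator sketch in plain Python. The attempt count, backoff, and retried exception types are assumptions; per-call timeouts would still be passed to the underlying clients (e.g. request timeout parameters) rather than enforced here.

    import functools
    import time

    def with_retries(attempts=3, backoff_seconds=1.0, retry_on=(Exception,)):
        """Sketch: retry a Vault/GitLab/Inventory call with simple exponential backoff."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                delay = backoff_seconds
                for attempt in range(1, attempts + 1):
                    try:
                        return func(*args, **kwargs)
                    except retry_on:
                        if attempt == attempts:
                            raise          # out of attempts: surface the original error
                        time.sleep(delay)  # back off before the next attempt
                        delay *= 2
            return wrapper
        return decorator

    # Usage sketch: decorate an external-call helper with @with_retries(attempts=3).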

7. Function Index
• get_kube_config, get_admin_cluster_kube_config
• get_cluster_secrets_client, get_namespace_secrets, get_cluster_secrets
• create_namespace_secrets, patch_namespace_secrets, update_namespace_secrets, delete_namespace_secrets
• reserve_and_bind_inventory, unbind_and_release_inventory
• get_cluster_status
• control_plane_machines, worker_machines, load_balancer_vips, ingress_vips, egress_vips
• is_dual_stack, is_managed_external_load_balancer, is_unmanaged_external_load_balancer, is_none_external_load_balancer, desired_external_load_balancer_type
• dns_record, dns_domain, is_default_dns_domain, has_dns
• generate_cluster_payload
• get_config_repo, get_cluster_yaml_from_config_repo, get_hosts_from_config_repo, commit_payload_to_config_repo, validate_cluster_yaml
