
JSON patch with kubectl has exponential time complexity for nesting depth of manifest #91615

@tapih

Description

What happened:

When updating a deeply nested CRD in a local k8s cluster with kubectl apply,
the command takes more than 5 minutes to finish.
The CRD has a field like corev1.PodSpec.

I ran a brief experiment on how the nesting depth of a manifest affects computation time,
and found that JSON patch creation in kubectl apply has exponential time complexity in the nesting depth.
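
For context on where the time is spent: kubectl apply only computes a patch when the object already exists, and as far as I can tell it builds a three-way merge patch from the last-applied-configuration annotation, the new manifest, and the live object, with the actual diffing done by evanphx/json-patch. Below is a minimal sketch of that step, assuming the client-side path goes through the jsonmergepatch package in k8s.io/apimachinery; the placeholder file names are mine, not kubectl internals.

// Minimal sketch of the patch-creation step, assuming kubectl falls back to a
// three-way JSON merge patch for this object. The input files are placeholders.
package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/jsonmergepatch"
)

func main() {
	// original: the kubectl.kubernetes.io/last-applied-configuration annotation
	// modified: the manifest passed to `kubectl apply -f`
	// current:  the live object fetched from the API server
	original, _ := os.ReadFile("last-applied.json")
	modified, _ := os.ReadFile("manifest.json")
	current, _ := os.ReadFile("live.json")

	start := time.Now()
	patch, err := jsonmergepatch.CreateThreeWayJSONMergePatch(original, modified, current)
	if err != nil {
		panic(err)
	}
	fmt.Printf("created a %d-byte patch in %v\n", len(patch), time.Since(start))
}

This also matches only the second apply being slow in the reproduction below: the first apply is a plain create and never reaches patch creation.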

What you expected to happen:

Updating deeply nested CRDs with kubectl apply should finish within a few seconds, 5s at most.

How to reproduce it (as minimally and precisely as possible):

Please execute kubectl apply -f https://gist.githubusercontent.com/tapih/f339a2d558cf49b0cd6320b7b2f1821b/raw/2fa11c4935f1976059c6517caef647115e1dcbda/crd.yaml twice.
The second invocation is the one that takes too long to finish.

Here are the details of my experiment.

  1. Create a Foo CRD project whose API version is apiextensions.k8s.io/v1 with kubebuilder.
  2. Change the type definition of the CRD so that it has deeply nested slices.
type FooSpec struct {
	// +optional
	T1 []T1 `json:"t1,omitempty"`
}

type T9 struct {
	// +optional
	T8 []T8 `json:"t8,omitempty"`
}

type T8 struct {
	// +optional
	T7 []T7 `json:"t7,omitempty"`
}

type T7 struct {
	// +optional
	T6 []T6 `json:"t6,omitempty"`
}

// ... T6 through T2 are defined in the same way ...

type T1 struct {
	// +optional
	Field string `json:"field"`
}
  3. Run make manifests to generate the CRD manifest (the sketch after this list shows roughly how the generated schema nests).
  4. Create the CRD object with kubectl apply -f ./config/crd/bases/your.domain.name_foo.yaml.
  5. Update the CRD object with kubectl apply -f ./config/crd/bases/your.domain.name_foo.yaml.
  6. Repeat this while incrementing the nesting depth of FooSpec (T1, T2, T3, ..., T8).
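
As mentioned in step 3, the reason the CRD manifest itself is deeply nested is that every []Tn level in the Go types turns into an extra properties -> tn -> items level in the generated openAPIV3Schema. The snippet below only illustrates that shape; it is not controller-gen's actual output.

// Illustration of how the nesting of the Go types carries over into the
// generated openAPIV3Schema. This approximates the shape only.
package main

import (
	"encoding/json"
	"fmt"
)

func buildSchema(depth int) map[string]interface{} {
	// innermost level: T1 with its single string field
	schema := map[string]interface{}{
		"type": "object",
		"properties": map[string]interface{}{
			"field": map[string]interface{}{"type": "string"},
		},
	}
	// wrap one array-of-object level per additional Tn
	for i := 2; i <= depth; i++ {
		schema = map[string]interface{}{
			"type": "object",
			"properties": map[string]interface{}{
				fmt.Sprintf("t%d", i-1): map[string]interface{}{
					"type":  "array",
					"items": schema,
				},
			},
		}
	}
	return schema
}

func main() {
	out, _ := json.MarshalIndent(buildSchema(8), "", "  ")
	fmt.Printf("depth 8 schema (%d bytes):\n%s\n", len(out), out)
}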

The wall-clock time to finish the second kubectl apply -f ./config/crd/bases/your.domain.name_foo.yaml was as follows:

...
T4 => 0.1s
T5 => 1.2s
T6 => 8.7s
T7 => 68s
T8 => 544s
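
Each additional level multiplies the wall-clock time by roughly 7x to 12x, which is what exponential growth in the nesting depth looks like. The measurement can also be sketched outside kubectl. The assumption below is that the hot path is CreateMergePatch in github.com/evanphx/json-patch, the library behind the fix mentioned below; the nested-document builder and the depths are mine, and whether the numbers match depends on the json-patch version in use.

// Rough benchmark of JSON merge patch creation in isolation, assuming the hot
// path is CreateMergePatch from github.com/evanphx/json-patch. buildNested only
// mimics the nesting; it is not the real CRD manifest.
package main

import (
	"encoding/json"
	"fmt"
	"time"

	jsonpatch "github.com/evanphx/json-patch"
)

// buildNested wraps a single leaf value in `depth` levels of
// array-of-object nesting, similar to the schema sketch above.
func buildNested(depth int, leaf string) []byte {
	var doc interface{} = map[string]interface{}{"field": leaf}
	for i := 1; i <= depth; i++ {
		doc = map[string]interface{}{
			fmt.Sprintf("t%d", i): []interface{}{doc},
		}
	}
	b, _ := json.Marshal(doc)
	return b
}

func main() {
	for depth := 4; depth <= 8; depth++ {
		original := buildNested(depth, "a")
		modified := buildNested(depth, "b") // differs in a single leaf

		start := time.Now()
		if _, err := jsonpatch.CreateMergePatch(original, modified); err != nil {
			panic(err)
		}
		fmt.Printf("depth %d: %v\n", depth, time.Since(start))
	}
}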

Anything else we need to know?:

evanphx/json-patch#87 has already resolved this issue.
However, go.mod in kubernetes/kubernetes at master HEAD does not yet pick up that fix.

kubernetes/kubectl#698 reports the same problem,
but it was closed without the problem being fixed.

Environment:

  • Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-02-07T01:05:17Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
  • Hardware: VM on Hyper-V
  • OS: Ubuntu 18.04
  • Kernel: 4.15.0-66-generic
  • Install tools: kind v0.8.1

Labels

kind/bug, lifecycle/rotten, sig/cli
