What is the issue?
I'm observing many `failed to retrieve job from indexer` warnings in the destination container logs for pods belonging to Jobs:
linkerd2/controller/k8s/metadata_api.go
Lines 230 to 243 in 225ce93
```go
case "Job":
	parentObj, err = api.getByNamespace(Job, pod.Namespace, parent.Name)
	if err != nil {
		log.Warnf("failed to retrieve job from indexer %s/%s: %s", pod.Namespace, parent.Name, err)
		if retry {
			parentObj, err = api.client.
				Resource(batchv1.SchemeGroupVersion.WithResource("jobs")).
				Namespace(pod.Namespace).
				Get(ctx, parent.Name, metav1.GetOptions{})
			if err != nil {
				log.Warnf("failed to retrieve job from direct API call %s/%s: %s", pod.Namespace, parent.Name, err)
			}
		}
	}
```
It appears the metadata API client is only configured for Nodes and ReplicaSets:
linkerd2/controller/cmd/destination/main.go
Line 120 in 225ce93
```go
metadataAPI, err := k8s.InitializeMetadataAPI(*kubeConfigPath, "local", k8s.Node, k8s.RS)
```
How can it be reproduced?
Start some k8s Jobs in a meshed namespace (I don't have an exact repro).
Logs, error output, etc
Seeing these warnings in the destination container:

```
2023-10-24T02:58:13.881735669Z time="2023-10-24T02:58:13Z" level=warning msg="failed to retrieve job from indexer [namespace]/[pod]: metadata informer (6) not configured"
```
output of linkerd check -o short
N/A
Environment
stable-2.14.1
Possible solution
Modify the call to InitializeMetadataAPI to include Job.
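A minimal sketch of that change, assuming `k8s.Job` is the resource constant used elsewhere in the `controller/k8s` package (the `metadata_api.go` snippet above already passes a `Job` constant to `getByNamespace`):

```diff
-	metadataAPI, err := k8s.InitializeMetadataAPI(*kubeConfigPath, "local", k8s.Node, k8s.RS)
+	metadataAPI, err := k8s.InitializeMetadataAPI(*kubeConfigPath, "local", k8s.Node, k8s.RS, k8s.Job)
```

With the Job informer registered, the indexer lookup should succeed and the direct-API fallback (and its warning) should no longer be hit on the happy path.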
Additional context
No response
Would you like to work on fixing this bug?
maybe