Description
What happened:
With an exec credential plugin configured in my kubeconfig, and a stale/expired token present in the same file, issuing e.g. kubectl get pod triggers the authentication flow handled by the plugin. Once the flow returns control to kubectl (the plugin prints the ExecCredential object), the new token is stored in the kubeconfig but is not immediately used to resolve the kubectl get pod request, leading to:
error: You must be logged in to the server (Unauthorized)
Immediately issuing kubectl get pod again succeeds, as kubectl now uses the token stored during the first request.
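For reference, this is roughly the plugin side of the flow: a minimal sketch of an exec credential plugin that prints an ExecCredential object to stdout for kubectl to consume. fetchToken is a hypothetical stand-in for the out-of-band OIDC flow; the v1beta1 apiVersion matches what the 1.14-1.17 clients accept.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// execCredential mirrors the client.authentication.k8s.io ExecCredential
// object that kubectl expects on the plugin's stdout.
type execCredential struct {
	APIVersion string               `json:"apiVersion"`
	Kind       string               `json:"kind"`
	Status     execCredentialStatus `json:"status"`
}

type execCredentialStatus struct {
	Token string `json:"token"`
}

// fetchToken is a hypothetical stand-in for the out-of-band OIDC flow
// that obtains a fresh token.
func fetchToken() (string, error) {
	return "example-id-token", nil
}

func main() {
	token, err := fetchToken()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cred := execCredential{
		APIVersion: "client.authentication.k8s.io/v1beta1",
		Kind:       "ExecCredential",
		Status:     execCredentialStatus{Token: token},
	}
	// kubectl parses this JSON from stdout and should use status.token
	// as the bearer token for the pending request.
	if err := json.NewEncoder(os.Stdout).Encode(cred); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```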
What you expected to happen:
That kubectl would pick up the new token fetched by the exec credential plugin and use it already for the first command issued.
How to reproduce it (as minimally and precisely as possible):
- Configure the Kubernetes API server with the OIDC authenticator enabled.
- Configure kubectl to use an exec credential plugin that prints a token received out of band (see the plugin sketch above).
- Verify that the token was indeed written to the kubeconfig file (a quick way to check is sketched after this list).
- See that the first kubectl command fails.
- See that the next identical command succeeds.
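To make the verification step concrete, here is a small sketch using client-go (assuming k8s.io/client-go is available as a module dependency and the kubeconfig lives in the default location) that reports, per user entry, whether a cached token and an exec plugin are both present - the combination that triggers the behavior described above:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig location; adjust as needed.
	path := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for name, user := range cfg.AuthInfos {
		// The failure mode needs both conditions at once: a (possibly
		// stale) token cached in the user entry, and an exec plugin
		// configured for the same user.
		fmt.Printf("user %q: token present=%t, exec plugin configured=%t\n",
			name, user.Token != "", user.Exec != nil)
	}
}
```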
Anything else we need to know?:
This works as expected when no token is previously stored in the kubeconfig - the behavior is only triggered when a stored token has expired or is otherwise not accepted by the API server.
Environment:
- Kubernetes version (use kubectl version): 1.14, also tried with 1.17
- Cloud provider or hardware configuration:
- OS (e.g: cat /etc/os-release): macOS for the kubectl client
- Kernel (e.g. uname -a):
- Install tools:
- Network plugin and version (if this is a network-related bug):
- Others: