Kernel (e.g. uname -a): Linux gke-wordpress-cluster-default-pool-b41e0322-m764 4.4.21+ #1 SMP Fri Feb 17 15:34:45 PST 2017 x86_64 Intel(R) Xeon(R) CPU @ 2.60GHz GenuineIntel GNU/Linux
Install tools:
Others:
What happened:
Warning FailedMount Unable to mount volumes for pod "wordpress-4199438522-50xjb_default(5603b982-0ef2-11e7-9fd7-42010a80002d)": timeout expired waiting for volumes to attach/mount for pod "default"/"wordpress-4199438522-50xjb". list of unattached/unmounted volumes=[wordpress-persistent-storage]
50s 50s 1 {kubelet gke-wordpress-cluster-default-pool-b41e0322-m764} Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"wordpress-4199438522-50xjb". list of unattached/unmounted volumes=[wordpress-persistent-storage]
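For anyone hitting the same timeout, the event history for the stuck pod can be pulled roughly like this (a sketch, not part of my original report; the pod name is the one from the errors above and will differ per run, and the block is guarded so it just prints a message where `kubectl` is unavailable):

```shell
# Dump the stuck pod's events; the pod name is taken from the error above.
POD=wordpress-4199438522-50xjb
if command -v kubectl >/dev/null 2>&1; then
  # The Events section of "kubectl describe" shows the FailedMount/FailedSync history
  kubectl describe pod "$POD" 2>/dev/null | sed -n '/Events:/,$p'
  # Cross-check against the namespace-wide event stream
  kubectl get events --namespace default 2>/dev/null | grep "$POD" || true
else
  echo "kubectl not available here"
fi
```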
I was able to bring up wordpress fine the first time, except that GKE wasn't creating load-balancer IPs due to a quota issue, which I resolved; at that point the mysql pod was up and had attached its volume. After deleting the wordpress deployment and creating it again, I started getting the above errors. I deleted the mysql pod as well and brought it up again, and it had the same issue.
The volumes are backed by a gluster cluster on GCE. Looking at the brick logs on one of the gluster nodes, I see:
[2017-03-22 09:42:14.354542] I [MSGID: 115029] [server-handshake.c:612:server_setvolume] 0-gluster_vol-1-server: accepted client from gluster-1-7439-2017/03/22-09:42:10:325146-gluster_vol-1-client-0-0-0 (version: 3.7.6)
[2017-03-22 09:42:46.355221] I [MSGID: 115029] [server-handshake.c:612:server_setvolume] 0-gluster_vol-1-server: accepted client from gke-wordpress-cluster-default-pool-b41e0322-m764-2447-2017/03/22-09:42:46:301893-gluster_vol-1-client-0-0-0 (version: 3.7.6)
[2017-03-22 09:42:57.316248] I [MSGID: 115029] [server-handshake.c:612:server_setvolume] 0-gluster_vol-1-server: accepted client from gke-wordpress-cluster-default-pool-b41e0322-m764-2730-2017/03/22-09:42:57:272881-gluster_vol-1-client-0-0-0 (version: 3.7.6)
[2017-03-22 10:03:29.117920] I [MSGID: 115036] [server.c:552:server_rpc_notify] 0-gluster_vol-1-server: disconnecting connection from gke-wordpress-cluster-default-pool-b41e0322-m764-2730-2017/03/22-09:42:57:272881-gluster_vol-1-client-0-0-0
[2017-03-22 10:03:29.117984] I [MSGID: 101055] [client_t.c:419:gf_client_unref] 0-gluster_vol-1-server: Shutting down connection gke-wordpress-cluster-default-pool-b41e0322-m764-2730-2017/03/22-09:42:57:272881-gluster_vol-1-client-0-0-0
[2017-03-22 10:45:53.074843] I [MSGID: 115036] [server.c:552:server_rpc_notify] 0-gluster_vol-1-server: disconnecting connection from gke-wordpress-cluster-default-pool-b41e0322-m764-2447-2017/03/22-09:42:46:301893-gluster_vol-1-client-0-0-0
[2017-03-22 10:45:53.074905] I [MSGID: 115013] [server-helpers.c:294:do_fd_cleanup] 0-gluster_vol-1-server: fd cleanup on /mysql/ib_logfile1
[2017-03-22 10:45:53.074942] I [MSGID: 115013] [server-helpers.c:294:do_fd_cleanup] 0-gluster_vol-1-server: fd cleanup on /mysql/ib_logfile0
[2017-03-22 10:45:53.074997] I [MSGID: 115013] [server-helpers.c:294:do_fd_cleanup] 0-gluster_vol-1-server: fd cleanup on /mysql/ibdata1
[2017-03-22 10:45:53.075112] I [MSGID: 101055] [client_t.c:419:gf_client_unref] 0-gluster_vol-1-server: Shutting down connection gke-wordpress-cluster-default-pool-b41e0322-m764-2447-2017/03/22-09:42:46:301893-gluster_vol-1-client-0-0-0
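The connect/disconnect pairs in those brick logs can be correlated mechanically to see how long each client session lasted. A minimal sketch (the regex and the event parsing are my own approximation of the log format above, not any gluster tooling):

```python
import re
from datetime import datetime

# Matches the bracketed UTC timestamp plus the connect/disconnect phrases
# seen in the brick log above; this regex is an approximation, not a format spec.
LINE_RE = re.compile(
    r"\[(?P<ts>\d{4}-\d{2}-\d{2} [\d:.]+)\].*?"
    r"(?P<event>accepted client from|disconnecting connection from) (?P<client>\S+)"
)

def parse_events(log_text):
    """Return (timestamp, 'connect' or 'disconnect', client-id) tuples."""
    events = []
    for line in log_text.splitlines():
        m = LINE_RE.search(line)
        if m:
            ts = datetime.strptime(m.group("ts"), "%Y-%m-%d %H:%M:%S.%f")
            kind = "connect" if m.group("event").startswith("accepted") else "disconnect"
            events.append((ts, kind, m.group("client")))
    return events

# Two of the lines from the brick log above, for the same client.
sample = (
    "[2017-03-22 09:42:46.355221] I [MSGID: 115029] [server-handshake.c:612:server_setvolume] "
    "0-gluster_vol-1-server: accepted client from gke-wordpress-cluster-default-pool-b41e0322-"
    "m764-2447-2017/03/22-09:42:46:301893-gluster_vol-1-client-0-0-0 (version: 3.7.6)\n"
    "[2017-03-22 10:45:53.074843] I [MSGID: 115036] [server.c:552:server_rpc_notify] "
    "0-gluster_vol-1-server: disconnecting connection from gke-wordpress-cluster-default-pool-"
    "b41e0322-m764-2447-2017/03/22-09:42:46:301893-gluster_vol-1-client-0-0-0\n"
)

for ts, kind, client in parse_events(sample):
    print(ts.isoformat(), kind, client)
```

Running this over the full brick log shows the node's client connected at 09:42 and was disconnected by the server about an hour later, which matches the mount timeouts kubelet reported.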
I've tried restarting the kubelet on the node. I can't find the kubelet log file on any of the GKE nodes, and I don't know how to get the kube-controller-manager and apiserver logs from the master (GKE).
I suspect it's a failure of the glusterfs client on the GKE nodes?
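One quick way to check that theory (a sketch, added for context: kubelet mounts these volumes by shelling out to the host's GlusterFS client, so the client package has to be present on the node):

```shell
# Run on a GKE node: kubelet needs the glusterfs client on the host to
# mount these volumes; report whether it is installed and which version.
if command -v glusterfs >/dev/null 2>&1; then
  glusterfs --version | head -n 1
else
  echo "glusterfs client not found on this node"
fi
```

A version mismatch between the node's client and the 3.7.6 servers in the brick logs would also be worth noting here.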
What you expected to happen:
Deployment to mount the volumes successfully.
How to reproduce it (as minimally and precisely as possible):
Not sure how, but I've run into this intermittently.
Anything else we need to know:
The volumes are backed by gluster.
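For reference, a minimal sketch of how such a volume is typically wired up in Kubernetes (the endpoint name, IP, and capacity below are placeholders I've assumed; only the volume path `gluster_vol-1` comes from the brick logs above):

```yaml
# Endpoints pointing kubelet at the GCE gluster nodes (IP is a placeholder)
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 10.240.0.2
    ports:
      - port: 1
---
# PersistentVolume backed by the gluster volume seen in the brick logs
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wordpress-persistent-storage
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: gluster_vol-1
    readOnly: false
```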
Kubernetes version (use kubectl version):
Environment: