12:56:06 STEP: Running BeforeAll block for EntireTestsuite K8sServicesTest Checks ClusterIP Connectivity
12:56:07 STEP: WaitforPods(namespace="default", filter="-l zgroup=testapp")
12:56:10 STEP: WaitforPods(namespace="default", filter="-l zgroup=testapp") => <nil>
12:56:10 STEP: WaitforPods(namespace="default", filter="-l name=echo")
12:56:10 STEP: WaitforPods(namespace="default", filter="-l name=echo") => <nil>
12:56:10 STEP: testing connectivity via cluster IP 10.99.144.86
FAIL: cannot curl to service IP from host: time-> DNS: '0.000016()', Connect: '0.000000',Transfer '0.000000', total '5.001225'command terminated with exit code 28
Expected command: kubectl exec -n kube-system log-gatherer-87scr -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://10.99.144.86:80/ -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 28
Err: exit status 28
Stdout:
time-> DNS: '0.000016()', Connect: '0.000000',Transfer '0.000000', total '5.001225'
Stderr:
command terminated with exit code 28
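The failed probe above can be replayed by hand when triaging the flake. The sketch below simply rebuilds the exact `kubectl exec`/`curl` command from the failure output; `SVC_IP` and `GATHERER` are the values observed in this particular run and will differ between runs, so substitute the current ones before executing against a live cluster.

```shell
#!/bin/sh
# Rebuild the ClusterIP connectivity probe the test ran (values from this run;
# they will differ on another cluster/run).
SVC_IP="10.99.144.86"
GATHERER="log-gatherer-87scr"
CURL_FMT="time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"

# Assemble and print the command; run it only when pointed at the affected cluster.
CMD=$(printf 'kubectl exec -n kube-system %s -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://%s:80/ -w "%s"' "$GATHERER" "$SVC_IP" "$CURL_FMT")
echo "$CMD"
```

Exit code 28 from curl means the operation timed out (here, no connect within the 5-second `--connect-timeout`), which is why the `Connect:` timing stays at `0.000000` while `total` lands just past five seconds.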
=== Test Finished at 2021-11-17T12:56:15Z====
12:56:15 STEP: Running JustAfterEach block for EntireTestsuite K8sServicesTest
===================== TEST FAILED =====================
12:56:15 STEP: Running AfterFailed block for EntireTestsuite K8sServicesTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cilium-monitoring grafana-5747bcc8f9-9j4jg 1/1 Running 0 69m 10.0.0.160 k8s1 <none> <none>
cilium-monitoring prometheus-655fb888d7-krw6p 1/1 Running 0 69m 10.0.0.6 k8s1 <none> <none>
default app1-6bf9bf9bd5-4c7nv 2/2 Running 0 10s 10.0.0.80 k8s1 <none> <none>
default app1-6bf9bf9bd5-vtlc7 2/2 Running 0 10s 10.0.0.226 k8s1 <none> <none>
default app2-58757b7dd5-vflb2 1/1 Running 0 10s 10.0.0.194 k8s1 <none> <none>
default app3-5d69599cdd-c8xsl 1/1 Running 0 10s 10.0.0.238 k8s1 <none> <none>
default echo-55fdf5787d-2vjf9 2/2 Running 0 10s 10.0.1.213 k8s2 <none> <none>
default echo-55fdf5787d-cvrhk 2/2 Running 0 10s 10.0.1.88 k8s2 <none> <none>
kube-system cilium-bk9t4 1/1 Running 0 29s 192.168.56.11 k8s1 <none> <none>
kube-system cilium-kn78w 1/1 Running 0 29s 192.168.56.12 k8s2 <none> <none>
kube-system cilium-operator-79d45c8cb4-d5b5p 1/1 Running 0 29s 192.168.56.11 k8s1 <none> <none>
kube-system cilium-operator-79d45c8cb4-lprsw 1/1 Running 0 29s 192.168.56.12 k8s2 <none> <none>
kube-system coredns-755cd654d4-vsxln 1/1 Running 0 36m 10.0.1.168 k8s2 <none> <none>
kube-system etcd-k8s1 1/1 Running 0 71m 192.168.56.11 k8s1 <none> <none>
kube-system kube-apiserver-k8s1 1/1 Running 0 71m 192.168.56.11 k8s1 <none> <none>
kube-system kube-controller-manager-k8s1 1/1 Running 0 71m 192.168.56.11 k8s1 <none> <none>
kube-system kube-proxy-lhv6m 1/1 Running 0 70m 192.168.56.12 k8s2 <none> <none>
kube-system kube-proxy-qf4r6 1/1 Running 0 71m 192.168.56.11 k8s1 <none> <none>
kube-system kube-scheduler-k8s1 1/1 Running 0 71m 192.168.56.11 k8s1 <none> <none>
kube-system log-gatherer-87scr 1/1 Running 0 69m 192.168.56.11 k8s1 <none> <none>
kube-system log-gatherer-mdq9g 1/1 Running 0 69m 192.168.56.12 k8s2 <none> <none>
kube-system registry-adder-gf79f 1/1 Running 0 69m 192.168.56.11 k8s1 <none> <none>
kube-system registry-adder-mb6xn 1/1 Running 0 69m 192.168.56.12 k8s2 <none> <none>
Stderr:
Fetching command output from pods [cilium-bk9t4 cilium-kn78w]
cmd: kubectl exec -n kube-system cilium-bk9t4 -c cilium-agent -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.96.0.1:443 ClusterIP 1 => 192.168.56.11:6443
2 10.96.0.10:53 ClusterIP 1 => 10.0.1.168:53
3 10.96.0.10:9153 ClusterIP 1 => 10.0.1.168:9153
4 10.111.123.112:3000 ClusterIP 1 => 10.0.0.160:3000
5 10.100.212.255:9090 ClusterIP 1 => 10.0.0.6:9090
6 10.99.144.86:80 ClusterIP 1 => 10.0.0.80:80
2 => 10.0.0.226:80
7 10.99.144.86:69 ClusterIP 1 => 10.0.0.80:69
2 => 10.0.0.226:69
8 10.104.134.202:80 ClusterIP 1 => 10.0.1.213:80
2 => 10.0.1.88:80
9 10.104.134.202:69 ClusterIP 1 => 10.0.1.213:69
2 => 10.0.1.88:69
10 [fd03::1a89]:69 ClusterIP 1 => [fd02::6d]:69
2 => [fd02::db]:69
11 [fd03::1a89]:80 ClusterIP 1 => [fd02::6d]:80
2 => [fd02::db]:80
12 [fd03::4e6c]:80 ClusterIP 1 => [fd02::196]:80
2 => [fd02::137]:80
13 [fd03::4e6c]:69 ClusterIP 1 => [fd02::196]:69
2 => [fd02::137]:69
14 10.103.244.8:69 ClusterIP 1 => 10.0.1.213:69
2 => 10.0.1.88:69
15 10.103.244.8:80 ClusterIP 1 => 10.0.1.213:80
2 => 10.0.1.88:80
16 [fd03::971a]:80 ClusterIP 1 => [fd02::196]:80
2 => [fd02::137]:80
17 [fd03::971a]:69 ClusterIP 1 => [fd02::196]:69
2 => [fd02::137]:69
Stderr:
cmd: kubectl exec -n kube-system cilium-bk9t4 -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
164 Disabled Disabled 42550 k8s:appSecond=true fd02::b2 10.0.0.194 ready
k8s:id=app2
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app2-account
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
333 Disabled Disabled 53046 k8s:id=app1 fd02::db 10.0.0.226 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app1-account
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
529 Disabled Disabled 12722 k8s:app=grafana fd02::f4 10.0.0.160 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=cilium-monitoring
686 Disabled Disabled 22293 k8s:id=app3 fd02::7b 10.0.0.238 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
1422 Disabled Disabled 30554 k8s:app=prometheus fd02::3d 10.0.0.6 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s
k8s:io.kubernetes.pod.namespace=cilium-monitoring
3013 Disabled Disabled 4 reserved:health fd02::d7 10.0.0.74 ready
3100 Disabled Disabled 53046 k8s:id=app1 fd02::6d 10.0.0.80 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app1-account
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
3828 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready
k8s:node-role.kubernetes.io/control-plane
k8s:node-role.kubernetes.io/master
k8s:node.kubernetes.io/exclude-from-external-load-balancers
reserved:host
Stderr:
cmd: kubectl exec -n kube-system cilium-kn78w -c cilium-agent -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.96.0.1:443 ClusterIP 1 => 192.168.56.11:6443
2 10.96.0.10:53 ClusterIP 1 => 10.0.1.168:53
3 10.96.0.10:9153 ClusterIP 1 => 10.0.1.168:9153
4 10.111.123.112:3000 ClusterIP 1 => 10.0.0.160:3000
5 10.100.212.255:9090 ClusterIP 1 => 10.0.0.6:9090
6 10.99.144.86:80 ClusterIP 1 => 10.0.0.80:80
2 => 10.0.0.226:80
7 10.99.144.86:69 ClusterIP 1 => 10.0.0.80:69
2 => 10.0.0.226:69
8 10.104.134.202:80 ClusterIP 1 => 10.0.1.213:80
2 => 10.0.1.88:80
9 10.104.134.202:69 ClusterIP 1 => 10.0.1.213:69
2 => 10.0.1.88:69
10 [fd03::1a89]:69 ClusterIP 1 => [fd02::6d]:69
2 => [fd02::db]:69
11 [fd03::1a89]:80 ClusterIP 1 => [fd02::6d]:80
2 => [fd02::db]:80
12 [fd03::4e6c]:80 ClusterIP 1 => [fd02::196]:80
2 => [fd02::137]:80
13 [fd03::4e6c]:69 ClusterIP 1 => [fd02::196]:69
2 => [fd02::137]:69
14 10.103.244.8:80 ClusterIP 1 => 10.0.1.213:80
2 => 10.0.1.88:80
15 10.103.244.8:69 ClusterIP 1 => 10.0.1.213:69
2 => 10.0.1.88:69
16 [fd03::971a]:80 ClusterIP 1 => [fd02::196]:80
2 => [fd02::137]:80
17 [fd03::971a]:69 ClusterIP 1 => [fd02::196]:69
2 => [fd02::137]:69
Stderr:
cmd: kubectl exec -n kube-system cilium-kn78w -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
128 Disabled Disabled 21726 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system fd02::1f5 10.0.1.168 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
863 Enabled Enabled 304 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default fd02::196 10.0.1.213 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:name=echo
2291 Disabled Disabled 4 reserved:health fd02::1db 10.0.1.202 ready
2969 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s2 ready
k8s:status=lockdown
reserved:host
3056 Enabled Enabled 304 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default fd02::137 10.0.1.88 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:name=echo
Stderr:
===================== Exiting AfterFailed =====================
12:56:27 STEP: Running AfterEach for block EntireTestsuite K8sServicesTest
12:56:27 STEP: Running AfterEach for block EntireTestsuite
[[ATTACHMENT|ac21a6cd_K8sServicesTest_Checks_ClusterIP_Connectivity_Checks_service_on_same_node.zip]]
ZIP Links:
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.9//173/artifact/25b7f8f0_K8sLRPTests_Checks_local_redirect_policy.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.9//173/artifact/753fdde2_K8sUpdates_Tests_upgrade_and_downgrade_from_a_Cilium_stable_image_to_master.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.9//173/artifact/ac21a6cd_K8sServicesTest_Checks_ClusterIP_Connectivity_Checks_service_on_same_node.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.9//173/artifact/test_results_Cilium-PR-K8s-1.22-kernel-4.9_173_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.9/173/
If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.