Make hubble-ui work with IPv6 #7

Merged
michi-covalent merged 1 commit into cilium:master from bengentil:add-ipv6-support
Feb 23, 2020

Conversation

@bengentil
Contributor

Context

Currently, IPv6 is broken for multiple reasons:

  1. The kubernetes-client needs at least 0.11.1 to construct a correct URI with IPv6
     (see "in-cluster config builder does not work properly in an ipv6 cluster", kubernetes-client/javascript#380)
  2. dns.resolve4 is used in dbHubble.ts:244, which only ever resolves A records
  3. If the kubernetes URI contains an IPv6 address plus the 443 port, the request will fail TLS verification
     (see "Bad Host header when using ipv6 and the default protocol ports", request/request#3274)

Scope

This PR addresses the first two points; the third still needs to be fixed in request.
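As a sketch of the second fix (not the actual hubble-ui diff; resolveHost is a hypothetical helper), Node's dns.lookup returns whichever address family the name actually resolves to, instead of forcing A records the way dns.resolve4 does:

```typescript
import { promises as dns } from 'dns';

// Resolve a host to whichever family it actually has (A or AAAA),
// bracketing IPv6 literals so they can be embedded in a URL.
async function resolveHost(host: string): Promise<string> {
  const { address, family } = await dns.lookup(host);
  return family === 6 ? `[${address}]` : address;
}

resolveHost('::1').then(addr => console.log(addr)); // '[::1]'
```

dns.lookup also short-circuits on IP literals, so a host that is already an IPv6 address comes back bracketed and ready to splice into a URI.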

@michi-covalent (Collaborator) left a comment

thanks for the pr @bengentil! it looks good. could you update the commit message:

  • add Signed-off-by using the --signoff option for git commit, although we haven't been doing this for hubble-ui.
  • include the description in the commit message, like the context section you provided in the PR description.

Currently IPv6 is broken for multiple reasons:

1. The kubernetes-client needs at least 0.11.1 to construct a correct URI with IPv6
   (see kubernetes-client/javascript#380)
2. dns.resolve4 is used in dbHubble.ts:244
3. If the kubernetes URI contains an IPv6 address plus the 443 port, the request will fail TLS verification
   (see request/request#3274) (not included)

The third point will be addressed in another PR, once request/request#3274 is fixed.

Signed-off-by: Benjamin Gentil <[email protected]>
@bengentil
Contributor Author

Done.

I've added the 3rd point but mentioned that it's not included, since that could help people understand why it still doesn't work when the kubernetes URI is something like https://[2001:db8::1]:443.

@michi-covalent (Collaborator) left a comment

thank you @bengentil!

@michi-covalent michi-covalent merged commit c59bf51 into cilium:master Feb 23, 2020
@Dixon3

Dixon3 commented Feb 25, 2020

I updated to the new version, and now the logs show:

{"name":"frontend","hostname":"hubble-ui-77fc9894f5-l6fq6","pid":18,"req_id":"2706d57f-4c8d-4b6d-a29a-c67164ea6b0a","user":"admin@localhost","level":50,"err":{"message":"Can't fetch namespaces via k8s api: Error [ERR_TLS_CERT_ALTNAME_INVALID]: Hostname/IP does not match certificate's altnames: Host: fd03. is not in the cert's altnames: DNS:kube09ms01cn.novalocal, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:kube09ms01cn, DNS:localhost, DNS:localhost6, DNS:kube09ms01cn, DNS:kube09ms01cn.novalocal, IP Address:FD03:0:0:0:0:0:0:1, IP Address:FD3C:BFF9:693F:D06:F816:3EFF:FE57:A932","locations":[{"line":4,"column":7}],"path":["viewer","clusters"],"extensions":{"code":"INTERNAL_SERVER_ERROR"}},"msg":"","time":"2020-02-25T04:39:38.436Z","v":0}

@bengentil
Contributor Author

bengentil commented Feb 25, 2020

Yes, this is the third point mentioned above, but it is not fixed in request yet; see request/request#3274.

You can apply this patch https://github.com/bengentil/request-ipv6-issue/blob/master/request.patch to make it work

Edit: You can build the Docker image from PR #8 to make it work.

@bengentil bengentil deleted the add-ipv6-support branch February 25, 2020 08:21