
Node: stream isn't being shared through requests #11406

Closed
geekdada opened this issue Jun 5, 2017 · 14 comments

@geekdada
geekdada commented Jun 5, 2017

What version of gRPC and what language are you using?

[email protected]
Node

What operating system (Linux, Windows, …) and version?

macOS 10.12.5

What runtime / compiler are you using (e.g. python version or version of gcc)

Node 6.10.3

What did you do?

This is a Koa controller:

* echo() {
  const ctx = this.ctx;
  const grpcClient = ctx.grpc.example.test; // returns a new client
  ctx.body = yield grpcClient.echo({ id: 1, userName: 'grpc' });
}

What did you expect to see?

One stream shared across multiple requests to the same destination.

What did you see instead?

After several requests, multiple [FIN, ACK] frames are generated when the application is closed. It seems that gRPC is not reusing the same stream, which I think is not good. Is this a bug, or am I just using it wrong?

[Screenshot: packet capture showing repeated FIN, ACK frames]

@murgatroid99 murgatroid99 self-assigned this Jun 5, 2017
@murgatroid99
Member

If you create a new client, you use a new connection. If you continue to use an existing client, you will use an existing connection.
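
For example, a minimal sketch of creating the client once and reusing it across requests (the proto path, the example.Test service name, and the Koa wiring are placeholders for illustration, not taken from your app):

```js
const grpc = require('grpc');

// Assumed proto file defining an `example.Test` service with a unary `echo` RPC.
const proto = grpc.load('example.proto').example;

// Create the client once at startup; every request then reuses the same
// underlying channel instead of opening a new connection per request.
const echoClient = new proto.Test('localhost:50051',
                                  grpc.credentials.createInsecure());

// Koa v1-style generator controller reusing the shared client.
function* echo() {
  this.body = yield new Promise((resolve, reject) => {
    echoClient.echo({ id: 1, userName: 'grpc' }, (err, response) => {
      if (err) reject(err);
      else resolve(response);
    });
  });
}
```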

@geekdada
Author

geekdada commented Jun 6, 2017

Would gRPC ever close clients automatically, for example after the client has been idle for some period of time, or for any other reason?

@atian25

atian25 commented Jun 6, 2017

This surprised me; I thought gRPC used HTTP/2 to share channels (open one channel per IP and share it with every client initialized with that IP).

@murgatroid99
Member

Yes, gRPC is using HTTP2 to have multiple streams on a single channel. If you use the same client object, it will use the same underlying connection. But if you explicitly create a new client object, then that will create a new connection.

And gRPC will close clients when they are garbage collected.

@atian25

atian25 commented Jun 6, 2017

Could we create a channel and then pass it to a client? I saw that Java has this interface, but Node doesn't export it.

@murgatroid99
Member

No, an API like that is not currently exposed in the Node library. But it shouldn't make much of a difference unless you have many different clients for different services that connect to the same server.

@atian25

atian25 commented Jun 7, 2017

Our Java backend has multiple services split by business domain, such as myjob.test.GameService and myjob.some.Sdk, and each service is served from multiple IPs (for load balancing at the RPC level).

So our BFF Node server usually connects to 5+ backends (each with 4+ IPs, 6+ services, 50+ RPCs). If we only reuse clients, we end up with 5×4×6 channels; if channels could be shared, we would only need 4 per backend.

@murgatroid99
Member

The numbers you have there don't seem to be counting the exact same thing. It looks like, with shared clients, you would expect to have 4×6 channels per backend, and with shared channels you would expect to have 4 channels per backend.

This doesn't entirely solve your problem, but I think you would benefit from the load balancing functionality that exists within gRPC. A gRPC channel is not simply a single TCP connection, but a collection of connections that RPCs are automatically load balanced across. I don't entirely understand how this is used, so @markdroth, can you elaborate on this?

In any case, thank you for the information about your use case. I will keep that in mind when making future API changes.

@markdroth
Member

You can read an overview of gRPC load-balancing architecture here:

https://github.com/grpc/grpc/blob/master/doc/load-balancing.md

The simplest way to do per-call load balancing is to use the round_robin LB policy. The idea is that you have a DNS entry for the server name that points to multiple IP addresses, and the round_robin LB policy in the client will spread the requests across those IP addresses. It's a fairly simplistic algorithm -- each request just goes to the next IP in the list, so it doesn't take into account things like requests with different weights -- but it eliminates the need to create a separate channel to each individual backend and deal with distributing the load in your application.

@murgatroid99, I don't know if we expose a way to set the LB policy via the Node API. In C++, this is done via ChannelArguments::SetLoadBalancingPolicyName(), defined here:

https://github.com/grpc/grpc/blob/master/include/grpc%2B%2B/support/channel_arguments.h#L98

Eventually, there will also be a way to set this via the service config, via the mechanism described in the following gRFC:

https://github.com/grpc/proposal/blob/master/A2-service-configs-in-dns.md

However, we have not yet implemented that in C-core.

@murgatroid99
Member

That channel argument would be passed as {"grpc.lb_policy_name": name} in the third argument to a client constructor.
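
For example (a sketch; the proto path, service name, and DNS name are placeholders):

```js
const grpc = require('grpc');

const proto = grpc.load('example.proto').example;

// The DNS name is assumed to resolve to several backend IPs;
// round_robin then spreads calls across those addresses.
const client = new proto.Test(
  'my-backend.example.internal:50051',
  grpc.credentials.createInsecure(),
  { 'grpc.lb_policy_name': 'round_robin' }
);
```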

@atian25

atian25 commented Jun 8, 2017

@markdroth is there any Node.js guide for implementing a custom external load balancing service?

I had read load-balancing.md before, but I don't know how to use it from Node.

@atian25

atian25 commented Jun 8, 2017

A gRPC channel is not simply a single TCP connection, but a collection of connections that RPCs are automatically load balanced across

So for now, could we share connections between different services (client instances) pointed at the same IP?

as @geekdada said:

const grpc = require('grpc');

const ip1 = 'localhost:50051';
const credentials = grpc.credentials.createInsecure();
const client1 = new FirstClient(ip1, credentials);
const client2 = new FirstClient(ip1, credentials);
const client3 = new SecondClient(ip1, credentials);

This will open 3 connections, which means they are NOT shared.

correct me if I'm wrong.
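
To make the question concrete: what we'd like is for the three clients above to share one connection. The closest thing available today seems to be caching one client per (service, address) pair so we at least never create duplicate clients. A sketch, reusing ip1, credentials, and FirstClient from the snippet above (the helper is ours, not part of the gRPC API):

```js
// Cache one client per (service name, address) pair so repeated lookups
// return the same client, and therefore reuse the same connection.
const clientCache = new Map();

function getClient(serviceName, ServiceCtor, address, credentials) {
  const key = `${serviceName}@${address}`;
  if (!clientCache.has(key)) {
    clientCache.set(key, new ServiceCtor(address, credentials));
  }
  return clientCache.get(key);
}

const a = getClient('example.First', FirstClient, ip1, credentials);
const b = getClient('example.First', FirstClient, ip1, credentials);
// a === b, so both use the same underlying connection.
```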

@markdroth
Member

@atian25 Unfortunately, while most of the infrastructure for external load balancing is implemented in the OSS code, there are a couple of required pieces still missing.

Most notably, we don't currently have an OSS implementation of the external load balancer itself available, although that's on our future roadmap (exact timing TBD). I suppose that you could write an external load balancer yourself using the protocol defined here:

https://github.com/grpc/grpc/blob/master/src/proto/grpc/lb/v1/load_balancer.proto

However, that's not a trivial piece of work, so I wouldn't recommend doing that unless you absolutely have to.

The other piece that's not quite done is support for specifying external load balancers in DNS, as described here:

https://github.com/grpc/proposal/blob/master/A5-grpclb-in-dns.md

@y-zeng is almost done with a PR to add this to the c-ares DNS resolver (see #11237), but there are still some missing features in the c-ares resolver that prevent us from making it the default. (And I think there may still be issues preventing us from using c-ares in node, although @murgatroid99 will know more about that than I do.)

Given all of that, I suspect that it will be a while before you can actually use external load balancing. However, as I mentioned earlier, there's no reason you can't use the built-in round_robin LB policy in the interim.

@markdroth
Member

Is there anything actionable here, or can this issue be closed?

lock bot locked as resolved and limited conversation to collaborators Sep 29, 2018