Node: stream isn't being shared through requests #11406
Comments
If you create a new client, you use a new connection. If you continue to use an existing client, you will use an existing connection.
Would gRPC ever close clients automatically, based on how long the client has been idle, or for any other reason?
That surprised me; I thought gRPC uses HTTP/2 to share channels (one channel per IP, shared by all clients initialized with that IP).
Yes, gRPC uses HTTP/2 to multiplex multiple streams on a single channel. If you use the same client object, it will use the same underlying connection. But if you explicitly create a new client object, that will create a new connection. And gRPC will close clients when they are garbage collected.
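Given that reusing the same client object reuses the same connection, one simple pattern is to cache clients per constructor and target instead of building a new one per request. A minimal sketch (the generated stub constructor and the credentials argument are assumptions from a typical setup, not part of the thread):

```javascript
// Sketch: reuse one client per (constructor, address) pair so all calls
// through it share the same underlying channel. The cache key combines
// the constructor name and the target address.
const clientCache = new Map();

function getClient(ClientCtor, address, credentials) {
  const key = `${ClientCtor.name}@${address}`;
  if (!clientCache.has(key)) {
    clientCache.set(key, new ClientCtor(address, credentials));
  }
  return clientCache.get(key);
}

// Usage (assuming a generated FirstClient stub):
// const client = getClient(FirstClient, 'localhost:50051',
//                          grpc.credentials.createInsecure());
```

Repeated lookups for the same constructor and address return the same object, so every RPC goes through the same channel.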
Could we create a channel and then pass it to a client? I saw that Java has this interface, but Node doesn't export it.
No, an API like that is not currently exposed in the Node library. But it shouldn't make much of a difference unless you have many different clients for different services that connect to the same server.
Our Java backend servers have multiple services split by business domain, so our BFF Node server usually connects to 5+ backends (each one may have 4+ IPs, 6+ services, 50+ RPCs). If we only reuse clients, there will be 5×4×6 channels; if we share channels, only 4 are needed.
The numbers you have there don't seem to be counting the exact same thing. It looks like, with shared clients, you would expect to have 4×6 channels per backend, and with shared channels you would expect to have 4 channels per backend. This doesn't entirely solve your problem, but I think you would benefit from the load balancing functionality that exists within gRPC. A gRPC channel is not simply a single TCP connection, but a collection of connections that RPCs are automatically load balanced across. I don't entirely understand how this is used, so @markdroth, can you elaborate on this? In any case, thank you for the information about your use case. I will keep that in mind when making future API changes.
You can read an overview of gRPC load-balancing architecture here:

The simplest way to do per-call load balancing is to use the round_robin LB policy. The idea is that you have a DNS entry for the server name that points to multiple IP addresses, and the round_robin LB policy in the client will spread the requests across those IP addresses. It's a fairly simplistic algorithm -- each request just goes to the next IP in the list, so it doesn't take into account things like requests with different weights -- but it eliminates the need to create a separate channel to each individual backend and deal with distributing the load in your application.

@murgatroid99, I don't know if we expose a way to set the LB policy via the Node API. In C++, this is done via

Eventually, there will also be a way to set this via the service config, via the mechanism described in the following gRFC:

However, we have not yet implemented that in C-core.
That channel argument would be passed as
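As a sketch of what passing such a channel argument might look like in Node: the LB policy is requested through the C-core channel argument `grpc.lb_policy_name`, passed as an options object to the client constructor (the `FirstClient` stub name and the DNS target are assumptions here, and whether this option is honored depends on the grpc version):

```javascript
// Sketch: request the round_robin LB policy via a channel argument.
// 'grpc.lb_policy_name' maps to the C-core arg GRPC_ARG_LB_POLICY_NAME;
// the target name must resolve to multiple backend IPs for it to matter.
const channelOptions = { 'grpc.lb_policy_name': 'round_robin' };

// Usage (assuming a generated FirstClient stub):
// const client = new FirstClient('my-service.example.com:50051',
//                                grpc.credentials.createInsecure(),
//                                channelOptions);
```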
@markdroth is there any Node.js guide for how to implement our custom had read
So for now, could we share connections between different services (client instances) with the same IP? As @geekdada said:

```javascript
const ip1 = 'localhost:50051';
const client1 = new FirstClient(ip1);
const client2 = new FirstClient(ip1);
const client3 = new SecondClient(ip1);
```

will open 3 connections, which means they are NOT shared. Correct me if I'm wrong.
@atian25 Unfortunately, while most of the infrastructure for external load balancing is implemented in the OSS code, there are a couple of required pieces still missing.

Most notably, we don't currently have an OSS implementation of the external load balancer itself available, although that's on our future roadmap (exact timing TBD). I suppose that you could write an external load balancer yourself using the protocol defined here: However, that's not a trivial piece of work, so I wouldn't recommend doing that unless you absolutely have to.

The other piece that's not quite done is support for specifying external load balancers in DNS, as described here: @y-zeng is almost done with a PR to add this to the c-ares DNS resolver (see #11237), but there are still some missing features in the c-ares resolver that prevent us from making it the default. (And I think there may still be issues preventing us from using c-ares in node, although @murgatroid99 will know more about that than I do.)

Given all of that, I suspect that it will be a while before you can actually use external load balancing. However, as I mentioned earlier, there's no reason you can't use the built-in round_robin LB policy in the interim.
Is there anything actionable here, or can this issue be closed? |
What version of gRPC and what language are you using?
[email protected]
Node
What operating system (Linux, Windows, …) and version?
macOS 10.12.5
What runtime / compiler are you using (e.g. python version or version of gcc)
Node 6.10.3
What did you do?
This is a koa controller
What did you expect to see?
One stream shared across multiple requests to the same destination.
What did you see instead?
After multiple requests, it generates multiple [FIN, ACK] frames when the application is closed. It seems that gRPC is not using the same stream, which I think is not good. Is this a bug, or am I just using it wrong?