Conversation
| grpc_millis grpc_grpclb_duration_to_millis(grpc_grpclb_duration *duration_pb) {
|   return (duration_pb->has_seconds ? duration_pb->seconds : 0) *
|              GPR_MS_PER_SEC +
|          (duration_pb->has_nanos ? duration_pb->nanos : 0) / GPR_NS_PER_MS;
Any concern about loss of precision here?
Not really: since (AFAIK) we just pass this to the grpc timer system, and that system has never been more than millisecond accurate, we're not losing anything.
Not sure if we should be rounding down or up here however... @dgquintas?
This is a nit, but let's round down: duration represents how long a serverlist is valid. Rounding down is slightly more conservative.
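As a sketch, the round-down behavior being asked for here is just truncating integer division. The constant names below stand in for gRPC's GPR_MS_PER_SEC and GPR_NS_PER_MS; this is an illustration of the arithmetic, not the actual source:

```c
#include <stdint.h>

/* Illustrative constants mirroring GPR_MS_PER_SEC / GPR_NS_PER_MS. */
#define MS_PER_SEC 1000
#define NS_PER_MS 1000000

/* Integer division of nanos truncates toward zero, so for non-negative
 * durations the result never overstates how long a serverlist is valid. */
int64_t duration_to_millis_round_down(int64_t seconds, int32_t nanos) {
  return seconds * MS_PER_SEC + nanos / NS_PER_MS;
}
```

For example, a duration of 0s + 1,500,000ns rounds down to 1ms rather than up to 2ms, which is the conservative direction for a validity window.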
| @@ -53,8 +53,8 @@ static void timer_callback(grpc_exec_ctx* exec_ctx, void* arg,
| static void start_timer_if_needed(grpc_exec_ctx* exec_ctx,
|                                   grpc_call_element* elem,
|                                   gpr_timespec deadline) {
Why leave this argument as gpr_timespec? It could presumably be converted to grpc_millis as well.
| @@ -288,45 +284,37 @@ static grpc_error* init_channel_elem(grpc_exec_ctx* exec_ctx,
| chand->channel_stack = args->channel_stack;
| chand->max_connection_age =
|     DEFAULT_MAX_CONNECTION_AGE_MS == INT_MAX
I suggest two changes:
- Set DEFAULT_MAX_CONNECTION_AGE_MS to GRPC_MILLIS_INF_FUTURE instead of INT_MAX.
- Change add_random_max_connection_age_jitter() to not add any jitter if the input value is GRPC_MILLIS_INF_FUTURE.
With those two changes, this statement becomes much simpler:
chand->max_connection_age =
add_random_max_connection_age_jitter(DEFAULT_MAX_CONNECTION_AGE_MS);
Same thing for all of the other assignment statements below.
- The first is tricky, since MAX_CONNECTION_AGE_INTEGER_OPTIONS treats DEFAULT_MAX_CONNECTION_AGE_MS as an int, which grpc_millis isn't necessarily (it's intptr_t).
- The second works out, however, if we change the return type of add_random_max_connection_age_jitter to grpc_millis.
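A sketch of what that combined change might look like. The jitter formula and constant values here are illustrative placeholders, not the actual gRPC implementation:

```c
#include <stdint.h>
#include <stdlib.h>

typedef int64_t grpc_millis;
#define GRPC_MILLIS_INF_FUTURE INT64_MAX

/* Illustrative jitter fraction; the real value lives in max_age_filter.c. */
#define MAX_CONNECTION_AGE_JITTER 0.1

/* Returns grpc_millis and passes GRPC_MILLIS_INF_FUTURE through untouched,
 * so callers no longer need a ternary guarding the "no max age" case. */
static grpc_millis add_random_max_connection_age_jitter(grpc_millis value) {
  if (value == GRPC_MILLIS_INF_FUTURE) return value;
  /* Multiplier uniformly in [1 - JITTER, 1 + JITTER]. */
  double multiplier = rand() * MAX_CONNECTION_AGE_JITTER * 2.0 / RAND_MAX +
                      1.0 - MAX_CONNECTION_AGE_JITTER;
  double result = multiplier * (double)value;
  /* Clamp in case the jittered value would overflow grpc_millis. */
  return result > (double)GRPC_MILLIS_INF_FUTURE ? GRPC_MILLIS_INF_FUTURE
                                                 : (grpc_millis)result;
}
```

With the infinite-future case handled inside the helper, the assignment in init_channel_elem collapses to a single unconditional call, as suggested above.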
|     ? gpr_inf_future(GPR_TIMESPAN)
|     : gpr_time_from_millis(g_default_client_keepalive_timeout_ms,
|                            GPR_TIMESPAN);
| t->keepalive_time = g_default_client_keepalive_time_ms == INT_MAX
Similar comment here: suggest simply setting g_default_client_keepalive_time_ms to GRPC_MILLIS_INF_FUTURE, so that no ternary operator is needed.
Same thing for all of these values.
| grpc_exec_ctx* exec_ctx, grpc_handshake_manager* mgr,
| grpc_endpoint* endpoint, const grpc_channel_args* channel_args,
| gpr_timespec deadline, grpc_tcp_server_acceptor* acceptor,
| grpc_millis deadline, grpc_tcp_server_acceptor* acceptor,
Note that this will require changes to some of our internal code when we import it.
| void grpc_exec_ctx_global_shutdown(void) {}
| static gpr_atm timespec_to_atm_round_down(gpr_timespec ts) { |
Are we concerned about the loss of precision here? Presumably, this means that deadline timers will fire some number of nanoseconds earlier than they would have before.
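The trade-off under discussion can be illustrated by the two rounding directions for a timespec-to-milliseconds conversion. This is a simplified sketch (the struct and function names are stand-ins, not the gRPC implementation): round-down can treat a deadline as reached up to just under a millisecond early, while round-up would guarantee a timer never fires before its deadline.

```c
#include <stdint.h>

/* Stand-in for gpr_timespec. */
typedef struct {
  int64_t tv_sec;
  int32_t tv_nsec;
} timespec_like;

/* Truncates sub-millisecond nanoseconds: may be up to ~1ms early. */
int64_t timespec_to_millis_round_down(timespec_like ts) {
  return ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

/* Rounds sub-millisecond nanoseconds up: never earlier than the deadline. */
int64_t timespec_to_millis_round_up(timespec_like ts) {
  return ts.tv_sec * 1000 + (ts.tv_nsec + 999999) / 1000000;
}
```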
| }
| gpr_cv_wait(&g_cv_wait, &g_mu, next);
| gpr_cv_wait(&g_cv_wait, &g_mu, |
Would it make sense to change gpr_cv_wait() to use grpc_millis as well? (I realize that this would require renaming it to gpr_millis.)
Which would require moving exec_ctx into gpr, which would move executor and combiner into gpr.
A: maybe, not yet (we don't use cv's much)
| } while (0)
| #define GRPC_SCHEDULING_END_BLOCKING_REGION \
|   do { \
| #define GRPC_SCHEDULING_END_BLOCKING_REGION_NO_EXEC_CTX \ |
It seems a bit messy to put code that references exec_ctx into gpr. I realize that this is just a macro, so there's no link-time dependency, but it still seems sub-optimal. Do we have any reasonable alternative?
Moving block_annotate.h to iomgr is probably correct here.
| break;
| }
| cq->num_polls++; |
The deletion of this line doesn't seem to have anything to do with the rest of this PR, and I don't know this code well enough to understand the impact. Is this change intentional, and if so, what's the purpose?
Yeah not intentional at all (suspect merge error... I started this change a long long time ago)
This is a round down, so I'll leave as is.
| @@ -814,7 +816,7 @@ typedef struct client_channel_call_data {
| grpc_slice path;  // Request path.
| gpr_timespec call_start_time;
Could change this to grpc_millis as well.
Same thing for the start_time field in grpc_call_element_args:
I don't think we can in channel_stack: it's used for latency measurement, which likely needs more precision than milliseconds.
Caching the start time for GPR_CLOCK_REALTIME has been causing errors in cases where the system time is changed after the time is cached. In such cases, grpc_millis_to_timespec() and grpc_timespec_to_millis_round_down() produce incorrect results, off by however much the system time was changed. This can cause problems especially when using these functions to compute timer deadlines or completion queue timeouts; in the worst case, the timeouts/deadlines will always appear to have elapsed (whenever the timeout interval or deadline is less than the system clock change delta).

Ideally we should revert grpc#11866, but since that is a large change (which introduced new APIs in exec_ctx.cc), I am making this change to effectively revert to the old behavior while still keeping the new APIs introduced in exec_ctx.
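The failure mode described above can be sketched in a few lines. The names here are illustrative, not the real exec_ctx API: if the internal millis clock is defined as an offset from a cached REALTIME start time, then stepping the system clock after the cache is taken skews every subsequent conversion by the size of the step.

```c
#include <stdint.h>

typedef int64_t millis_t;

static int64_t g_cached_start_ms; /* cached once at process init */

static void cache_start_time(int64_t now_realtime_ms) {
  g_cached_start_ms = now_realtime_ms;
}

/* Converts an absolute realtime reading into the internal millis clock.
 * If the system clock is stepped forward by delta after cache_start_time(),
 * every later reading is inflated by delta relative to deadlines computed
 * before the step, so short timeouts appear to have already expired. */
static millis_t realtime_to_millis(int64_t now_realtime_ms) {
  return now_realtime_ms - g_cached_start_ms;
}
```

For instance, with the start time cached at t=1000ms, an event 500ms later normally converts to 500 on the internal clock; but if the wall clock is stepped forward an hour in between, the same instant converts to 3,600,500, blowing past any deadline shorter than the step.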
intended to: