
allow disable core_list setting and override qps server in benchmarks #9168

Merged
apolcyn merged 1 commit into grpc:master from apolcyn:allow_disable_qps_json_driver_core_counts on Dec 21, 2016

Conversation

@apolcyn
Contributor

@apolcyn apolcyn commented Dec 20, 2016

@vjpai vjpai left a comment


Looks pretty good but raising some small issues.

gpr_free(host);
gpr_free(driver_port);
gpr_free(cli_target);
if (qps_server_target_override != NULL && strlen(qps_server_target_override) > 0) {
Contributor:

Since the purpose of this seems to be to work around the gpr_join_host_port business, why not just have an option to disable that? That way you can still have multiple server targets.

Contributor Author:

Partly I was trying to avoid host-port parsing, but mostly the benefit was to allow configuring of the qps server target when invoking the driver, rather than from info provided by the worker. I could change the parameter to a list?

// Setup the hosts and core counts
auto hosts_cores = get_hosts_and_cores(workers);
std::unordered_map<string, std::deque<int>> hosts_cores;
if(configure_core_lists) {
Contributor:

clang-format


// Setup the hosts and core counts
auto hosts_cores = get_hosts_and_cores(workers);
std::unordered_map<string, std::deque<int>> hosts_cores;
Contributor:

Maybe this whole variable and its initialization need to be moved down below where it is actually used? That way you are just checking configure_core_lists once down below and then only declaring and setting up this variable if it's needed.

@apolcyn apolcyn force-pushed the allow_disable_qps_json_driver_core_counts branch from 5c54b3f to 87636cd Compare December 20, 2016 22:55

@vjpai vjpai left a comment


Sorry, asking for a change back.

}
if (configure_core_lists) {
if (i == 0) {
hosts_cores = get_hosts_and_cores(workers);
Contributor:

Oops, I made a bad suggestion. I guess you should just pull this back out of the loop after all; it's ugly under a nested conditional like this.


@vjpai vjpai left a comment


LGTM.

@apolcyn apolcyn force-pushed the allow_disable_qps_json_driver_core_counts branch from 4aacac9 to 4873d30 Compare December 21, 2016 03:25
@apolcyn
Contributor Author

apolcyn commented Dec 21, 2016

Test failure is a known issue: interop PR #8989.

@apolcyn
Contributor Author

apolcyn commented Dec 21, 2016

Latest update: rebase and squash.

@apolcyn apolcyn merged commit 659ddda into grpc:master Dec 21, 2016
@lock lock bot locked as resolved and limited conversation to collaborators Jan 25, 2019
3 participants