
Network Bandwidth management changes #27809 #26767 #27846

Closed
shivacherukuri wants to merge 1 commit into moby:master from
shivacherukuri:siva-network-io-bandwidth

Conversation

@shivacherukuri

@shivacherukuri shivacherukuri commented Oct 28, 2016

Code changes for Docker network bandwidth management.
For more info: https://github.com/shivacherukuri/Docker-Network-Bandwidth
Includes #27809 #26767

Design:
Added a new driver to the Docker engine so that it can natively support network bandwidth management based on container id. This will be an enabler for persistent bandwidth management rules, and Docker Swarm could also use it to schedule containers based on available network bandwidth.

Network bandwidth rules for a running container can be changed dynamically using the "$ docker network bandwidth" command.

Bandwidth is managed based on the actual link/NIC speed, so network resources can be allocated to containers from the remaining/available bandwidth. Containers on the host, or any host network applications, that are not part of Docker network bandwidth management fall under a default bandwidth rule. This way a minimum network bandwidth can be guaranteed.

Currently the "$ docker network" CLI has been updated to make use of the new driver; this can be extended to "$ docker run" and other commands.

The tc (traffic control) tool is used to manage network bandwidth classes.
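As a rough illustration of the tc-based approach (not the exact commands the proposed driver runs; the interface name, classids, and rates below are made-up placeholders), an HTB setup with a default class for unmanaged traffic might look like:

```shell
NIC=eth0   # placeholder; the driver would discover the actual physical NIC

# Root HTB qdisc; traffic matching no class falls into default class 1:10.
tc qdisc add dev "$NIC" root handle 1: htb default 10

# Default class for traffic outside bandwidth management: guarantees a
# minimum rate, and may borrow up to the full link speed when idle.
tc class add dev "$NIC" parent 1: classid 1:10 htb rate 100mbit ceil 1gbit

# Per-container class (classid 1:20 and the rate are illustrative).
tc class add dev "$NIC" parent 1: classid 1:20 htb rate 200mbit ceil 200mbit
```

These are privileged configuration commands and need root plus a real interface to apply.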

I have verified my changes on Ubuntu 14.04 with kernel 4.6.0-rc.

Signed-off-by: shivacherukuri [email protected]

@aboch
Contributor

aboch commented Oct 28, 2016

@shivacherukuri
I am thinking the bandwidth policies will be set at container run or network connect, as we do for other per-network container properties. They should be properties of the existing network endpoints; I don't think we need a separate network/bandwidth endpoint.
They need to be abstracted in the libnetwork core (per network and per network endpoint) and ultimately programmed by the osl package. I do not think we need a bandwidth driver under libnetwork/drivers for this. Also, the osl pkg will program the TC via netlink calls, directly inside the container network namespace.
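For context, a minimal sketch of what programming tc inside a container's network namespace looks like from the host (the osl package would do the equivalent through netlink calls rather than shelling out; the container name, interface, and rate are assumptions for illustration):

```shell
# Find the container's init PID to locate its network namespace.
CONTAINER_PID=$(docker inspect -f '{{.State.Pid}}' mycontainer)

# Enter that namespace and attach a simple token-bucket egress limit to
# the container-side interface.
nsenter -t "$CONTAINER_PID" -n \
    tc qdisc add dev eth0 root tbf rate 100mbit burst 32kbit latency 400ms
```

This requires root on the host, but nothing inside the container itself.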

@shivacherukuri
Author

shivacherukuri commented Oct 28, 2016

@aboch
Yes, having the capability to set bandwidth policies at container start would be good, and my current design can be extended to update the docker run command. The reason I opted to add the bandwidth policy capability via the docker network command is that it can be set or unset dynamically at any time during the container's lifetime.
Regarding the new driver part: right now I just tweaked daemon/network.go to call the new network bandwidth API without modifying driverapi; I can make the bandwidth API part of libnetwork. Also, I would like to manage network bandwidth, and apply bandwidth policies per container or per network, from the host instead of the container namespace, as that gives more flexibility to manage physical resources.

@thaJeztah thaJeztah added the status/failing-ci Indicates that the PR in its current state fails the test suite label Oct 30, 2016
@thaJeztah
Member

Looks like this needs a gofmt;

01:35:27 These files are not properly gofmt'd:
01:35:27  - api/server/router/network/backend.go
01:35:27  - api/server/router/network/network.go
01:35:27  - api/server/router/network/network_routes.go
01:35:27  - api/types/types.go
01:35:27  - cli/command/network/bandwidth.go
01:35:27  - cli/command/network/cmd.go
01:35:27  - client/bandwidth_create.go
01:35:27  - client/interface.go
01:35:27  - daemon/network.go
01:35:27 
01:35:27 Please reformat the above files using "gofmt -s -w" and commit the result.
01:35:27 

updated with gofmt issues and merge conflicts
Signed-off-by: shivacherukuri <[email protected]>
@shivacherukuri shivacherukuri force-pushed the siva-network-io-bandwidth branch from 1a1c667 to 1f7be5b on November 5, 2016, 05:15
@shivacherukuri
Author

Updated to fix the formatting issues and base-branch conflicts. Please feel free to test the proposed "$ docker network bandwidth" behavior and provide further feedback.

@AkihiroSuda
Member

+1 for @aboch 's suggestion.
This should not be a separate driver.

For the API/CLI, I think "docker network update" would be better so that we can have other configuration parameters as well in the future.

cc @mrjana @mavenugo

@shivacherukuri
Author

@AkihiroSuda
If you mean something like "docker network update bandwidth < >", that would be good, but there is a "docker update" command as well, right?
Initially I thought of setting, updating, or monitoring network bandwidth with only one command, "docker network bandwidth", because only once containers are up and running do we know which physical interface they actually use. If we set network bandwidth via "docker run" and it fails due to some error, stopping the container may not be an appropriate response for that kind of error.

@AkihiroSuda
Member

Maybe "docker update CONTAINER" is better because the setting is specific to a container

@shivacherukuri
Author

@AkihiroSuda
yeah "docker update CONTAINER" sounds good 👍

@thaJeztah
Member

docker update CONTAINER is better because the setting is specific to a container

However, this setting is for an endpoint, not for a container, correct? So I should be able to set a specific limit on an endpoint (if a container is connected to multiple networks)

@shivacherukuri
Author

@thaJeztah
The way I thought about this situation: any network packet generated by the container is tagged with a specific classid, and using that classid all packets/traffic from the container are classified and limited/throttled at the physical NIC level. If a container (container2 in the example below) is attached to two network endpoints, and those two endpoints use two different physical NICs to send traffic out, then bandwidth limit rules can be applied to both network endpoints of container2. But if container2's network endpoints end up using only one physical NIC, does it always make sense to apply bandwidth limit rules at the container network endpoint level?

[image: container-nw-ex]
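A hedged sketch of the classid-tagging idea described above, using the net_cls cgroup (v1) so traffic can be classified on the physical NIC regardless of which endpoint it leaves through; the container id, classid, and rates are placeholders:

```shell
NIC=eth0   # placeholder for the physical NIC carrying the traffic

# Tag packets from the container's cgroup with classid 1:20 (0x00010020:
# major 0x0001, minor 0x0020).
echo 0x00010020 > /sys/fs/cgroup/net_cls/docker/<container-id>/net_cls.classid

# Classify tagged traffic into an HTB class on the physical NIC.
tc qdisc add dev "$NIC" root handle 1: htb default 10
tc class add dev "$NIC" parent 1: classid 1:20 htb rate 200mbit ceil 200mbit
tc filter add dev "$NIC" parent 1: handle 1: cgroup
```

The cgroup filter matches on the classid set by net_cls, so the limit follows the container's traffic rather than a particular endpoint.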

@vdemeester
Member

ping @aboch @mavenugo what is the status here?

@LK4D4
Contributor

LK4D4 commented Jan 30, 2017

ping @aboch @mavenugo @sanimej

@dnephin
Member

dnephin commented Feb 10, 2017

Thanks for the contribution @shivacherukuri. Based on earlier comments it sounds like this needs some more design work. Please open an issue so there can be a discussion about the design of this feature.

It is likely that parts of this will need to be implemented in libnetwork.

@dnephin dnephin closed this Feb 10, 2017
@vincentwoo
Contributor

@dnephin I'm interested in following work in libnetwork to allow for per-container bandwidth limitation. Are there any ongoing PRs/issues there that address this? I couldn't find anything from my own searches.

@shivacherukuri
Author

Hi @vincentwoo, right now I don't see any other ongoing PRs to address this. I couldn't spend time over the last couple of months to address the comments and the design changes required. Let me see when I can come back to it.

Hi @dnephin, are there any ongoing efforts to upgrade libnetwork to support network bandwidth management in the upcoming Docker release? I would like to hear the news and more details, if they can be shared.

@sanimej

sanimej commented Feb 13, 2017

@shivacherukuri "are there any ongoing efforts upgrading libnetwork to support network bandwidth management" - No, there hasn't been a lot of interest in this. Typically bandwidth is not a big constraint within the DC, but I am sure it will be useful for some deployments.

We are looking into adding support for network policy, and bandwidth should be part of that UX. In terms of implementation, as @aboch commented earlier, bandwidth management shouldn't be a separate driver.

@thaJeztah
Member

@shivacherukuri I'm not aware of current work, but it may be good to discuss the feature in the libnetwork repository and come to a design there (from a libnetwork perspective at least). Feel free to ping me on the issue if you want me involved in helping with the UX/design from Docker's perspective.

@shivacherukuri
Author

Sure, will ping you @thaJeztah and all. Thanks for all your wonderful feedback/comments on the design.


Labels

status/failing-ci (Indicates that the PR in its current state fails the test suite), status/1-design-review
