swarm rejoin causes overlay network to become a "local" network #30084

@vikstrous

Description

Leaving and rejoining a swarm while a container is running on an attachable overlay network turns that network into a local-scoped network.

Steps to reproduce the issue:

docker swarm init --listen-addr 0.0.0.0 --advertise-addr 127.0.0.1
docker network create -d overlay --attachable overlaytest
# network scope is swarm
docker run -d --net overlaytest alpine sleep 1000000
docker swarm leave --force
# network scope is still swarm
docker swarm init --listen-addr 0.0.0.0 --advertise-addr 127.0.0.1
# network scope is local
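
The scope comments in the steps above can be verified explicitly with `docker network inspect`; a minimal sketch (using the `overlaytest` network from the reproduction, and assuming a running daemon):

```shell
# Print only the scope of the network at each stage of the reproduction.
docker network inspect --format '{{.Scope}}' overlaytest
# After the first "swarm init" + "network create": prints "swarm"
# After "swarm leave --force":                     still prints "swarm"
# After the second "swarm init":                   prints "local"
```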

Describe the results you received:
The network becomes a "local"-scoped overlay network.

Describe the results you expected:
Any of the following would be reasonable: the network no longer exists, the network remains a real swarm-scoped overlay network, or the container is killed. Basically anything other than the current behaviour, which is very confusing.

Additional information you deem important (e.g. issue happens only occasionally):

Output of docker version:

Client:
 Version:      1.14.0-dev
 API version:  1.26
 Go version:   go1.7.4
 Git commit:   bf6eb85
 Built:        Tue Dec 27 22:23:51 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.14.0-dev
 API version:  1.26 (minimum version 1.12)
 Go version:   go1.7.4
 Git commit:   bf6eb85
 Built:        Tue Dec 27 22:23:51 2016
 OS/Arch:      linux/amd64
 Experimental: false

It has always behaved this way, though; I've seen it as far back as 1.12-cs.

Output of docker info:

Containers: 1
 Running: 1
 Paused: 0
 Stopped: 0
Images: 600
Server Version: 1.14.0-dev
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: active
 NodeID: nhk9xeap351ngic91qdyco86j
 Is Manager: true
 ClusterID: tt3gmr7nh94mhxe8uqasoo4hc
 Managers: 1
 Nodes: 1
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: 127.0.0.1
 Manager Addresses:
  127.0.0.1:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 03e5862ec0d8d3b3f750e19fca3ee367e13c090e
runc version: 51371867a01c467f08af739783b8beafc154c4d7
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.8.12
Operating System: NixOS 17.03.git.1c50bdd (Gorilla)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.53 GiB
Name: thisisfine
ID: 4XB6:RAOL:2UT7:W3TT:JBSR:ILHC:LREA:HUVU:5U7V:TI7E:WUT2:WQTS
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 64
 Goroutines: 182
 System Time: 2017-01-11T17:42:25.480179518-08:00
 EventsListeners: 0
Registry: https://index.docker.io/v1/
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
Experimental: false
Insecure Registries:
 172.17.0.1
 172.17.0.1:3001
 172.17.0.1:443
 172.17.0.1:444
 172.17.0.1:445
 127.0.0.0/8
Live Restore Enabled: false

Additional environment details (AWS, VirtualBox, physical, etc.):
physical
