docs: add note about custom F2B setup with PROXY protocol #3964
georglauterbach merged 1 commit into master from
Conversation
Documentation preview for this PR is ready! 🎉 Built with commit: 41d7320
- Kubernetes manifest changes for the DMS configured `Service`
- DMS configuration changes for Postfix and Dovecot
- [ ] To keep support for direct connections to DMS services internally within the cluster, service ports must be "duplicated" to offer an alternative port for connections using the PROXY protocol
- [ ] Custom Fail2Ban required: because the traffic to DMS now comes from the proxy, banning the origin IP address will have no effect; you'll need to implement a [custom solution for your setup][github-web::docker-mailserver::proxy-protocol-fail2ban].
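The "duplicated ports" item above can be sketched roughly as follows. The alternative port numbers (10025, 10143) are illustrative choices, not values from this PR, but `smtpd_upstream_proxy_protocol` (Postfix) and `haproxy` / `haproxy_trusted_networks` (Dovecot) are the actual settings involved in accepting PROXY protocol connections:

```
# Postfix master.cf: an extra smtpd service that expects the PROXY
# protocol header (port 10025 illustrative; 25 stays available for
# direct, cluster-internal connections)
10025     inet  n       -       n       -       -       smtpd
    -o smtpd_upstream_proxy_protocol=haproxy

# Dovecot 10-master.conf: an extra IMAP listener for proxied traffic
service imap-login {
  inet_listener imap_proxied {
    port = 10143
    haproxy = yes
  }
}
```

Dovecot additionally requires `haproxy_trusted_networks` to list the proxy's address range, otherwise the PROXY header is rejected.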
I thought that with PROXY protocol, the logs being monitored would have the proper client IP for banning, and that within the same node Fail2Ban would use nftables to ban that IP? So the reverse-proxy IP isn't a concern as the connection is blocked outside of Docker?
I'm not quite sure how that works with a single node with k8s but would have thought it to be similar? However when the connection is initially from a separate node, then Fail2Ban would be banning IP pointlessly since connections from the client don't occur direct to the node, but through the reverse-proxy / load-balancer service, ban needs to occur on the ingress node.
I did touch on that concern previously, although your response was that the connection was rejected within a container, which is not how I recall it working with Docker 🤔 I didn't think it was service specific, just "if source IP is a banned IP, reject the connection", the Docker host would not bother routing the connection to a container since the firewall already terminated it.
At least from the f2b config, it seems to ban on the host as I described:
docker-mailserver/target/fail2ban/jail.local, lines 18 to 21 in 082e076
That's why the capabilities are required for permission to do so right?
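For context, here is a sketch of the kind of ruleset Fail2Ban's stock nftables ban action installs (the `f2b-table` / `f2b-chain` names follow Fail2Ban's defaults; the jail name and banned address are illustrative). The open question in this thread is which network namespace this ruleset lands in:

```
table inet f2b-table {
    set addr-set-postfix-sasl {
        type ipv4_addr
        elements = { 198.51.100.7 }
    }
    chain f2b-chain {
        type filter hook input priority filter - 1;
        tcp dport { 25, 465, 587 } ip saddr @addr-set-postfix-sasl reject
    }
}
```

Installing the `input` hook requires `NET_ADMIN`/`NET_RAW`, but holding those capabilities inside a container does not by itself make the rules apply on the host.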
Might be good to clarify this if there is some difference with what's happening with k8s, or if your alternative network stack is responsible on that single node you've mentioned where this ban action is incompatible?
> I thought that with PROXY protocol, the logs being monitored would have the proper client IP for banning,
They have :)
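That is how the PROXY protocol works: the proxy prepends a one-line (v1) header carrying the original client address, which the backend then logs as the client IP. A minimal sketch of parsing such a header (illustrative code, not DMS's implementation):

```python
# Minimal sketch: parsing a PROXY protocol v1 header line.
# The proxy prepends this line to the TCP stream, so the backend
# (Postfix/Dovecot) logs the real client IP instead of the proxy's.

def parse_proxy_v1(line: bytes) -> dict:
    """Parse a PROXY protocol v1 header, e.g.
    b'PROXY TCP4 203.0.113.5 10.0.0.2 51234 25\\r\\n'."""
    if not line.endswith(b"\r\n"):
        raise ValueError("PROXY v1 header must end with CRLF")
    parts = line[:-2].decode("ascii").split(" ")
    if parts[0] != "PROXY":
        raise ValueError("not a PROXY protocol v1 header")
    if parts[1] == "UNKNOWN":
        return {"proto": "UNKNOWN"}
    proto, src_ip, dst_ip, src_port, dst_port = parts[1:6]
    return {
        "proto": proto,
        "client_ip": src_ip,       # the address Fail2Ban would ban
        "server_ip": dst_ip,
        "client_port": int(src_port),
        "server_port": int(dst_port),
    }

header = b"PROXY TCP4 203.0.113.5 10.0.0.2 51234 25\r\n"
print(parse_proxy_v1(header)["client_ip"])  # 203.0.113.5
```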
> and that within the same node Fail2Ban would use `nftables` to ban that IP?
I think here is the misunderstanding: AFAIK, this has nothing to do with multi-node or not. You seem to expect that the ban inside the container would be "node-wide", not just limited to the container, but this doesn't seem to be the case (at least for me, it wasn't). I checked nftables inside the container and all IPs had been properly banned, but not on the host (which I'd expect as well). Otherwise, a malicious container could block host traffic altogether effortlessly. I am not sure whether this changes if host-networking is enabled. From what I see, bans and routes in nftables are contained inside the container.
> So the reverse-proxy IP isn't a concern as the connection is blocked outside of Docker?
That does not seem to be the case (at least in K8s it isn't) because, as I just mentioned, the ban is not enforced on the host, but inside the container.
> I'm not quite sure how that works with a single node with k8s but would have thought it to be similar? However when the connection is initially from a separate node, then Fail2Ban would be banning IP pointlessly since connections from the client don't occur direct to the node, but through the reverse-proxy / load-balancer service, ban needs to occur on the ingress node.
Indeed, but K8s networking is a tad more difficult in my experience. You want to keep cluster-internal traffic decisions inside the cluster, and I am not sure where CNIs hook into the traffic chain precisely. Banning an IP because of a cluster-internal policy decision should be done by the CNI, not on the host (because in K8s, hosts are disposable goods, really).
> I did touch on that concern previously, although your response was that the connection was rejected within a container, which is not how I recall it working with Docker 🤔
I should have read the whole response before answering 😆 So there may be a difference between Docker and Kubernetes here.
> I didn't think it was service specific, just "if source IP is a banned IP, reject the connection", the Docker host would not bother routing the connection to a container since the firewall already terminated it.
If this is the case with Docker, then there should be no issue at all when using the PROXY protocol on Docker hosts. This is not the case for Kubernetes, though. Now we have some clarity 🚀
> Otherwise, a malicious container could block host traffic altogether effortlessly.
Ehh... a malicious container can do plenty of damage if you grant it the capabilities.
You see users push hard for running containers with a non-root user in the name of security, but realistically many of the security concerns they cite are due to the capabilities available. If those were dropped, the root user in the container would be just as good. Instead I see projects adopt non-root users and then internally require capabilities that you cannot drop if you want the software to run, even when you're not using a feature that needs them 🤷‍♂️ (the CoreDNS Docker image is presently guilty of this).
> From what I see, bans and routes in `nftables` are contained inside the container.
All good, I've not looked into it too far and lack time for that right now.
I was just under the impression that Fail2Ban was acting like it would on the host, and that the network capability it was granted to apply the ban via nftables would take effect host-side rather than being scoped to the container 😅
> If this is the case with Docker, then there should be no issue at all when using the PROXY protocol on Docker hosts. This is not the case for Kubernetes, though. Now we have some clarity 🚀
As mentioned, I can't confirm any time soon. I know that publishing a port for a container on the host will bypass host firewall rules (at least with UFW), and there are some other weird networking gotchas specific to Docker's network management, so I wouldn't be surprised if Docker doesn't scope the ban.
Presently I only have access to WSL2, which lacks some features such as cgroups v2 integration with Docker that may also impact the networking; WSL2 + Docker already has enough differences that I'd rather get proper insight from a Linux host system.
I may be wrong, it could be container scoped on Docker 🤔
Description
Fixes #1761