Here's the story: I ran into something odd today.
I have a Japan server with excellent routing (4837 + CN2). Transferring files to mainland China, it crawled along at only ~800 KB/s.
Meanwhile, a Germany server with mediocre routing, going over NTT, managed a respectable ~10 MB/s to China.
That puzzled me. After a lunchtime spent watching the traffic, I found the culprit: a widely circulated "BBR" script!
If you've bought a pricey machine with great hardware and routing, of course you want to squeeze every bit of network performance out of it, right? So did I. And a quick search will very likely land you on this script:
https://github.000060000.xyz/tcpx.sh
tcpx.sh is a real classic; I'm sure most long-time VPS tinkerers have heard of it or used it.
Some background: in the days when information was scarce, configuring BBR was not easy, and the sheer number of parameters and options was more than ordinary users, or even most Linux network-stack enthusiasts, had the energy to study, so people just reached for this script. But looking at it today, what is it actually doing?
My system is Debian 12. I picked tcpx.sh's option "22. System configuration optimization (new)", and here is what it applied:
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.default.arp_announce = 0
net.ipv6.conf.default.forwarding = 1
net.ipv4.tcp_synack_retries = 5
net.ipv4.ip_forward = 1
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.ipv4.tcp_fin_timeout = 60
net.ipv4.conf.all.rp_filter = 0
net.nf_conntrack_max = 262144
fs.file-max = 9223372036854775807
net.ipv4.tcp_max_syn_backlog = 1024
sysctl: cannot stat /proc/sys/net/ipv4/tcp_collapse_max_bytes: No such file or directory
net.ipv4.tcp_keepalive_time = 7200
net.ipv4.ip_local_port_range = 32768 60999
net.ipv4.udp_rmem_min = 4096
net.netfilter.nf_conntrack_max = 262144
net.ipv4.tcp_window_scaling = 1
net.core.wmem_max = 212992
net.ipv4.conf.default.rp_filter = 0
net.core.rmem_max = 212992
net.ipv4.tcp_adv_win_scale = 1
net.ipv4.tcp_mtu_probing = 0
net.core.netdev_max_backlog = 1000
net.core.somaxconn = 4096
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.ipv4.tcp_fack = 0
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.conf.lo.arp_announce = 0
net.ipv4.tcp_rmem = 4096 131072 6291456
net.ipv4.tcp_low_latency = 0
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.ipv4.tcp_notsent_lowat = 4294967295
net.ipv4.neigh.default.gc_stale_time = 60
net.ipv4.tcp_syn_retries = 6
net.netfilter.nf_conntrack_tcp_timeout_established = 432000
net.ipv4.tcp_sack = 1
net.ipv4.udp_wmem_min = 4096
net.ipv4.conf.all.arp_announce = 0
net.ipv4.tcp_max_tw_buckets = 65536
net.ipv4.tcp_fastopen = 1
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_tw_reuse = 2
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
Running sysctl --system afterwards shows how many layered config files end up fighting each other; the tcpx.sh values written into /etc/sysctl.conf are applied last and override everything, and note the invalid keys along the way:
* Applying /usr/lib/sysctl.d/50-pid-max.conf …
* Applying /etc/sysctl.d/99-joeyblog.conf …
* Applying /usr/lib/sysctl.d/99-protect-links.conf …
* Applying /etc/sysctl.d/99-sysctl.conf …
* Applying /etc/sysctl.conf …
kernel.pid_max = 4194304
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
net.ipv4.tcp_fack = 1
net.ipv4.tcp_early_retrans = 3
net.ipv4.neigh.default.unres_qlen = 10000
net.ipv4.conf.all.route_localnet = 1
net.ipv4.ip_forward = 1
net.ipv4.conf.all.forwarding = 1
net.ipv4.conf.default.forwarding = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.lo.forwarding = 1
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.accept_ra = 2
net.ipv6.conf.default.accept_ra = 2
net.core.netdev_max_backlog = 100000
net.core.netdev_budget = 50000
sysctl: setting key "net.core.netdev_budget_usecs": Invalid argument
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.core.rmem_default = 67108864
net.core.wmem_default = 67108864
net.core.optmem_max = 65536
net.core.somaxconn = 1000000
net.ipv4.icmp_echo_ignore_all = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_keepalive_probes = 2
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_rfc1337 = 0
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tw_reuse = 0
net.ipv4.tcp_fin_timeout = 15
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_fastopen = 3
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864
net.ipv4.udp_rmem_min = 8192
net.ipv4.udp_wmem_min = 8192
net.ipv4.tcp_mtu_probing = 1
net.ipv4.tcp_autocorking = 0
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_max_syn_backlog = 819200
net.ipv4.tcp_notsent_lowat = 16384
net.ipv4.tcp_no_metrics_save = 0
net.ipv4.tcp_ecn = 1
net.ipv4.tcp_ecn_fallback = 1
net.ipv4.tcp_frto = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv6.conf.default.accept_redirects = 0
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv4.neigh.default.gc_thresh2 = 4096
net.ipv4.neigh.default.gc_thresh1 = 2048
net.ipv6.neigh.default.gc_thresh3 = 8192
net.ipv6.neigh.default.gc_thresh2 = 4096
net.ipv6.neigh.default.gc_thresh1 = 2048
net.ipv4.tcp_orphan_retries = 1
net.ipv4.tcp_retries2 = 5
vm.swappiness = 1
vm.overcommit_memory = 1
kernel.pid_max = 64000
net.netfilter.nf_conntrack_max = 262144
net.nf_conntrack_max = 262144
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
net.ipv4.tcp_low_latency = 1
net.ipv6.conf.all.forwarding = 1
net.ipv4.conf.default.arp_announce = 0
net.ipv6.conf.default.forwarding = 1
net.ipv4.tcp_synack_retries = 5
net.ipv4.ip_forward = 1
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.ipv4.tcp_fin_timeout = 60
net.ipv4.conf.all.rp_filter = 0
net.nf_conntrack_max = 262144
fs.file-max = 9223372036854775807
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_keepalive_time = 7200
net.ipv4.ip_local_port_range = 32768 60999
net.ipv4.udp_rmem_min = 4096
net.netfilter.nf_conntrack_max = 262144
net.ipv4.tcp_window_scaling = 1
net.core.wmem_max = 212992
net.ipv4.conf.default.rp_filter = 0
net.core.rmem_max = 212992
net.ipv4.tcp_adv_win_scale = 1
net.ipv4.tcp_mtu_probing = 0
net.core.netdev_max_backlog = 1000
net.core.somaxconn = 4096
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.ipv4.tcp_fack = 0
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.conf.lo.arp_announce = 0
net.ipv4.tcp_rmem = 4096 131072 6291456
net.ipv4.tcp_low_latency = 0
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.ipv4.tcp_notsent_lowat = 4294967295
net.ipv4.neigh.default.gc_stale_time = 60
net.ipv4.tcp_syn_retries = 6
net.netfilter.nf_conntrack_tcp_timeout_established = 432000
net.ipv4.tcp_sack = 1
net.ipv4.udp_wmem_min = 4096
net.ipv4.conf.all.arp_announce = 0
net.ipv4.tcp_max_tw_buckets = 65536
net.ipv4.tcp_fastopen = 1
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_tw_reuse = 2
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
This "optimized" config has been passed around for years. What is it actually doing?
1. net.ipv4.tcp_low_latency = 1
This is a latency-over-throughput flag. Enabling it makes no sense for the vast majority of workloads: who wants to trade bulk throughput away for latency? In my testing the impact was enormous, and the kernel documentation says that enabling it disables the kernel's IPv4 TCP prequeue processing.
2. rmem_max / wmem_max = 64 MB
As answers on ServerFault point out, bigger buffers do not mean more speed. BBR has its own dynamic window mechanism, and these values should not be pinned by hand at all; doing so invites heavy packet loss and retransmission.
3. net.ipv4.tcp_tw_reuse = 2
This setting is quite mysterious. The IBM documentation says:
Permits sockets in the "time-wait" state to be reused for new connections.
In high traffic environments, sockets are created and destroyed at very high rates. This parameter, when set, allows "no longer needed" and "about to be destroyed" sockets to be used for new connections. When enabled, this parameter can bypass the allocation and initialization overhead normally associated with socket creation saving CPU cycles, system load and time.
The default value is 0 (off). The recommended value is 1 (on).
So what is setting it to 2 supposed to mean?
…
And there is plenty more. Modern kernels tune these things dynamically and adaptively, yet the script pins them all to hard-coded values…
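Point 2 above can be sanity-checked with a quick bandwidth-delay-product estimate. The sketch below uses assumed, illustrative numbers (a ~500 Mbit/s VPS with ~250 ms RTT to China), not measurements from my boxes:

```shell
#!/bin/sh
# Rough bandwidth-delay product (BDP): the buffer a single TCP flow needs
# to keep the pipe full. bw_mbit and rtt_ms are illustrative assumptions.
bw_mbit=500   # assumed bottleneck bandwidth, Mbit/s
rtt_ms=250    # assumed round-trip time, ms
# BDP (bytes) = bandwidth (bytes/s) * RTT (s)
bdp=$(( bw_mbit * 1000 * 1000 / 8 * rtt_ms / 1000 ))
echo "BDP = ${bdp} bytes (~$(( bdp / 1024 / 1024 )) MiB)"
# prints: BDP = 15625000 bytes (~14 MiB)
```

Even this pessimistic case needs only ~15 MiB, far below the script's fixed 64 MiB rmem_max/wmem_max, and the kernel's tcp_rmem/tcp_wmem autotuning already grows buffers toward such values on its own.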
One more note while I'm at it: on a modern Linux 6.x kernel,
echo "net.core.default_qdisc=fq" > /etc/sysctl.conf
echo "net.ipv4.tcp_congestion_control=bbr" >> /etc/sysctl.conf
sysctl -p
is already sufficient and close to the best practical network tuning (note the first > overwrites /etc/sysctl.conf, which also wipes whatever the script left there). Don't over-chase parameter tweaking; it is better to leave these knobs to the kernel.
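After running sysctl -p, it's worth reading the values back to confirm they actually took effect on your kernel (output will vary by system; this is just a verification fragment):

```shell
# Read back the effective values; both should echo the configured settings.
sysctl net.core.default_qdisc            # expect: net.core.default_qdisc = fq
sysctl net.ipv4.tcp_congestion_control   # expect: ... = bbr
# List the congestion-control algorithms this kernel currently offers;
# "bbr" should appear here once the tcp_bbr module is loaded.
sysctl net.ipv4.tcp_available_congestion_control
```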
Also, running a newer kernel and a newer BBR is usually good practice (not always true; you still have to test it yourself, because the network environment inside China is quite special).
The latest BBRv3, for instance, reportedly performs decently in many scenarios, and my own testing agrees: after just this simple tuning, raw TCP performance nearly matched hy2.
Take the German NTT box I mentioned at the start: with BBRv3 on default settings it pushed 10+ MB/s to a China Telecom line. Granted, that was daytime off-peak, but it is still impressive.
In short: for things you don't truly understand, if you just want them to work well, tinker less.