Upgrade gVisor to latest version v0.0.0-20260109181451-4be7c433dae2 #5527

Merged
RPRX merged 1 commit into XTLS:main from Owersun:upgrade-gvisor
Jan 12, 2026

Conversation

@Owersun (Collaborator) commented Jan 12, 2026

gVisor upgraded to the latest version.
The Go bump to 1.25.5 is required by this gVisor version.

A slight modification to the wireguard udpHandler.HandlePacket() was needed to stay compatible with this version.
Someone responsible for proxy/wireguard should check that the change is OK.

proxy/tun, which also uses gVisor, runs just fine with this version thanks to its custom udpHandler.
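For reviewers, a minimal sketch of the handler surface involved, assuming gVisor's netstack API; the package and function names here are hypothetical, this is not the actual proxy/wireguard code, and exact signatures differ between gVisor releases:

```go
package example

import (
	"gvisor.dev/gvisor/pkg/tcpip/stack"
	"gvisor.dev/gvisor/pkg/tcpip/transport/udp"
)

// installUDPHandler registers a raw transport-level UDP handler with the netstack.
// The callback signature func(stack.TransportEndpointID, *stack.PacketBuffer) bool
// is the surface that tends to shift between gVisor releases.
func installUDPHandler(s *stack.Stack) {
	s.SetTransportProtocolHandler(udp.ProtocolNumber,
		func(id stack.TransportEndpointID, pkt *stack.PacketBuffer) bool {
			_ = id      // the packet's 4-tuple
			_ = pkt     // the packet's buffers
			return true // true tells the stack the packet was consumed
		})
}
```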

@Fangliding (Member)

better wait until go 1.26

@RPRX (Member) commented Jan 12, 2026

better wait until go 1.26

Why?

@RPRX (Member) commented Jan 12, 2026

@Owersun Did you run go mod tidy?

@RPRX (Member) commented Jan 12, 2026

If we don't upgrade now, we may run into this bug: #5525 (comment)

By the way, Xray WireGuard's use of gVisor really doesn't look like FullCone.

@Owersun (Collaborator, Author) commented Jan 12, 2026

@Owersun Did you run go mod tidy?

Sorry.
Ran it, committed, squashed.

@Owersun (Collaborator, Author) commented Jan 12, 2026

By the way, Xray WireGuard's use of gVisor really doesn't look like FullCone.

Even if it is, the udp_connection we implemented in tun is a better connection, because the default gVisor UDPHandler creates a new connection for each packet. It treats every packet as a new connection instead of ordering them into a stream.
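Roughly, the idea behind a per-flow handler looks like this; all names here (udpHandler, fourTuple, etc.) are hypothetical and this is not the actual proxy/tun code:

```go
package example

import (
	"net/netip"
	"sync"
)

// fourTuple identifies a UDP flow.
type fourTuple struct {
	src, dst netip.AddrPort
}

// udpConn is a stand-in for a per-flow connection object (queue, worker, idle timer...).
type udpConn struct {
	packets chan []byte
}

type udpHandler struct {
	mu    sync.Mutex
	conns map[fourTuple]*udpConn
}

func newUDPHandler() *udpHandler {
	return &udpHandler{conns: make(map[fourTuple]*udpConn)}
}

// handlePacket reuses the existing connection for a known flow and only creates a
// new one on the first packet, so later packets join the same "stream" instead of
// each spawning a fresh connection.
func (h *udpHandler) handlePacket(src, dst netip.AddrPort, payload []byte) {
	key := fourTuple{src: src, dst: dst}
	h.mu.Lock()
	conn, ok := h.conns[key]
	if !ok {
		conn = &udpConn{packets: make(chan []byte, 64)}
		h.conns[key] = conn
		// a real handler would also start a reader goroutine and an idle timer here
	}
	h.mu.Unlock()
	select {
	case conn.packets <- payload:
	default: // drop when the per-flow queue is full
	}
}
```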

Btw, why do you strike through your messages? Is it so Copilot doesn't pick up unrelated talk?

@RPRX (Member) commented Jan 12, 2026

because the default gVisor UDPHandler creates a new connection for each packet. It treats every packet as a new connection instead of ordering them into a stream.

WTF?

@Owersun (Collaborator, Author) commented Jan 12, 2026

because the default gVisor UDPHandler creates a new connection for each packet. It treats every packet as a new connection instead of ordering them into a stream.

WTF?

Yeah... gVisor's tcp.Forwarder.HandlePacket tries to find which connection the packet belongs to; udp.Forwarder.HandlePacket doesn't do any of that, it just dispatches every packet to the handler as a new forwarded packet.
gVisor's UDP handling is very simple.
But it makes sense in a way: since there is no flow control in UDP, there is no "connection start" or "connection end", so at the transport level you can treat each packet as completely independent of the others.
It's just a huge waste of resources, but when computers have gigabytes of RAM it's no big deal.
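A minimal sketch of that dispatch pattern, assuming gVisor's udp.Forwarder API (gvisor.dev/gvisor/pkg/tcpip/transport/udp); this is not Xray's actual code, and exact signatures vary between gVisor releases:

```go
package example

import (
	"gvisor.dev/gvisor/pkg/tcpip/stack"
	"gvisor.dev/gvisor/pkg/tcpip/transport/udp"
	"gvisor.dev/gvisor/pkg/waiter"
)

// installUDPForwarder wires udp.Forwarder into the stack. Unlike tcp.Forwarder,
// the UDP forwarder keeps no flow state of its own: the callback fires for every
// UDP packet that has no matching endpoint yet.
func installUDPForwarder(s *stack.Stack) {
	fwd := udp.NewForwarder(s, func(r *udp.ForwarderRequest) {
		var wq waiter.Queue
		ep, err := r.CreateEndpoint(&wq) // binds an endpoint for this packet's 4-tuple
		if err != nil {
			return
		}
		// hand ep off to the proxy's UDP handling; doing this per packet is
		// what the "new connection for each packet" behaviour above looks like
		_ = ep
	})
	s.SetTransportProtocolHandler(udp.ProtocolNumber, fwd.HandlePacket)
}
```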

@RPRX (Member) commented Jan 12, 2026

Forget it, let's not wait for Go 1.25.6; let's merge this and the hy2 one first and cut a release. Otherwise, if go.mod requires 1.25.6, it'll be hard to build on the soft-router side, where Go versions update slowly.

@RPRX merged commit e742e84 into XTLS:main Jan 12, 2026
39 checks passed
@Owersun deleted the upgrade-gvisor branch January 13, 2026 16:41
@RPRX (Member) commented Jan 21, 2026

@Owersun By the way, how come this upgrade doesn't include #5561 (comment)? I see that commit was made before this PR.

@RPRX (Member) commented Jan 21, 2026

I can't even find the commit id you upgraded to in the gVisor main branch.

@Fangliding (Member)

better wait until go 1.26

Why?

Because Go 1.26 lands in February. This thing is closely tied to Go internals, so it's best to upgrade it together with Go; otherwise bumping the Go version may blow things up again and we'd have to upgrade it once more.

@RPRX (Member) commented Jan 21, 2026

Compared to sticking with an old version that may have a bug, upgrading the dependency a few extra times is no big deal.

@gumiruo commented Jan 21, 2026

@RPRX This commit is on the go branch.

@Owersun (Collaborator, Author) commented Jan 21, 2026

gVisor can be upgraded in two ways. If you do "go get gvisor.dev/gvisor", you get a gVisor from their master branch that doesn't compile properly.
The proper upgrade (and the one the gVisor authors recommend) is "go get gvisor.dev/gvisor@go"; then you get a version that compiles and works properly (google/gvisor#11531).
The go branch is still on the 1.25.5 version, and the master branch still doesn't compile.
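For reference, a sketch of that upgrade flow; the commands are the ones quoted above, and the require line is inferred from this PR's title rather than copied from go.mod:

```
# track the compilable "go" branch, per google/gvisor#11531
go get gvisor.dev/gvisor@go
go mod tidy

# go.mod then records a pseudo-version for that branch head, e.g.:
#   gvisor.dev/gvisor v0.0.0-20260109181451-4be7c433dae2
```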

@RPRX (Member) commented Jan 22, 2026

@gumiruo I see it; it just barely missed including that fix.
