Conversation
|
I tried it at each of these places that get set to nil |
|
@hossinasaadi didn't you say it might affect the API? Please test it; the next release is the v26.1.23 stable,
and personally I think that claim is nonsense
|
Actually the reverted PR did help Xray's startup speed a bit: on my router the previous version started in 2-4 s, while this version takes about 8 s
|
The reverted code is actually equivalent to the two lines below, and this PR cleans up even sooner. Why it would affect startup speed I have no idea; could you test this PR? |
|
@Fangliding how did you measure the peak-memory drop for those PRs? I observe the before/after change like this (needs "fmt" and "runtime" imported):
func printMemStats() {
	runtime.GC() // force a collection so HeapAlloc only counts live objects
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("heap: %d MB | sys: %d MB\n",
		m.HeapAlloc/1024/1024, m.Sys/1024/1024)
} |
|
With memory that plentiful the measurement isn't reliable: nothing forces a GC, so it comes down to luck
|
Set GOMEMLIMIT=50MiB, that should be rigorous enough. I forgot this existed at the time
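A minimal sketch of that setup, assuming the limit is set from inside the test binary rather than via the environment variable; loadGeodat is a placeholder for whatever code path is being measured:
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

func main() {
	// Same effect as running the process with GOMEMLIMIT=50MiB:
	// the GC is forced to keep the heap near the limit, so the
	// measurement no longer depends on how lazily GC happens to run.
	debug.SetMemoryLimit(50 << 20)

	// loadGeodat() // placeholder: the geodat loading path under test

	runtime.GC()
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("heap: %d MB | sys: %d MB\n", m.HeapAlloc>>20, m.Sys>>20)
}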
|
Still not reliable; it works sometimes and not other times
|
|
You can parse the geodat file directly byte by byte; see what I wrote a few years ago: v2fly/v2ray-core#934. After V2Ray released v5 the relevant code was moved into decode.go. |
|
@RPRX that one streams the file instead of reading the whole geodat into RAM, so it can solve the 1+3 peak
|
But I don't think we need it for now; the geodat isn't big, and it gets GCed once the unmarshal is done
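For reference, a rough sketch of what such a streaming parse could look like (not the actual decode.go code): it assumes the geodat file is a single protobuf message whose top-level field is a repeated, length-delimited entry, and it only counts entries instead of unmarshaling them:
package main

import (
	"bufio"
	"encoding/binary"
	"fmt"
	"io"
	"log"
	"os"

	"google.golang.org/protobuf/encoding/protowire"
)

func main() {
	f, err := os.Open("geoip.dat") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	br := bufio.NewReader(f)
	count := 0
	for {
		// Each top-level record is "tag varint, length varint, payload".
		tag, err := binary.ReadUvarint(br)
		if err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		if _, typ := protowire.DecodeTag(tag); typ != protowire.BytesType {
			log.Fatal("unexpected wire type")
		}
		size, err := binary.ReadUvarint(br)
		if err != nil {
			log.Fatal(err)
		}
		// Only one entry is held in memory at a time; a real parser would
		// proto.Unmarshal(buf, ...) into a GeoIP/GeoSite message here.
		buf := make([]byte, size)
		if _, err := io.ReadFull(br, buf); err != nil {
			log.Fatal(err)
		}
		count++
	}
	fmt.Println("entries:", count)
}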
|
On Linux the memory backing an mmap'd file is page cache: it doesn't count toward the heap, it gets reclaimed first under memory pressure (and re-read from disk on the next access), and it saves one copy. Heap memory, by contrast, becomes anonymous pages that can only be pushed to swap when memory runs out. So I still think reading the file via mmap is better. |
|
Reducing the startup memory peak is mainly for iOS. I don't think any change is needed just for 1+3; even this PR is barely worth it. iOS's real problem is that resident memory after startup is too high |
|
I don't know iOS's kernel mechanics, but mmap-ing the file will definitely help small-memory devices like OpenWrt hardware routers |
|
Actually iOS can test this simply: mmap a file far larger than the memory limit and read it in full (e.g. compute its checksum). On Linux an mmap'd file can exceed physical memory; the system evicts unused page cache in LRU order. |
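A minimal sketch of that test, assuming a Linux or Darwin host and using syscall.Mmap directly; the file path is a placeholder:
package main

import (
	"fmt"
	"hash/crc32"
	"log"
	"os"
	"syscall"
)

func main() {
	f, err := os.Open("/path/to/huge.bin") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	st, err := f.Stat()
	if err != nil {
		log.Fatal(err)
	}

	// Map the whole file read-only; the pages are backed by page cache,
	// not by the Go heap, and can be reclaimed under memory pressure.
	data, err := syscall.Mmap(int(f.Fd()), 0, int(st.Size()),
		syscall.PROT_READ, syscall.MAP_SHARED)
	if err != nil {
		log.Fatal(err)
	}
	defer syscall.Munmap(data)

	// Touch every page by checksumming the whole mapping.
	fmt.Printf("crc32: %08x\n", crc32.ChecksumIEEE(data))
}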
|
Yes, we need to wait for their feedback. If needed we can port @Loyalsoldier's code over, and mmap can be stitched in as well |
|
|
|
Hi, there was an internet blackout here and I missed a lot. |
|
@hossinasaadi those earlier PRs caused some problems |
|
In other words, reducing peak startup memory doesn't need deep or complicated changes; the small change in this PR is enough, and iOS should use #5505. @Meo597 @hossinasaadi what do you think? |
|
iOS's real problem is runtime memory usage, not the startup peak. Someone claimed mmap counts as swap there |
|
We should first figure out at which step iOS actually starts crashing: 1+3 or 3+4 |
|
@hossinasaadi which app is this? This PR frees the deserialized config entries from memory as the matchers are built |
|
@Meo597 a custom app; it reads the config file again and seems to crash while reading the CN geodat. I'm checking it further. |
|
@enriquephl every time I see the things you post my blood pressure spikes. The problems are glaring:
1 is a simple QA question, and what is 2 even supposed to mean? You even opened an issue to ask; is that what issues are for? If you have the energy to spam, why not just switch it yourself and try?? |
|
It takes talent to send a developer's blood pressure up within a few messages. Wasting my time is one thing, but after getting your answer you still refuse to change the config and have to grumble some more. What exactly are you here for??
|
|
@enriquephl just stick with battle-tested mainline protocols and you'll be fine |
|
I don't know where else to put it. Since you discussed the new XDRIVE protocol here, I'll share it here. I originally wrote about Yandex Disk (#5414 (comment)), which is basically like Google Drive. But then I did some research and realized that it might be even better to use S3-compatible stores as proxies: Yandex Disk, Google Drive and other cloud drives probably all have different APIs, and adding a single unified transport for all of them is going to be harder than using S3 stores, which should all be compatible with each other. With the help of AI, I wrote a simple proof of concept. It's a server and a client; they use a locally running instance of MinIO as a proxy. The server polls the S3 store at a fixed interval; when a new chunk of data appears it fetches it, and when all chunks are fetched the file is re-assembled. Link to the code: I only tested it locally so far. I will try a real S3 store (e.g. Yandex and VK Cloud both have them) later. The size of the binaries is big, though; I guess I should use plain REST API requests instead of the Go S3 library. What do you think? |
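This is not the author's PoC, just a rough sketch of what the receiver's poll-and-fetch loop against a local MinIO could look like; the endpoint, credentials, bucket name and chunks/ prefix are all placeholders:
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"sort"
	"time"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	client, err := minio.New("localhost:9000", &minio.Options{
		Creds:  credentials.NewStaticV4("minioadmin", "minioadmin", ""),
		Secure: false,
	})
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	seen := make(map[string]bool)

	for range time.Tick(2 * time.Second) { // fixed polling interval
		// List chunk objects and collect the ones we have not seen yet.
		var fresh []string
		for obj := range client.ListObjects(ctx, "xdrive", minio.ListObjectsOptions{
			Prefix:    "chunks/",
			Recursive: true,
		}) {
			if obj.Err != nil {
				log.Println("list:", obj.Err)
				continue
			}
			if !seen[obj.Key] {
				fresh = append(fresh, obj.Key)
			}
		}
		sort.Strings(fresh) // chunk names are assumed to sort in upload order

		for _, key := range fresh {
			r, err := client.GetObject(ctx, "xdrive", key, minio.GetObjectOptions{})
			if err != nil {
				log.Println("get:", err)
				continue
			}
			data, err := io.ReadAll(r)
			r.Close()
			if err != nil {
				log.Println("read:", err)
				continue
			}
			seen[key] = true
			fmt.Printf("chunk %s: %d bytes\n", key, len(data))
			// A real implementation would append data to the reassembled stream here.
		}
	}
}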
|
@paqx Great. I've also considered using cloud storage services, but consumer cloud drives have two advantages: first, they're free; second, they're consumer-grade and widely used, so their domains can be used to hide the proxy. For S3 stores, could you list their prices and which domains we could use to hide the proxy? Of course XDRIVE's goal isn't just compatibility with cloud drives but exploiting anything usable, so S3 stores can be supported as one of the options. "No public IP of your own required" is its design-pattern difference from XHTTP: the XDRIVE hub doesn't listen on any port, so it can sit on, say, a residential connection abroad or inside a free container service. We found that Google Drive can't do "consume while uploading". Since the XDRIVE plan has already been announced, we might as well discuss progress publicly from now on and pool ideas. Below is the email @iambabyninja sent me today:
|
|
I have an idea in mind, but first I'd like to know where the current difficulty lies. The idea targets high latency, i.e. low IOPS but not-(too-)low bandwidth |
|
Listing files on a cloud drive may lag by a few seconds, which is actually acceptable. Even with revisions no state machinery is needed: just upload in order and read in order, without caring whether the other side has read anything yet,
Well, I didn't know, but you mentioned Google Drive, so I thought it wouldn't use S3.
Here's a link to Yandex Cloud: VK Cloud: Edge Center (they are too greedy, I wouldn't consider this option unless it's really necessary): Check if any of them are accessible for you. There are probably others, but I listed these S3 stores because they are probably white-listed in Russia and will allow users to bypass mobile internet restrictions.
It seems to me it can be configurable:
|
|
@paqx Just test whether their API endpoints are reachable from inside and outside the country. The latest tests show Google Drive only "requires verification" for "problematic accounts"; fresh accounts don't seem to have this issue. The send interval is of course configurable; for ordinary browsing, upload/download and video, 500 ms is actually enough, |
|
Here is the test code mentioned above (sensitive values removed), @paqx
package main
import (
"bytes"
"context"
"fmt"
"io"
"log"
"net/http"
"strconv"
"sync"
"time"
"golang.org/x/oauth2"
"google.golang.org/api/drive/v3"
"google.golang.org/api/option"
)
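// Tunables: 5 MiB of test data is streamed in 512 KiB segments; the writer
// keeps the last RetentionWindow segments and the reader polls every 500 ms.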
const (
ClientID = ""
ClientSecret = ""
RefreshToken = ""
FolderID = ""
TestDataSize = 5 * 1024 * 1024
SegmentSize = 512 * 1024
RetentionWindow = 30
PollInterval = 500 * time.Millisecond
)
func main() {
log.SetFlags(log.Ltime | log.Lmicroseconds)
srv := driveService()
log.Println("SYS: INIT WRITER")
writer := NewLogWriter(srv)
ctx, cancel := context.WithCancel(context.Background())
defer func() {
log.Println("SYS: CLEANUP")
srv.Files.Delete(writer.manifestID).Do()
for _, fid := range writer.activeSegments {
srv.Files.Delete(fid).Do()
}
}()
log.Println("SYS: INIT READER")
reader := &LogReader{
srv: srv,
manifestID: writer.manifestID,
expectedSeq: 0,
}
dataChannel := make(chan []byte, 100)
go reader.Start(ctx, dataChannel)
log.Println("SYS: GEN TEST DATA")
testData := make([]byte, TestDataSize)
for i := range testData {
testData[i] = byte(i % 255)
}
go func() {
log.Println("SYS: START STREAM")
chunkSize := 64 * 1024
for i := 0; i < len(testData); i += chunkSize {
end := i + chunkSize
if end > len(testData) {
end = len(testData)
}
writer.Write(testData[i:end])
time.Sleep(10 * time.Millisecond)
}
writer.Close()
log.Println("SYS: WRITER FINISHED")
}()
var receivedData []byte
// Client Loop
for len(receivedData) < len(testData) {
chunk := <-dataChannel
receivedData = append(receivedData, chunk...)
log.Printf("Client: Received total %d / %d bytes", len(receivedData), len(testData))
}
cancel()
time.Sleep(1 * time.Second)
if bytes.Equal(testData, receivedData) {
log.Println("SYS: DATA VERIF. ALL OK.")
} else {
log.Println("SYS: OH SHIT...")
}
}
func driveService() *drive.Service {
ctx := context.Background()
cfg := &oauth2.Config{
ClientID: ClientID,
ClientSecret: ClientSecret,
Endpoint: oauth2.Endpoint{TokenURL: "https://oauth2.googleapis.com/token"},
}
token := &oauth2.Token{
RefreshToken: RefreshToken,
Expiry: time.Now().Add(-time.Hour),
TokenType: "Bearer",
}
httpClient := &http.Client{Timeout: 60 * time.Second}
ctx = context.WithValue(ctx, oauth2.HTTPClient, httpClient)
srv, err := drive.NewService(ctx, option.WithTokenSource(cfg.TokenSource(ctx, token)))
if err != nil {
log.Fatal("Failed to create Drive service: ", err)
}
return srv
}
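// retryForever retries operation with exponential backoff (500 ms doubling
// up to 10 s) until it succeeds, logging every failed attempt.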
func retryForever(name string, operation func() error) {
backoff := 500 * time.Millisecond
maxBackoff := 10 * time.Second
attempt := 1
for {
err := operation()
if err == nil {
return
}
log.Printf("SYS: Retry %s failed (attempt %d): %v. Retrying in %v...", name, attempt, err, backoff)
time.Sleep(backoff)
backoff *= 2
if backoff > maxBackoff {
backoff = maxBackoff
}
attempt++
}
}
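// LogWriter buffers writes and flushes them to Drive as numbered segment
// files, publishing each segment's file ID in the manifest's appProperties
// and deleting segments that fall outside the retention window.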
type LogWriter struct {
srv *drive.Service
manifestID string
mu sync.Mutex
buf []byte
seq int
activeSegments map[int]string
}
func NewLogWriter(srv *drive.Service) *LogWriter {
var manifestID string
retryForever("Init Manifest", func() error {
m, err := srv.Files.Create(&drive.File{
Name: "xdrive_wal_manifest.json",
Parents: []string{FolderID},
AppProperties: map[string]string{"type": "wal"},
}).Fields("id").Do()
if err == nil {
manifestID = m.Id
}
return err
})
return &LogWriter{
srv: srv,
manifestID: manifestID,
activeSegments: make(map[int]string),
}
}
func (w *LogWriter) Write(p []byte) {
w.mu.Lock()
defer w.mu.Unlock()
w.buf = append(w.buf, p...)
if len(w.buf) >= SegmentSize {
w.flush()
}
}
func (w *LogWriter) Close() {
w.mu.Lock()
defer w.mu.Unlock()
if len(w.buf) > 0 {
w.flush()
}
}
func (w *LogWriter) flush() {
data := w.buf
currentSeq := w.seq
fileName := fmt.Sprintf("seg_%09d.bin", currentSeq)
var fileID string
retryForever(fmt.Sprintf("Upload Seg %d", currentSeq), func() error {
f, err := w.srv.Files.Create(&drive.File{
Name: fileName,
Parents: []string{FolderID},
}).Media(bytes.NewReader(data)).Fields("id").Do()
if err == nil {
fileID = f.Id
}
return err
})
w.activeSegments[currentSeq] = fileID
retryForever("Update Manifest", func() error {
props := make(map[string]string)
props[fmt.Sprintf("s%d", currentSeq)] = fileID
oldSeq := currentSeq - RetentionWindow
if _, ok := w.activeSegments[oldSeq]; ok {
props[fmt.Sprintf("s%d", oldSeq)] = ""
}
_, err := w.srv.Files.Update(w.manifestID, &drive.File{
AppProperties: props,
}).Fields("id").Do()
return err
})
log.Printf("WRITER: COMMITTED SEQ=%d", currentSeq)
w.seq++
w.buf = nil
oldSeq := currentSeq - RetentionWindow
if oldID, ok := w.activeSegments[oldSeq]; ok {
delete(w.activeSegments, oldSeq)
go func(fid string) {
w.srv.Files.Delete(fid).Do()
}(oldID)
}
}
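// LogReader polls the manifest, downloads segments strictly in sequence
// order, and jumps forward to the oldest available segment if it falls
// behind the retention window.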
type LogReader struct {
srv *drive.Service
manifestID string
expectedSeq int
}
func (r *LogReader) Start(ctx context.Context, ch chan<- []byte) {
ticker := time.NewTicker(PollInterval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
m, err := r.srv.Files.Get(r.manifestID).Fields("appProperties").Do()
if err != nil || m.AppProperties == nil {
continue
}
availableSeqs := make(map[int]string)
minSeq := -1
for k, v := range m.AppProperties {
if len(k) > 1 && k[0] == 's' {
if s, err := strconv.Atoi(k[1:]); err == nil {
availableSeqs[s] = v
if minSeq == -1 || s < minSeq {
minSeq = s
}
}
}
}
if len(availableSeqs) == 0 {
continue
}
if r.expectedSeq < minSeq {
log.Printf("READER: LAG! Expected SEQ=%d, Min Available=%d. Jumping...", r.expectedSeq, minSeq)
r.expectedSeq = minSeq
}
fileID, exists := availableSeqs[r.expectedSeq]
if !exists {
continue
}
var data []byte
retryForever(fmt.Sprintf("Download Seg %d", r.expectedSeq), func() error {
resp, err := r.srv.Files.Get(fileID).Download()
if err != nil {
return err
}
defer resp.Body.Close()
data, err = io.ReadAll(resp.Body)
return err
})
log.Printf("READER: Downloaded SEQ=%d (%d bytes)", r.expectedSeq, len(data))
ch <- data
r.expectedSeq++
}
}
} |
|
Also, revisions are only supported by Google Drive and OneDrive; for Yandex Disk and S3 stores we'd have to create multiple files, so at least two modes are needed. None of them support "start downloading before the upload finishes", so true streaming isn't possible; if we later find something that can, that could be a third mode. And there's a fourth mode, things like "changing a username", where you only get a couple of slots and have to wait for the other side to finish reading before pushing new data, but that's for later |
|
https://t.me/projectXray/4629124
|
|
I just poked Qwen and it told me Alibaba's OSS has an upload/download mode for log-style files that keep growing but stay fairly small, called "append upload + partial download": in short, you can upload only a file's newly added part and specify the byte positions a download should start and end at.
My question was: with a continuous data stream of moderate volume, how do you upload and download at the same time, and can the receiver download only the newly appended part of the file?
Qwen's answer, condensed: use Append Object (append upload) plus Range download. Every append leaves the object immediately readable with the accumulated content. The receiver keeps a read offset (say it has read up to byte 1024) and sends HTTP Range: bytes=1024- on the next request, so OSS returns only the data after that point; that gives incremental pulls without re-downloading bytes it already has. If the data is not strictly append-only (random writes, concurrent writers), the fallback is multipart upload plus metadata recording the ranges already uploaded, but that is more complex and suited to large files split into parts rather than a small continuous stream. |
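Not from the Qwen chat, just a rough Go sketch of that append/range pattern, assuming the aliyun-oss-go-sdk's AppendObject and GetObject(oss.Range(...)) calls; the endpoint, credentials, bucket and object names are placeholders:
package main

import (
	"fmt"
	"io"
	"log"
	"strings"

	"github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
	client, err := oss.New("https://oss-cn-hangzhou.aliyuncs.com", "<access-key-id>", "<access-key-secret>")
	if err != nil {
		log.Fatal(err)
	}
	bucket, err := client.Bucket("my-bucket") // placeholder bucket
	if err != nil {
		log.Fatal(err)
	}

	// Sender: append new data; OSS returns the next append position.
	var pos int64
	pos, err = bucket.AppendObject("stream.log", strings.NewReader("chunk-1"), pos)
	if err != nil {
		log.Fatal(err)
	}
	pos, err = bucket.AppendObject("stream.log", strings.NewReader("chunk-2"), pos)
	if err != nil {
		log.Fatal(err)
	}

	// Receiver: remember how many bytes were already read and fetch only the rest.
	var offset int64 = 7 // e.g. "chunk-1" (7 bytes) has been consumed already
	body, err := bucket.GetObject("stream.log", oss.Range(offset, pos-1))
	if err != nil {
		log.Fatal(err)
	}
	defer body.Close()
	newData, _ := io.ReadAll(body)
	fmt.Printf("new bytes: %q\n", newData)
}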
|
Is it usable from Iran and Russia?
|
|
(Hoping S3 buckets are the first thing the XDRIVE transport supports, then Cloudflare R2 could be used) |
|
|
|
@paqx has work on XDRIVE started?
The file is only a few tens of MB, so no change is really needed; insisting on mmap wouldn't help much since it depends on luck. Switch to streaming instead, or just split the file and load the pieces via ext: — this PR has now switched to streaming
Since keeping both alive at the same time raises the peak, this PR now nils each entry as its matcher is created