Conversation

@Xuyuchao-juice (Contributor) commented Nov 26, 2025

Background:
When mounting with -o writeback_cache and running sequential writes, the kernel issues excessive setattr (time updates), which drags down throughput.

Change:
Add a short-lived attr cache plus an async SetAttr worker pool so pure time updates can be deduplicated, rate-limited, and handled without blocking the foreground path.
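
For context, a minimal sketch of what such a cache plus async SetAttr worker pool can look like. This is illustrative only: the names (attrCache, Ino, flush) and the flush-after-TTL policy are assumptions, not this PR's actual implementation.

// attr_sketch.go — illustrative sketch, not this PR's code
package main

import (
	"fmt"
	"sync"
	"time"
)

type Ino uint64

// pendingAttr holds the latest time update for an inode; updates arriving
// within the TTL window overwrite it, which is where deduplication happens.
type pendingAttr struct {
	mtime, atime time.Time
}

type attrCache struct {
	mu      sync.Mutex
	ttl     time.Duration
	pending map[Ino]*pendingAttr
	tokens  chan struct{} // bounds concurrent background SetAttr calls
	flush   func(ino Ino, mtime, atime time.Time)
}

func newAttrCache(ttl time.Duration, maxWorkers int, flush func(Ino, time.Time, time.Time)) *attrCache {
	return &attrCache{
		ttl:     ttl,
		pending: make(map[Ino]*pendingAttr),
		tokens:  make(chan struct{}, maxWorkers),
		flush:   flush,
	}
}

// setTimes absorbs a pure time update and returns immediately, so the
// foreground write path never waits on the metadata engine.
func (c *attrCache) setTimes(ino Ino, mtime, atime time.Time) {
	c.mu.Lock()
	if p, ok := c.pending[ino]; ok {
		p.mtime, p.atime = mtime, atime // deduplicate into the scheduled flush
		c.mu.Unlock()
		return
	}
	c.pending[ino] = &pendingAttr{mtime: mtime, atime: atime}
	c.mu.Unlock()

	go func() {
		time.Sleep(c.ttl)      // absorb further updates for one TTL window
		c.tokens <- struct{}{} // rate-limit: at most maxWorkers flushes in flight
		defer func() { <-c.tokens }()
		c.mu.Lock()
		p := c.pending[ino]
		delete(c.pending, ino)
		c.mu.Unlock()
		if p != nil {
			c.flush(ino, p.mtime, p.atime)
		}
	}()
}

func main() {
	cache := newAttrCache(100*time.Millisecond, 4, func(ino Ino, mtime, _ time.Time) {
		fmt.Printf("SetAttr(ino=%d, mtime=%v)\n", ino, mtime) // stand-in for the real RPC
	})
	for i := 0; i < 1000; i++ { // 1000 time updates collapse into one SetAttr
		now := time.Now()
		cache.setTimes(42, now, now)
	}
	time.Sleep(300 * time.Millisecond) // let the background flush run
}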

Test Method and Script:
Use 4 KB-block sequential writes to create 4 files of 512 MB each.
Metadata engine: SQLite.

// seq_write.go
package main

import (
	"crypto/rand"
	"fmt"
	"os"
	"time"
)

const (
	blockSize = 4 * 1024          // 4KB
	fileSize  = 512 * 1024 * 1024 // 512MB
)

func main() {
	numIterations := 4

	// Track the total elapsed time across all files
	totalDuration := time.Duration(0)

	for iteration := 1; iteration <= numIterations; iteration++ {
		// Number of blocks in the file
		numBlocks := fileSize / blockSize

		// Create the output file
		fileName := fmt.Sprintf("seq_write_%d.dat", iteration)
		file, err := os.Create(fileName)
		if err != nil {
			fmt.Printf("创建文件 %s 失败: %v\n", fileName, err)
			continue
		}

		fmt.Printf("\n开始写入文件 %s (迭代 %d/%d)\n", fileName, iteration, numIterations)

		// Preallocate the file's space
		if err := file.Truncate(fileSize); err != nil {
			fmt.Printf("failed to preallocate file space: %v\n", err)
			file.Close()
			continue
		}

		// Generate one random 4 KB block, reused for every write
		dataBlock := make([]byte, blockSize)
		if _, err := rand.Read(dataBlock); err != nil {
			fmt.Printf("failed to generate random data: %v\n", err)
			file.Close()
			continue
		}

		// Record the start time
		startTime := time.Now()

		for writtenBlocks := 1; writtenBlocks <= numBlocks; writtenBlocks++ {
			offset := int64(writtenBlocks-1) * blockSize

			if _, err := file.WriteAt(dataBlock, offset); err != nil {
				fmt.Printf("写入文件失败: %v\n", err)
				break
			}

			if writtenBlocks%100 == 0 || writtenBlocks == numBlocks {
				progress := float64(writtenBlocks) / float64(numBlocks) * 100
				fmt.Printf("\r进度: %.2f%% (%d/%d)", progress, writtenBlocks, numBlocks)
			}
		}
		fmt.Println()

		// Close the file
		file.Close()

		// Compute and report the elapsed time
		duration := time.Since(startTime)
		totalDuration += duration
		fmt.Printf("文件 %s 写入完成!耗时: %v\n", fileName, duration)
		fmt.Printf("写入速度: %.2f MB/s\n", float64(fileSize)/1024/1024/duration.Seconds())
	}

	// Print overall statistics
	fmt.Printf("\nAll %d files written\n", numIterations)
	fmt.Printf("total time: %v\n", totalDuration)
	fmt.Printf("average time per file: %v\n", totalDuration/time.Duration(numIterations))
	fmt.Printf("average write speed: %.2f MB/s\n", float64(fileSize*numIterations)/1024/1024/totalDuration.Seconds())
}
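
To reproduce, run the script from a directory inside the JuiceFS mount point (the path is illustrative), once on the base branch and once with this change: cd /jfs && go run seq_write.go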

Test Result:
(result screenshots attached in the original PR)

@Xuyuchao-juice (Contributor, Author) commented Nov 26, 2025

Could you check whether these values are set reasonably (a standalone sketch of how they fit together follows the list):

  1. The cached attr entry is currently valid for 10s.
  2. A background task cleans up expired cache entries every 1 minute.
  3. At most 100 concurrent setattr goroutines.
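
For reference, here is how these three knobs could wire together; all names below are hypothetical, not this PR's code.

// ttl_sweep_sketch.go — illustrative only
package main

import (
	"sync"
	"time"
)

const (
	attrCacheTTL      = 10 * time.Second // 1. validity of a cached attr entry
	cleanupInterval   = time.Minute      // 2. background sweep period
	maxSetattrWorkers = 100              // 3. cap on concurrent setattr goroutines
)

type ttlCache struct {
	mu      sync.Mutex
	expires map[uint64]time.Time // inode -> entry deadline
}

// sweep drops entries whose TTL has elapsed; run once per cleanupInterval.
func (c *ttlCache) sweep(now time.Time) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for ino, deadline := range c.expires {
		if now.After(deadline) {
			delete(c.expires, ino)
		}
	}
}

func main() {
	c := &ttlCache{expires: map[uint64]time.Time{1: time.Now().Add(attrCacheTTL)}}

	// The real change would run this loop for the lifetime of the mount.
	ticker := time.NewTicker(cleanupInterval)
	defer ticker.Stop()
	go func() {
		for now := range ticker.C {
			c.sweep(now)
		}
	}()

	_ = make(chan struct{}, maxSetattrWorkers) // would bound the setattr worker pool
	time.Sleep(10 * time.Millisecond)          // demo only; real code runs until unmount
}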

@jiefenghuang (Contributor) commented

With writeback_cache enabled, you can increase max-fuse-io; raising it to 1MiB should help reduce the proportion of such requests.
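
For example (illustrative invocation; the exact option spelling is an assumption and should be checked against juicefs mount --help): juicefs mount -o writeback_cache --max-fuse-io=1M META-URL /jfs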

@jiefenghuang marked this pull request as draft on November 28, 2025 02:01