
feat(openshift): Optimize TCP settings for high throughput and low latency#809

Merged
yarda merged 1 commit into redhat-performance:master from gheffern:openshift_tcp_tuning
Feb 5, 2026

Conversation


@gheffern gheffern commented Nov 8, 2025

Apply specific kernel network tuning parameters to the OpenShift profile (/profiles/openshift/tuned.conf) to support high bandwidth-delay product links.

The following TCP settings are added:

- `net.ipv4.tcp_notsent_lowat=131072`: Sets `tcp_notsent_lowat` to a value intended to balance throughput and latency while limiting total socket memory usage.
- `net.ipv4.tcp_slow_start_after_idle=0`: Disables slow start after a connection has been idle, allowing immediate maximum throughput when data transfer resumes.
- `net.ipv4.tcp_rmem="4096 131072 16777216"`: Increases the maximum TCP receive buffer size to improve performance on high bandwidth-delay product connections.
- `net.ipv4.tcp_wmem="4096 16384 16777216"`: Increases the maximum TCP send buffer size for the same reason.
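To see why a 16 MiB buffer ceiling suits high bandwidth-delay product links, here is a back-of-envelope BDP check (the 10 Gbit/s link speed and 13 ms RTT are illustrative assumptions, not figures from this PR; only the 16777216-byte maximum comes from the settings above):

```python
# Rough bandwidth-delay product (BDP) sanity check for the new buffer maximum.
# Link speed and RTT below are hypothetical example values.

def bdp_bytes(bandwidth_bits_per_s: float, rtt_s: float) -> float:
    """Bytes that must be in flight to keep the pipe full."""
    return bandwidth_bits_per_s * rtt_s / 8

# Example: a 10 Gbit/s link with a 13 ms round-trip time.
bdp = bdp_bytes(10e9, 0.013)
max_buf = 16777216  # new tcp_rmem/tcp_wmem maximum (16 MiB)

print(f"BDP: {bdp / 2**20:.1f} MiB")           # ~15.5 MiB
print(f"buffer covers BDP: {max_buf >= bdp}")  # True
```

A maximum buffer smaller than the BDP would cap throughput on such a link regardless of congestion-control behavior, which is why the old defaults underperform on fast, long-haul paths.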

These changes are based on Cloudflare's work on improving TCP performance over high bandwidth-delay product links, but are more conservative in adjusting rmem and wmem.

Reference: https://blog.cloudflare.com/optimizing-tcp-for-high-throughput-and-low-latency/
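Based on the description above, the resulting `[sysctl]` fragment in /profiles/openshift/tuned.conf would look roughly like this (a sketch; the rest of the profile and any surrounding sections are not shown in this PR page):

```ini
[sysctl]
# Balance throughput and latency while capping unsent socket memory
net.ipv4.tcp_notsent_lowat=131072
# Do not fall back to slow start after an idle period
net.ipv4.tcp_slow_start_after_idle=0
# Raise the maximum receive/send buffers for high-BDP links
net.ipv4.tcp_rmem="4096 131072 16777216"
net.ipv4.tcp_wmem="4096 16384 16777216"
```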


Signed-off-by: Graham Heffern <gheffern@gmail.com>

yarda commented Feb 5, 2026

Thanks, LGTM. It seems `net.ipv4.tcp_slow_start_after_idle` isn't discussed in the Cloudflare blog post, but setting it to 0 makes sense as a performance improvement.

@yarda yarda merged commit 843f44e into redhat-performance:master Feb 5, 2026
14 of 16 checks passed
yarda added a commit to yarda/tuned that referenced this pull request Feb 5, 2026
Update the network-throughput profile and drop the now-duplicate tuning from
the openshift profile. Follows up redhat-performance#809 and cleans things up a bit.

Signed-off-by: Jaroslav Škarvada <jskarvad@redhat.com>
