Make MTU configurable and implement TCP MSS #1446
Conversation
Very nice! It's awesome how far we have come on this since the wsproxy days.
The code looks good to me. Could one of @basicer, @ProgrammerIn-wonderland or @chschnell do a quick review on this?
Code looks fine, I'll compile and test it tonight.
Using jumbo frames with virtio is a great idea! However, I can't reproduce the results, so clearly I'm doing something wrong here. I'm using […]. As I've done in the past, I tested […]. My guess is that it's something about wsinc or buildroot; I'll run more tests tomorrow.
@ading2210: Which guest OS did you use? Do you have any tips for me? And please fix the eslint errors :)
Console output of iperf and NIC setup (note the […])
@chschnell I used an Alpine Linux guest. I uploaded my disk image here so you can try with that one: https://local.ading.dev/array/alpine_hda.zip
I don't think there would be much of a performance improvement with a higher MTU on the older wsproxy network transports, especially if your proxy server is on localhost. That bypasses the v86 virtual TCP/IP stack entirely, so most of the changes in this PR don't have any effect. Try testing with Wisp networking instead.
Also, wisp.mercurywork.shop is bandwidth limited, so you probably want to use wisps://anura.pro/ instead.
basicer left a comment
LGTM
Thanks all!
This PR makes the MTU configurable in the v86 TCP/IP stack and the virtio NIC. It also implements the TCP maximum segment size (MSS) option.
The MTU can now be set as high as 65535 bytes, which leads to a massive speed improvement: I was able to get 400 Mbit/s download and 230 Mbit/s upload speeds inside the guest VM while using Wisp networking.
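On the virtio side, the virtio-net specification already defines a standard way for a device to advertise its MTU to the guest driver: the VIRTIO_NET_F_MTU feature bit plus an `mtu` field in the device configuration space. The snippet below is only an illustrative sketch of that mechanism (offsets per the spec), not the actual v86 code.

```js
// Illustrative sketch of virtio-net MTU advertisement (per the virtio spec),
// not the actual v86 implementation.
const VIRTIO_NET_F_MTU = 3;  // feature bit: device reports a maximum MTU
// A device offering a configurable MTU would include (1 << VIRTIO_NET_F_MTU)
// in its feature bits and expose the value in its config space.

// struct virtio_net_config layout: mac[6], status (le16),
// max_virtqueue_pairs (le16), mtu (le16), ...
function build_net_config(mac, mtu)
{
    const config = new Uint8Array(12);
    config.set(mac, 0);               // bytes 0-5: MAC address
    // bytes 6-9: status and max_virtqueue_pairs, left as 0 in this sketch
    config[10] = mtu & 0xFF;          // bytes 10-11: mtu, little-endian
    config[11] = (mtu >> 8) & 0xFF;
    return config;
}
```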
Implementing the TCP MSS option allows the guest to send maximum-sized TCP payloads for the configured MTU (MTU minus 40 bytes for the IP and TCP headers). Previously, the MSS option was not specified by the v86 TCP stack, so the guest would fall back to sending 536-byte TCP payloads (the default required by the TCP spec when no MSS is advertised).
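To illustrate the MSS arithmetic, here is a minimal sketch (not the actual v86 code) of how the MSS option bytes for a TCP SYN can be derived from the MTU: option kind 2, length 4, followed by the 16-bit MSS value, where MSS = MTU − 40 for IPv4 (20-byte IP header plus 20-byte TCP header).

```js
// Minimal sketch (not the actual v86 code): derive the TCP MSS option
// bytes for a SYN segment from the configured MTU.
function build_mss_option(mtu)
{
    // MSS = MTU - 20 (IPv4 header) - 20 (TCP header)
    const mss = mtu - 40;
    return new Uint8Array([
        2,                  // option kind: Maximum Segment Size
        4,                  // option length in bytes
        (mss >> 8) & 0xFF,  // MSS value, high byte
        mss & 0xFF,         // MSS value, low byte
    ]);
}

// With the 65535-byte MTU used here, the advertised MSS is 65495.
console.log(build_mss_option(65535)); // [2, 4, 255, 215]
```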
I've tested this, and it works well with the Wisp, fetch, and wsproxy network backends.
I then added options to the frontend to specify the NIC type and MTU.
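As a rough usage sketch, a configuration along these lines should enable the virtio NIC with a large MTU over Wisp. The `net_device` fields other than `mtu` follow the existing v86 API; the `mtu` field name and the disk image path are assumptions for illustration, not necessarily the final API.

```js
// Hypothetical usage sketch -- the "mtu" option name is an assumption based
// on this PR's description, not necessarily the final v86 API.
const emulator = new V86({
    wasm_path: "build/v86.wasm",
    memory_size: 512 * 1024 * 1024,
    hda: { url: "images/alpine.img", async: true },  // example disk image
    net_device: {
        relay_url: "wisps://anura.pro/",  // Wisp relay (see comments above)
        type: "virtio",                   // virtio NIC instead of the default ne2k
        mtu: 65535,                       // large MTU for maximum throughput
    },
});
```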
Also included in the PR is a fix for a bug in the Wisp network adapter that would cause a crash if the Wisp stream started buffering during uploads.
As a result of these changes, the network is now fast enough to facilitate hardware-accelerated OpenGL in the guest via VirGL over TCP:
8mb.video-REt-sa4Ba3iS.mp4
I posted more details about that here: #51 (comment)