Description
I'm seeing ~8 MB/s throughput on a large file copy over a 1 Gbps connection with ~6 ms latency (with threads anywhere from 8 to 32). The server is diod and the client has the export mounted via diodmount on v9fs; I've tried msize from the default all the way up to msize=1048576. For comparison, scp manages ~35 MB/s and wget ~90 MB/s (all with no performance options). I've got what I think are reasonable TCP tunables on either end (a rough sketch of the mount options and sysctls is included after the ss output below). ss -tmi on the client shows a strange discrepancy between http and 9p via v9fs:
for wget 'http://box/test.dat':
ss -tmi
skmem:(r0,rb84751494,t4,tb46080,f0,w0,o0,bl0,d6) cubic wscale:14,14 rto:206.666 rtt:5.477/2.296 ato:40 mss:1388 pmtu:1500 rcvmss:1388 advmss:1448 cwnd:10 bytes_sent:127 bytes_acked:128 bytes_received:327466024 segs_out:39483 segs_in:236042 data_segs_out:1 data_segs_in:236040 send 20.3Mbps lastsnd:3940 pacing_rate 40.5Mbps delivery_rate 1.7Mbps delivered:2 app_limited busy:6ms rcv_rtt:7.672 rcv_space:3994532 rcv_ssthresh:60584075 minrtt:5.329 rcv_ooopack:7345 snd_wnd:32768 rcv_wnd:60588032
for cp /run/media/user/p9/test.dat /dev/null:
ss -tmi
skmem:(r0,rb881008,t0,tb46080,f3177,w919,o0,bl0,d535) cubic wscale:14,14 rto:206.666 rtt:6.35/0.219 ato:40 mss:1388 pmtu:1500 rcvmss:1388 advmss:1448 cwnd:10 bytes_sent:590149 bytes_retrans:11 bytes_acked:590116 bytes_received:1365512910 segs_out:168312 segs_in:1005326 data_segs_out:25687 data_segs_in:1004776 send 17.5Mbps pacing_rate 35Mbps delivery_rate 6.78Mbps delivered:25687 app_limited busy:168013ms unacked:1 retrans:0/1 dsack_dups:1 rcv_rtt:6.414 rcv_space:131153 rcv_ssthresh:615390 minrtt:5.494 snd_wnd:65536 rcv_wnd:622592
for diodcat with default msize:
ss -tmi
skmem:(r0,rb1253270,t0,tb46080,f3177,w919,o0,bl0,d0) cubic wscale:14,14 rto:206.666 rtt:6.524/0.633 ato:40 mss:1388 pmtu:1500 rcvmss:1388 advmss:1448 cwnd:10 bytes_sent:20978 bytes_acked:20956 bytes_received:58643268 segs_out:7206 segs_in:42968 data_segs_out:902 data_segs_in:42966 send 17Mbps lastsnd:4 lastrcv:4 lastack:4 pacing_rate 34Mbps delivery_rate 1.89Mbps delivered:902 app_limited busy:6079ms unacked:1 rcv_rtt:6.588 rcv_space:133822 rcv_ssthresh:875593 minrtt:5.867 snd_wnd:65536 rcv_wnd:884736
(9 MB/s)
for diodcat with -m 1048576:
ss -tmi
skmem:(r0,rb805413,t0,tb46080,f3177,w919,o0,bl0,d0) cubic wscale:14,14 rto:206.666 rtt:6.504/0.275 ato:40 mss:1388 pmtu:1500 rcvmss:1388 advmss:1448 cwnd:10 bytes_sent:48417 bytes_acked:48395 bytes_received:136812207 segs_out:12623 segs_in:100233 data_segs_out:2095 data_segs_in:100230 send 17.1Mbps lastsnd:7 lastrcv:7 lastack:7 pacing_rate 34.1Mbps delivery_rate 2.31Mbps delivered:2095 app_limited busy:13455ms unacked:1 rcv_rtt:6.325 rcv_space:142150 rcv_ssthresh:562443 minrtt:4.815 snd_wnd:65536 rcv_wnd:573440
(just above 9 MB/s)
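As mentioned above, here is roughly the mount and TCP configuration in play. The export name, port, and sysctl values below are placeholders/examples rather than my exact settings:

# Equivalent plain 9p mount (diodmount sets up something along these lines;
# aname and port here are placeholders):
mount -t 9p -o trans=tcp,port=564,msize=1048576,aname=/export box /run/media/user/p9

# TCP tunables applied on both ends (example values of what I'd call "reasonable"):
sysctl -w net.core.rmem_max=67108864
sysctl -w net.core.wmem_max=67108864
sysctl -w net.ipv4.tcp_rmem="4096 131072 67108864"
sysctl -w net.ipv4.tcp_wmem="4096 131072 67108864"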
This throughput is well below even what was reported in #59. I'm presuming this is atypical; do you have any pointers on where to look?
(Linux 6.12.8 PREEMPT_DYNAMIC on the client; on the server, the latest diod release, converted from a Debian .deb)
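For what it's worth, some back-of-envelope arithmetic on the numbers above (using the ~6.5 ms RTT that ss reports; this is illustrative math, not a diagnosis):

# Bytes in flight needed to fill 1 Gbps at ~6.5 ms RTT:
awk 'BEGIN { printf "%.0f KB\n", (1e9/8)*0.0065/1024 }'    # ~793 KB
# Ceiling if a single 1 MiB (msize) request is outstanding per RTT:
awk 'BEGIN { printf "%.0f MB/s\n", 1048576/0.0065/1e6 }'   # ~161 MB/s
# Data in flight per RTT implied by the ~9 MB/s I actually see:
awk 'BEGIN { printf "%.0f KB\n", 9e6*0.0065/1024 }'        # ~57 KB

That last figure is in the same ballpark as the ~64 KB snd_wnd and small rcv_space values ss shows on the 9p socket, which is part of what looks odd to me compared to the http socket.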