10Gb download, 1.1Gb upload with 10Gb NIC

Internet access discussion, including Fusion, IP Broadband, and Gigabit Fiber!
4 posts Page 1 of 1
by csko » Sat Feb 18, 2023 8:18 pm
I have 10Gb service in Belmont and I am connecting a PC with a 10Gb NIC (Intel E10G41AT2 PCI Express 2.0 x8 AT2) directly to the ONT. Download speeds are great, but upload speeds max out around 1100Mb/s. Oddly, when I connect the ONT directly to a 2.5Gb NIC, I get 2.3Gb/2.3Gb, so the line should be able to handle 2Gb+ upload. I can push 9.4Gb/s of iperf3 traffic in both directions between two 10Gb network cards using the same cable I'm testing with.

Another poster in Belmont seems to have a similar issue.

Any help would be appreciated! Very happy with the service otherwise!

Speedtest using the Ookla CLI:

Code: Select all

   Speedtest by Ookla

     Server: Sonic.net, Inc. - San Jose, CA (id = 17846)
        ISP: Sonic.net, LLC
    Latency:     3.19 ms   (0.21 ms jitter)
   Download:  8129.89 Mbps (data used: 5.8 GB )
     Upload:  1057.82 Mbps (data used: 476.2 MB )
Packet Loss:     0.0%
 Result URL: https://www.speedtest.net/result/c/7e943df6-f46c-49c2-81af-29efa81d99ed
Setup:

Code: Select all

Fresh Ubuntu Linux 22.10
Linux x 5.19.0-31-generic #32-Ubuntu SMP PREEMPT_DYNAMIC Fri Jan 20 15:20:08 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

Empty ufw and iptables.
Hardware:
CPU: Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
Motherboard:  ASUS ATX DDR3 2600 LGA 1150 Motherboard Z97-A/USB 3.1
NIC: Intel E10G41AT2 PCI Express 2.0 x8 AT2 Server Adapter
The NIC is in the PCI_E1 slot.
Cable: Cat6, 30 meters
Things I've tried so far:
  • tried the same ONT with the same cable on a different 2.5Gbit NIC, which yielded 2.3Gbit/2.3Gbit WAN speeds
  • two 10Gb NICs in the same computer (with a NAT trick) showed 9.42Gb/s in iperf3 both ways
  • mtu 9000
  • plugged the cable in reverse
  • disabled ipv6
  • tried different PCI slots
  • upgraded NIC firmware from 2.1.41 to 2.4.45
  • replaced NIC card with another one of the same type
  • onboard NIC disabled
  • ubuntu 22.10 live boot from USB
  • different speedtest servers yield the same
  • tried these various settings:

Code: Select all

ethtool -K eth1 lro off
ethtool -K eth1 tso off
ethtool -G eth0 rx 256 tx 256

sysctl:
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_rmem = 10000000 10000000 10000000
net.ipv4.tcp_wmem = 10000000 10000000 10000000
net.ipv4.tcp_mem = 10000000 10000000 10000000
net.core.rmem_max = 524287
net.core.wmem_max = 524287
net.core.rmem_default = 524287
net.core.wmem_default = 524287
net.core.optmem_max = 524287
net.core.netdev_max_backlog = 300000
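As a sanity check on those buffer sizes, here is a back-of-envelope bandwidth-delay product (BDP) calculation, using the line rate and the ~3.2 ms RTT from the speedtest above (illustrative numbers, not a tuning recommendation):

```python
# Back-of-envelope bandwidth-delay product (BDP) check for the buffer
# sizes above. Assumed numbers: 10 Gb/s line rate and the ~3.2 ms RTT
# reported by the speedtest.
rate_bps = 10_000_000_000   # 10 Gb/s in bits per second
rtt_s = 0.0032              # ~3.2 ms round-trip time

bdp_bytes = int(rate_bps * rtt_s / 8)   # bits in flight -> bytes
print(f"BDP: {bdp_bytes} bytes (~{bdp_bytes / 1_000_000:.1f} MB)")

# The tcp_rmem/tcp_wmem values above (10 MB) comfortably exceed this,
# so the TCP window itself shouldn't be the upload bottleneck.
print("10 MB buffers cover BDP:", 10_000_000 >= bdp_bytes)
```

So with ~4 MB of data in flight at full rate, the 10 MB socket buffers above should be more than enough for a single flow.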
lspci -vvv:

Code: Select all

01:00.0 Ethernet controller: Intel Corporation 82598EB 10-Gigabit AT2 Server Adapter (rev 01)
        Subsystem: Intel Corporation 82598EB 10-Gigabit AT2 Server Adapter
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 16
        Region 0: Memory at dfd80000 (32-bit, non-prefetchable) [size=128K]
        Region 1: Memory at dfd40000 (32-bit, non-prefetchable) [size=256K]
        Region 2: I/O ports at e000 [disabled] [size=32]
        Region 3: Memory at dfda0000 (32-bit, non-prefetchable) [size=16K]
        Expansion ROM at dfd00000 [disabled] [size=256K]
        Capabilities: [40] Power Management version 3
                Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
                Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=1 PME-
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
                Address: 0000000000000000  Data: 0000
        Capabilities: [60] MSI-X: Enable+ Count=18 Masked-
                Vector table: BAR=3 offset=00000000
                PBA: BAR=3 offset=00002000
        Capabilities: [a0] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us
                        ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0W
                DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
                        RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop+
                        MaxPayload 128 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
                LnkCap: Port #0, Speed 2.5GT/s, Width x8, ASPM L0s L1, Exit Latency L0s <4us, L1 <64us
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp-
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 2.5GT/s, Width x8
                        TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR-
                         10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
                         EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
                         FRS- TPHComp- ExtTPHComp-
                         AtomicOpsCap: 32bit- 64bit- 128bitCAS-
                DevCtl2: Completion Timeout: 16ms to 55ms, TimeoutDis- LTR- 10BitTagReq- OBFF Disabled,
                         AtomicOpsCtl: ReqEn-
                LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance Preset/De-emphasis: -6dB de-emphasis, 0dB preshoot
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete- EqualizationPhase1-
                         EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
                         Retimer- 2Retimers- CrosslinkRes: unsupported
        Capabilities: [100 v1] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES- TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
                AERCap: First Error Pointer: 00, ECRCGenCap- ECRCGenEn- ECRCChkCap- ECRCChkEn-
                        MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
                HeaderLog: 00000000 00000000 00000000 00000000
        Capabilities: [140 v1] Device Serial Number 00-1b-21-ff-ff-a3-93-b0
        Kernel driver in use: ixgbe
        Kernel modules: ixgbe
relevant dmesg:

Code: Select all

[    1.025429] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver
[    1.025567] usb 2-1: new high-speed USB device number 2 using ehci-pci
[    1.026681] ixgbe: Copyright (c) 1999-2016 Intel Corporation.
[    1.027334] ixgbe 0000:01:00.0: enabling device (0000 -> 0002)
[    1.530857] ixgbe 0000:01:00.0: Multiqueue Enabled: Rx Queue count = 8, Tx Queue count = 8 XDP Queue count = 0
[    1.531197] ixgbe 0000:01:00.0: 16.000 Gb/s available PCIe bandwidth (2.5 GT/s PCIe x8 link)
[    1.531277] ixgbe 0000:01:00.0: MAC: 1, PHY: 2, PBA No: E73052-004
[    1.531285] ixgbe 0000:01:00.0: 00:1b:21:a3:93:b0
[    1.532028] ixgbe 0000:01:00.0: Intel(R) 10 Gigabit Network Connection
[    9.557805] ixgbe 0000:01:00.0 eth0: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[    9.557894] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready

ifconfig:

Code: Select all

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 23.93.X.X  netmask 255.255.252.0  broadcast 23.93.X.X
        inet6 fe80::21b:21ff:fea3:93b0  prefixlen 64  scopeid 0x20<link>
        ether 00:1b:21:a3:93:b0  txqueuelen 1000  (Ethernet)
        RX packets 87118291  bytes 129694469610 (129.6 GB)
        RX errors 0  dropped 1595  overruns 0  frame 0
        TX packets 44474250  bytes 62389586633 (62.3 GB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
ethtool:

Code: Select all

Settings for eth1:
        Supported ports: [ TP ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: Symmetric
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  1000baseT/Full
                                10000baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Speed: 10000Mb/s
        Duplex: Full
        Auto-negotiation: on
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        MDI-X: Unknown
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000040 (64)
                               rx_err
        Link detected: yes
lshw -C network:

Code: Select all

  *-network
       description: Ethernet interface
       product: 82598EB 10-Gigabit AT2 Server Adapter
       vendor: Intel Corporation
       physical id: 0
       bus info: pci@0000:01:00.0
       logical name: eth1
       version: 01
       serial: 00:1b:21:a3:93:b0
       size: 10Gbit/s
       capacity: 10Gbit/s
       width: 32 bits
       clock: 33MHz
       capabilities: pm msi msix pciexpress bus_master cap_list rom ethernet physical tp 1000bt-fd 10000bt-fd autonegotiation
       configuration: autonegotiation=on broadcast=yes driver=ixgbe driverversion=5.19.0-31-generic duplex=full firmware=0x000124c1 ip=23.93.X.X latency=0 link=yes multicast=yes port=twisted pair speed=10Gbit/s
       resources: irq:16 memory:dfe80000-dfe9ffff memory:dfe40000-dfe7ffff ioport:e000(size=32) memory:dfea0000-dfea3fff memory:dfe00000-dfe3ffff

intel firmware tool:

Code: Select all

./bootutil64e
Error: Connection to QV driver failed - please reinstall it!

Intel(R) Ethernet Flash Firmware Utility
BootUtil version 1.39.20.0
Copyright (C) 2003-2022 Intel Corporation

Type BootUtil -? for help

Port Network Address Location Series  WOL Flash Firmware                Version
==== =============== ======== ======= === ============================= =======
  1   001B21A393B0     1:00.0 10GbE   N/A PXE                           2.4.45
by msiegen » Sun Feb 19, 2023 1:51 pm
Hey csko, that's an impressive list of things tried so far!

One other area to explore would be to try rate limiting the egress traffic using tc commands on your Linux router or PC. I don't have any firsthand experience but have seen reports of this fixing the upload speed on other ISPs, for example this one. Start around 2000 Mbps since you know that worked on the slower NIC, and then ratchet the value up until performance starts to degrade.
by csko » Wed Feb 22, 2023 7:39 pm
msiegen wrote:Hey csko, that's an impressive list of things tried so far!

One other area to explore would be to try rate limiting the egress traffic using tc commands on your Linux router or PC. I don't have any firsthand experience but have seen reports of this fixing the upload speed on other ISPs, for example this one. Start around 2000 Mbps since you know that worked on the slower NIC, and then ratchet the value up until performance starts to degrade.
That did it, thanks a lot! With some trial and error, I was able to max out around 6Gbps with the limit set to 6500Mbps:

Code: Select all

tc qdisc replace dev eth1 root handle 1: htb default 10 r2q 2500
tc class replace dev eth1 parent 1: classid 1:10 htb rate 6500mbit ceil 6500mbit
tc qdisc add dev eth1 parent 1:10 handle 10: fq_codel ecn limit 1024
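For anyone wondering about the `r2q 2500` in those commands: as I read tc-htb(8), HTB derives each class's quantum as the rate in bytes per second divided by r2q, and very large quanta trigger kernel warnings. A quick sketch of the arithmetic (my reading of the man page, not authoritative):

```python
# Sketch of how HTB derives a class quantum from rate and r2q
# (based on tc-htb(8); values mirror the tc commands above).
rate_bits = 6500 * 10**6          # "rate 6500mbit" in bits per second
rate_Bps = rate_bits // 8         # HTB works in bytes per second

quantum_default = rate_Bps // 10     # default r2q = 10: huge quantum
quantum_tuned = rate_Bps // 2500     # r2q 2500, as in the command above

print("quantum with default r2q:", quantum_default)
print("quantum with r2q 2500:  ", quantum_tuned)
```

With the default r2q of 10, a 6500mbit class would get an enormous quantum; raising r2q to 2500 brings it much closer to the range HTB expects.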
Speedtest:

Code: Select all

   Speedtest by Ookla

     Server: Sonic.net, Inc. - San Jose, CA (id = 17846)
        ISP: Sonic.net, LLC
    Latency:     3.41 ms   (0.06 ms jitter)
   Download:  8115.59 Mbps (data used: 4.1 GB )
     Upload:  6027.66 Mbps (data used: 3.1 GB )
Packet Loss:     0.0%
 Result URL: https://www.speedtest.net/result/c/150c35d3-d905-456f-b656-6cdcdaae95da
Indeed, I saw TCP retransmissions during the upload phase without the rate limiting. What's the root cause of these?

I can also get similar speeds through an Asus AX89X router -- in case anyone else is interested in getting that device (though its internal speedtest only shows 4Gbps/4Gbps).
by msiegen » Fri Feb 24, 2023 5:25 pm
Cool that that worked!
csko wrote:Indeed, I found TCP retransmissions during the upload phase without the rate limiting. What's the root cause for these?
Retransmissions happen when packets are dropped, for example because they're arriving at a device at a higher rate than they can be transmitted onwards. This is completely normal, and the TCP sender will lower its transmission rate to try to fit within the available bandwidth. It can't "see" how much bandwidth is available at intermediate hops though, so it will then try to slowly increase its rate until it encounters drops/retransmissions again.

Usually that iterative process results in good utilization of the available bandwidth. I'm not sure why it didn't in your case, but I guess TCP is sensitive to the timing and other characteristics of the packet loss, and reacts better to the limit imposed by Linux Traffic Control than whatever the ONT/NIC/etc were otherwise doing.
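To get a feel for how little loss it takes: the classic Mathis et al. approximation relates a single Reno-style TCP flow's steady-state throughput to the loss rate p as throughput ≈ (MSS/RTT) · sqrt(3/2) / sqrt(p). A hedged back-of-envelope, where the MSS and RTT are assumptions based on the speedtests above:

```python
# Mathis et al. approximation for steady-state TCP Reno throughput:
#   throughput ≈ (MSS / RTT) * sqrt(3/2) / sqrt(p)
# solved here for p. MSS and RTT are assumptions based on the
# speedtest results above, so treat the result as an estimate.
mss = 1448          # bytes; typical for a 1500-byte MTU with TCP options
rtt = 0.0032        # ~3.2 ms round-trip time

def loss_for_throughput(bps):
    """Loss rate p that would cap one Reno flow at the given bits/s."""
    bytes_per_s = bps / 8
    return 1.5 * (mss / (rtt * bytes_per_s)) ** 2

# Loss rate consistent with the observed ~1.06 Gb/s upload cap:
p = loss_for_throughput(1.06e9)
print(f"~{p:.1e} loss would cap a single flow near 1 Gb/s")
```

In other words, on the order of 0.002% packet loss is enough to hold one flow at ~1 Gb/s at this RTT, which is why a small amount of drop at the ONT or NIC can have such an outsized effect, and why shaping below the drop point helps.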