So I am trying to track down what is possibly slowing down my download connection from my Debian server to my devices (streaming box, laptop, other servers, etc).
First let me go over my network infrastructure: OPNsense Firewall (Intel C3558R) <-10gb SFP+ DAC-> Managed Switch <-2.5gb RJ45-> Clients, 2.5gb AX Access Point, and Debian Server (Intel N100).
Under a 5 minute stress test between my laptop (2.5gb adapter plugged into the switch) and the Debian server (2.5gb Intel I226-V NIC), I get the full bandwidth when uploading; when downloading, however, it tops out around 300-400mbps. The download speed does not fare any better when connecting through the AX access point, with upload dropping to around 500mbps there. File transfers between the server and my laptop are also approximately 300mbps. And yes, I manually disabled the wifi card when testing over ethernet. Speed tests to outside servers show approximately 800/20mbps (on an 800mbps plan).
Fearing that the traffic may be running through OPNsense and that my firewall was struggling to handle the traffic, I disconnected the DAC cable and reran the test just through the switch. No change in results.
Identified speeds per device:
Server: 2500 Mb/s
Laptop: 2500Base-T
Switch: 2,500Mbps
Firewall: 10Gbase-Twinax
Operating Systems per device:
Server: Debian Bookworm
Laptop: macOS Sonoma (works well for my use case)
Switch: some sort of embedded software
Firewall: OPNsense 24.1.4-amd64
Network Interface per device:
Server: Intel I226-V
Laptop: UGreen Type C to 2.5gb Adapter
Switch: RTL8224-CG
Firewall: Intel X553
The speed test is hosted through Docker on my server.
Did you use `iperf`? It makes sure that the HDD/SSD is not the bottleneck. You can also check the interface statistics and watch for uncommon errors, or trace the connection with `tcpdump`.
Using iperf3 results in 2.5gb of bandwidth. The SSD should not be a bottleneck: the server only has NVMe storage and the laptop SSD is located in the SoC, both far exceeding the network speeds. Traceroute indicated just a single hop to the server.
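For the "check the statistics" suggestion, the interface error and drop counters are a quick first look. A hedged sketch (the interface name `eth0` is a placeholder — check `ip link` for yours — and ethtool counter names vary by driver, so the grep pattern is just a heuristic):

```shell
# Kernel-level packet/error/drop counters for the interface (iproute2)
ip -s link show eth0

# Driver-level counters on the NIC; look for anything error-like
ethtool -S eth0 | grep -i -E 'err|drop|miss|crc'

# Negotiated link speed and duplex -- a 2.5GbE port that renegotiated down
# to 100Mb/s or half duplex would explain numbers like these
ethtool eth0 | grep -E 'Speed|Duplex'
```

Rising CRC or drop counters during a transfer usually point at a cable or port rather than software.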
NVMe drives aren’t guaranteed to be fast. Based on those stats I’m guessing you have QLC and no DRAM.
I think you might be right. I couldn’t find an identifiable label on the drive, and the model reported in Debian shows up in searches as having only 2465MB/s read speeds. After real-world losses, and with the drive also handling an OS plus multiple services, I imagine that could be the source of my problems. Thanks!
You can do a disk benchmark on the server to be sure
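A minimal sketch with plain `dd` (file name and size are arbitrary; `conv=fdatasync` makes the write test include the flush to disk, and the read test may be served from page cache unless the file is larger than RAM or you drop the cache first):

```shell
# Sequential write test: 256 MiB, flushed to disk before dd reports a speed
dd if=/dev/zero of=ddtest.bin bs=1M count=256 conv=fdatasync

# Sequential read test (beware page-cache effects noted above)
dd if=ddtest.bin of=/dev/null bs=1M

# Clean up the test file
rm ddtest.bin
```

If `fio` is installed it gives far more realistic numbers (random I/O, queue depths), but dd is enough to confirm whether the drive can sustain ~300MB/s sequential, which is all a 2.5gb link needs.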
What’s the make and model of your server?
Just an N100 based (quad core 3.4ghz) mini pc with 8gb of RAM and 2.5gb ethernet.
Do you have a firewall? Packet inspection in particular can wreak havoc on speeds.
Op said they tried without the firewall connected and had the same results
Ah, right, read too fast it seems! Though that still leaves the possibility of software firewalls, but any OOTB ones wouldn’t be doing any packet inspection.
Try iperf from your server to your opnsense firewall, to both your laptop and server
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
IP: Internet Protocol
NVMe: Non-Volatile Memory Express, interface for mass storage
SCP: Secure Copy, encrypted file transfer tool; authenticates and transfers over SSH
SSD: Solid State Drive, mass storage
SSH: Secure Shell, for remote terminal access
TCP: Transmission Control Protocol, most often over IP
[Thread #644 for this sub, first seen 31st Mar 2024, 06:35]
https://www.baeldung.com/linux/network-speed-testing try some of the options offered here.
You can also try rsync/rclone too and see how they perform.
rsync and rclone both rely on disk performance. iperf3 is the best way to test network performance.
Note that the Windows version of iperf is unofficial and very old now, so you really want to use two Linux systems if you’re testing with iperf.
iperf3 in WSL is probably OK.
This is a good point. I know the WSL team were doing some optimizations to improve the performance of iperf3 in WSL, but I haven’t tested it.
Have you tried changing out ethernet cables and trying different ports?
Also, try hosting the speed test from your laptop and running the speed test from the server to see if the results are reversed.
Just attempted that; oddly, both directions evened out on the reverse test at ~800Mbps, so higher than the download test before and lower than the upload. Conducted iperf3 tests, which showed the full 2.5gb bandwidth, so I retried file sharing. Samba refused to work on Debian for whatever reason, so I did an SCP transfer instead; after a few runs with a 6.3GB video file, I averaged around 500Mbps (highs of around 800Mbps and lows of around 270Mbps).
SCP encrypts your traffic before sending it, so it might be a CPU/RAM bottleneck. You can try a different cipher or different compression level, which are defined in your `~/.ssh/config` file.
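For reference, a sketch of what that could look like (the host alias and address are just placeholders, and the cipher choice is an assumption: `aes128-gcm@openssh.com` is typically among the fastest where the CPU has AES-NI, which the N100 does):

```
# ~/.ssh/config
Host homeserver
    HostName 192.168.1.10
    # AES-GCM is usually fastest on CPUs with AES-NI
    Ciphers aes128-gcm@openssh.com
    # Compression costs CPU and generally slows transfers on a fast LAN
    Compression no
```

You can also test a cipher one-off without editing the config, e.g. `scp -c aes128-gcm@openssh.com file homeserver:`.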
Since the server is on an N100 that could very well explain it.
I’ll check my server’s CPU usage while transferring. I only used SCP for testing yesterday because the Samba share stopped working.
iperf3 showed 2.5 in both directions?
-R reverses direction
Also note it can be set up as a daemon - I like to have at least one available on every network I have to deal with.
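A sketch of both points — daemon mode plus `-R` — assuming a stock iperf3 and a placeholder server address (these commands run on two different hosts, so they're illustrative, not a copy-paste script):

```shell
# On the Debian server: run iperf3 as a daemon on the default port (5201)
iperf3 -s -D

# On the laptop: the default test measures laptop -> server (upload)
iperf3 -c 192.168.1.10 -t 30

# -R reverses direction: server -> laptop (the slow path in this thread)
iperf3 -c 192.168.1.10 -t 30 -R

# -P 4 uses parallel streams, handy for ruling out single-stream TCP limits
iperf3 -c 192.168.1.10 -t 30 -R -P 4
```

If `-R` with parallel streams hits full speed but single-stream doesn't, that points at TCP tuning (window/buffer sizes) rather than hardware.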
Damn, I wish my server was that “slow.”
I mean, compared to what it should be, it is. Especially when I paid for 2.5gb infrastructure.
And it also affects how fast I can pull files from my server. Trying to get some shows downloaded to my laptop before a business trip? Guess I’d better prepare for an hour or two of copying over LAN. Pulling a backup OS image for my devices? Going to be waiting a while.
I think speed test data is not read from or written to the disk; it’s generated in memory or just thrown away.
Try switching to bbr for congestion control, and adjust the buffer sizes. The defaults are good for Gigabit but not really for higher speeds. Not near my computer right now so I can’t grab a copy of my sysctl settings, but searching Google for “Linux TCP buffer size tuning” and “Linux enable bbr” should find some useful info.
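A hedged starting point, as an `/etc/sysctl.d/` fragment (the buffer values are illustrative numbers commonly suggested for multi-gigabit LANs, not tuned for this specific network; apply with `sysctl --system`):

```
# /etc/sysctl.d/99-network-tuning.conf

# BBR wants the fq qdisc
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr

# Raise socket buffer ceilings (tcp_* values are min / default / max)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 131072 16777216
net.ipv4.tcp_wmem = 4096 131072 16777216
```

Check it took effect with `sysctl net.ipv4.tcp_congestion_control`; the `tcp_bbr` module is built into Debian Bookworm's stock kernel.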
If the devices are different speeds (eg one system is 2.5Gbps but another is 1Gbps), try enabling flow control on the switch, if it’s a managed switch.
Try executing `ping -c 1000 1.1.1.1` and check for any packet loss and jitter.
Additionally I would also recommend trying a different test server and comparing the results.
Keep in mind that your ISP might also have issues with the connectivity which can be fixed in the following days.
I’ve done pings without any drops. The ISP doesn’t come into play since this is LAN-only traffic; the laptop and server are on the same switch.
Sorry, in that case I would recommend running iperf and seeing what the throughput is. Make sure the traffic is whitelisted in any software firewall as well.
Who is your ISP? I had some issues with my FIOS ONT. Had to disable IPv6 on my router for it to stop dropping packets.
ISP wouldn’t matter regarding handling of LAN only traffic right?
Ah, you’re right. I didn’t read closely enough. Sorry!
No problem 😁