You've just provisioned a 10Gbps dedicated server, but how do you know you're actually getting those speeds? Are your file transfers hitting the theoretical maximum? Is there network congestion? Which tool should you use?
This comprehensive guide covers command-line network speed testing from basics to advanced diagnostics. You'll learn how to use speedtest-cli, iperf3, and iptraf, understand the crucial difference between bits and bytes, calculate expected throughput for different connection speeds, and diagnose performance bottlenecks.
Before running any tests, you must understand the difference between bits per second (b/s) and bytes per second (B/s). This single point of confusion is behind the vast majority of "my speed is wrong" support tickets.
The math:
- 1 byte = 8 bits
- 1 Gbps = 1,000 Mbit/s = 125 MB/s theoretical maximum
- 1 MB/s = 8 Mbit/s
Why this matters: Network speeds are advertised in bits (Gbps), but file transfer tools report in bytes (MB/s). If you have a 1Gbps connection and see 112 MB/s download speeds, you're getting ~90% of the theoretical maximum — that's actually excellent.
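The arithmetic above is easy to script for any link speed. A quick awk sketch (1 Gbps shown; the ~10% overhead factor is the rough figure discussed in this section, not an exact constant):

```shell
# Convert an advertised link speed in Mbit/s into MB/s: divide by 8 for the
# theoretical ceiling, then apply ~10% TCP/IP overhead for a realistic figure.
mbps=1000
awk -v mbps="$mbps" 'BEGIN {
  printf "Theoretical: %.1f MB/s\n", mbps / 8
  printf "Realistic (~90%%): %.1f MB/s\n", mbps / 8 * 0.9
}'
```

For 1 Gbps this prints a 125 MB/s ceiling and ~112.5 MB/s realistic throughput, matching the figure above.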
Real-world throughput expectations:
- 1 Gbps: ~850-950 Mbit/s (~106-118 MB/s)
- 10 Gbps: ~8.5-9.7 Gbit/s (~1.06-1.21 GB/s)
- 40 Gbps: ~34-38 Gbit/s (~4.2-4.7 GB/s)
TCP/IP overhead (headers, acknowledgments, flow control) typically consumes 5-15% of theoretical bandwidth. This is normal and unavoidable.
Best for: Testing public internet connectivity to your server from standard speed test servers worldwide.
Installation on Ubuntu/Debian:
sudo apt update
sudo apt install -y speedtest-cli
Installation on RHEL/AlmaLinux:
sudo dnf install -y speedtest-cli
Basic usage:
speedtest-cli
This runs a test to the closest Speedtest.net server and reports download, upload, and ping times.
List available servers:
speedtest-cli --list | grep -i switzerland
Example output:
11884) Solnet AG (Zurich, Switzerland) [2.41 km]
24215) Init7 (Winterthur, Switzerland) [18.22 km]
48668) iWay AG (Zurich, Switzerland) [2.41 km]
Test to a specific server (useful for consistency):
speedtest-cli --server 11884
Get simple output (no fancy formatting, easier for scripts):
speedtest-cli --simple
Example output:
Ping: 12.456 ms
Download: 942.15 Mbit/s
Upload: 938.44 Mbit/s
JSON output for parsing:
speedtest-cli --json | jq .
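If jq is not installed, a tr + awk fallback can pull a single field out of the JSON. The `download` field is reported in bits per second; the canned sample below stands in for a live run, so the pipeline works offline (replace the echo with `speedtest-cli --json` for real use):

```shell
# No-jq fallback: extract the download field (bits per second) and convert
# to Mbit/s. Canned sample shown; pipe in `speedtest-cli --json` for real use.
echo '{"download": 942150000.0, "upload": 938440000.0, "ping": 12.456}' \
  | tr ',' '\n' \
  | awk -F': ' '/"download"/ {printf "Download: %.2f Mbit/s\n", $2 / 1000000}'
```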
Limitations of speedtest-cli:
- Uses a single TCP stream, so it often tops out well below 10 Gbps regardless of your link
- Results depend on the capacity and current load of the public test server
- Measures the public internet path only, not raw server-to-server throughput
Best for: Measuring maximum throughput between two servers, diagnosing network issues, testing 10Gbps+ links.
iperf3 runs in client-server mode: one machine listens as the server, and the client connects and floods it with data to measure maximum throughput.
Installation:
sudo apt install -y iperf3 # Ubuntu/Debian
sudo dnf install -y iperf3 # RHEL/AlmaLinux
Server mode (on the target you want to test TO):
iperf3 -s
This starts iperf3 in server mode, listening on port 5201.
Client mode (from your server, testing TO the iperf3 server):
iperf3 -c iperf.example.com
Test with multiple parallel streams (for 10Gbps+ connections):
iperf3 -c iperf.example.com -P 10
10 parallel TCP streams will saturate high-bandwidth links better than a single connection.
Reverse test (the server sends instead of the client, so you measure download instead of upload):
iperf3 -c iperf.example.com -R
Test for a specific duration (the default is 10 seconds):
iperf3 -c iperf.example.com -t 30
UDP test (useful for diagnosing packet loss):
iperf3 -c iperf.example.com -u -b 1G
This sends UDP traffic at 1 Gbps and reports packet loss percentage.
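With the -J flag, iperf3 emits machine-readable JSON; for UDP runs the loss figure appears in a lost_percent field (field name per iperf3's JSON output). A grep one-liner can pull it out; the canned fragment below stands in for a live run so the pipeline works offline:

```shell
# Extract lost_percent from iperf3 JSON output. Live usage (needs network):
#   iperf3 -c iperf.example.com -u -b 1G -J | grep -o '"lost_percent":[[:space:]]*[0-9.]*'
# Canned fragment used here for illustration.
echo '{"end":{"sum":{"lost_packets":35,"lost_percent":0.042}}}' \
  | grep -o '"lost_percent":[[:space:]]*[0-9.]*'
```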
Several organizations run public iperf3 servers for testing. Here are reliable options:
Bouygues Telecom (France, 10Gbps):
iperf3 -c iperf.par2.as42831.net -P 10
Clouvider (Global locations, 40Gbps):
iperf3 -c lon.speedtest.clouvider.net -P 10 # London
iperf3 -c nyc.speedtest.clouvider.net -P 10 # New York
iperf3 -c la.speedtest.clouvider.net -P 10 # Los Angeles
Worldstream (Netherlands, 10Gbps):
iperf3 -c speedtest.ams1.worldstream.com -P 10
Online.net (France, 10Gbps):
iperf3 -c ping.online.net -P 10
Note: Public iperf3 servers can be congested during peak hours. For accurate results, test at different times or run your own iperf3 server on a second machine.
Example output from a 10Gbps server:
[SUM] 0.00-10.00 sec 11.2 GBytes 9.58 Gbits/sec 0 sender
[SUM] 0.00-10.00 sec 11.2 GBytes 9.57 Gbits/sec receiver
Key metrics:
- Bitrate (Gbits/sec): the measured throughput
- Retr: TCP retransmissions; 0 is ideal, sustained non-zero values indicate packet loss
- sender vs receiver: data sent by the client vs received by the server; the two figures should closely match
For a 1Gbps connection, expect 900-950 Mbits/sec. For 10Gbps, expect 9.2-9.7 Gbits/sec with parallel streams.
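Turning a measured figure into an efficiency percentage is a one-liner (values here taken from the sample output above):

```shell
# Efficiency = measured throughput / link speed.
# 9.58 Gbit/s measured on a 10 Gbit/s link:
awk -v measured=9.58 -v link=10 'BEGIN {
  printf "Efficiency: %.1f%%\n", measured / link * 100
}'
```

Anything above ~90% on a loaded TCP test is a healthy result given protocol overhead.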
Best for: Monitoring live traffic, diagnosing which interfaces are congested, watching bandwidth usage in real-time.
Installation:
sudo apt install -y iptraf-ng # Ubuntu/Debian
sudo dnf install -y iptraf-ng # RHEL/AlmaLinux
Launch interactive dashboard:
sudo iptraf-ng
This opens a text-based UI. Navigate with arrow keys:
- IP traffic monitor: live per-connection traffic
- General interface statistics: packet and byte counts per interface
- Detailed interface statistics: rates, packet sizes, and errors
- Statistical breakdowns: traffic grouped by packet size or port
- LAN station monitor: traffic per MAC address
Monitor a specific interface (e.g., eth0):
sudo iptraf-ng -d eth0
Press q to quit.
Use case: Run iptraf-ng in one terminal while running speedtest or iperf3 in another. You'll see real-time bandwidth usage and can identify if you're hitting network interface limits.
Network speed tests consume CPU, memory, and disk I/O (if logging). Here's what to expect:
CPU usage:
- speedtest-cli: light, typically a few percent of one core
- iperf3: moderate; a single process can become CPU-bound near 10 Gbps, especially on older CPUs
- iptraf-ng: negligible except on very busy interfaces
Monitor CPU usage during tests:
htop
Or for a single snapshot:
mpstat 1 10
This shows CPU usage every 1 second for 10 intervals.
Memory usage: All these tools use minimal RAM (<100MB). Not a concern even on 4GB servers.
Disk I/O: None, unless you're redirecting output to files. Network tests happen entirely in RAM.
Symptom: Speed tests show lower speeds than expected, or speeds vary wildly between tests.
Possible causes:
1. Interface saturation
Check current interface utilization:
sudo iftop -i eth0
This shows real-time bandwidth usage per connection. If you see sustained traffic at or near your link speed (1Gbps, 10Gbps), the interface is saturated.
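If iftop is not installed, a rough throughput figure can be derived on Linux by sampling the interface byte counters in /proc/net/dev one second apart. The sketch below uses `lo` so it runs anywhere; substitute your real interface (e.g., eth0):

```shell
# Rough per-second RX throughput without iftop: read the interface's RX byte
# counter (column 2 of /proc/net/dev) twice, one second apart.
iface=lo   # substitute your real interface, e.g. eth0
rx1=$(awk -v i="$iface:" '$1 == i {print $2}' /proc/net/dev)
sleep 1
rx2=$(awk -v i="$iface:" '$1 == i {print $2}' /proc/net/dev)
echo "RX: $(( (rx2 - rx1) * 8 / 1000000 )) Mbit/s"
```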
2. Packet loss or retransmits
Run iperf3 and check the "Retr" column:
iperf3 -c iperf.example.com -P 4
If you see non-zero retransmits, there's packet loss somewhere in the path (could be your server, the remote server, or ISP routing).
3. CPU bottleneck (encryption overhead)
If testing over SSH or VPN, encryption can bottleneck at ~2-4Gbps on older CPUs. Test without encryption:
iperf3 -c iperf.example.com -P 10 # No encryption
Compare to SSH-tunneled traffic. If SSH is significantly slower, your CPU can't keep up with encryption at line speed.
4. Routing or peering issues
Test to multiple geographic locations:
iperf3 -c lon.speedtest.clouvider.net # London
iperf3 -c nyc.speedtest.clouvider.net # New York
iperf3 -c sgp.speedtest.clouvider.net # Singapore
If speeds are fast to London but slow to New York, the issue is likely ISP peering or routing, not your server.
Here's what you should expect from various connection types under ideal conditions (low latency, no congestion, modern hardware):
1 Gbps connection: ~900-950 Mbit/s with iperf3, ~110-118 MB/s sustained file transfers
10 Gbps connection: ~9.2-9.7 Gbit/s with parallel iperf3 streams, ~1.1-1.2 GB/s sustained file transfers
40 Gbps unmetered connection: ~34-38 Gbit/s with many parallel streams, ~4.2-4.7 GB/s sustained file transfers
When diagnosing network performance, follow this workflow:
1. Run baseline speedtest-cli:
speedtest-cli --simple
This gives a quick sanity check. If speeds are way off, don't bother with deeper tests yet.
2. Run iperf3 to multiple test servers:
iperf3 -c iperf.par2.as42831.net -P 10
iperf3 -c lon.speedtest.clouvider.net -P 10
iperf3 -c nyc.speedtest.clouvider.net -P 10
Compare results. Consistent speeds = good. Wildly varying speeds = routing or peering issue.
3. Monitor live traffic with iptraf-ng:
sudo iptraf-ng -d eth0
Run this while doing large file transfers to see real-time bandwidth.
4. Check for packet loss:
iperf3 -c iperf.example.com -u -b 1G
If packet loss > 0.1%, investigate network quality (could be switch, cabling, or ISP).
5. Verify interface link speed:
ethtool eth0 | grep Speed
Expected output for a 10Gbps server:
Speed: 10000Mb/s
If it shows 1000Mb/s, your NIC negotiated at 1Gbps instead of 10Gbps — check cabling or switch port config.
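For scripting this check, the Speed line can be extracted with a small helper (parse_speed is our name, not a standard tool). It is demonstrated on canned output here, since ethtool needs a real NIC:

```shell
# parse_speed pulls the negotiated rate out of ethtool's "Speed: 10000Mb/s"
# line. Live usage: ethtool eth0 | parse_speed
parse_speed() { awk -F': ' '/Speed:/ {print $2}'; }
printf '\tSpeed: 10000Mb/s\n' | parse_speed
```

Comparing the result against the expected rate in a cron job gives an early warning when a link renegotiates down after a cable or switch problem.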
For 10Gbps+ servers, default TCP buffer sizes can bottleneck throughput. Here's how to optimize:
Check current TCP buffer sizes:
sysctl net.ipv4.tcp_rmem
sysctl net.ipv4.tcp_wmem
Increase buffers for high-bandwidth, high-latency links:
sudo sysctl -w net.core.rmem_max=134217728
sudo sysctl -w net.core.wmem_max=134217728
sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"
This increases max receive/send buffers to 128MB, allowing TCP windows to grow for long-distance, high-speed transfers.
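The right buffer size follows from the bandwidth-delay product (BDP): the amount of data in flight on a fully utilized link, bandwidth times round-trip time. A quick calculation for 10 Gbit/s at 100 ms RTT:

```shell
# BDP (bytes) = bandwidth (bit/s) * RTT (s) / 8; report in MiB.
awk 'BEGIN { printf "BDP: %.0f MB\n", 10e9 * 0.100 / 8 / 1048576 }'
```

That works out to roughly 119 MB of in-flight data, which is why the 128 MB maximums above are a sensible ceiling for intercontinental 10 Gbit/s transfers.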
Make changes persistent:
sudo nano /etc/sysctl.conf
Add these lines:
net.core.rmem_max=134217728
net.core.wmem_max=134217728
net.ipv4.tcp_rmem=4096 87380 67108864
net.ipv4.tcp_wmem=4096 65536 67108864
Reload:
sudo sysctl -p
SwissLayer's dedicated servers in Switzerland offer 1Gbps, 10Gbps, and 40Gbps unmetered connectivity. When testing your server, pick a nearby Speedtest.net server for speedtest-cli and use iperf3 with -P 10 for parallel streams on 10Gbps+ links.
For private network testing between SwissLayer servers in the same data center, expect near-theoretical speeds (9.8+ Gbps on 10G links with <0.2ms latency).
# speedtest-cli
speedtest-cli --simple
speedtest-cli --list | grep -i zurich
speedtest-cli --server 11884
# iperf3
iperf3 -s # Server mode
iperf3 -c iperf.example.com -P 10 # Client with 10 streams
iperf3 -c iperf.example.com -R # Reverse (test upload)
iperf3 -c iperf.example.com -u -b 1G # UDP test for packet loss
# iptraf-ng
sudo iptraf-ng # Interactive dashboard
sudo iptraf-ng -d eth0 # Monitor specific interface
# Interface check
ethtool eth0 | grep Speed # Verify link speed
ip -s link show eth0 # Interface statistics
# Live monitoring
sudo iftop -i eth0 # Real-time bandwidth by connection
htop # CPU usage
Network speed testing isn't just about running speedtest-cli and calling it done. Understanding the difference between bits and bytes, knowing when to use iperf3 vs speedtest-cli, monitoring live traffic with iptraf-ng, and interpreting results in the context of TCP overhead and real-world limitations — these skills separate informed server administrators from those left wondering why their "10Gbps server" only transfers at 1.2 GB/s.
The answer? It's not 10 Gigabytes per second. It's 10 Gigabits per second. And 1.2 GB/s is actually 9.6 Gbps — which is 96% efficiency and exactly what you should expect.
Ready to test your server's network performance? Explore SwissLayer dedicated servers with 1Gbps, 10Gbps, and 40Gbps unmetered connectivity in Swiss data centers.