Network latency can make or break your user experience. When data packets take too long to travel between points, you'll see dropped video calls, sluggish applications, and frustrated users. But here's the thing: you can't fix what you don't measure.
This guide walks you through everything you need to measure network latency accurately, from basic ping tests to continuous monitoring solutions. You'll learn which tools work best for different scenarios, how to interpret your results, and what to do when latency spikes.
By the end of this guide, you'll know how to pick the right tool for each scenario, interpret the numbers you get back, and act when latency spikes.
Who this guide is for: Network engineers, systems administrators, and IT professionals responsible for network performance and troubleshooting.
Time investment: 15-20 minutes to read, 30-60 minutes to implement basic monitoring.
Network latency measures the amount of time it takes for a data packet to travel from source to destination. It's typically expressed in milliseconds (ms) and directly impacts everything from video conferencing quality to application responsiveness.
There are two primary ways to measure latency:
Round-trip time (RTT): The time it takes for a packet to travel from point A to point B and back again. This is what most ping tests measure.
Time to first byte (TTFB): The time between sending a request and receiving the first byte of data back. This matters more for web applications and API calls.
High latency doesn't just slow things down. It often goes hand in hand with packet loss and jitter (variation in latency), and together they degrade the user experience. For real-time applications like VoIP or video conferencing, latency above 150ms becomes noticeable. Above 300ms, it's disruptive.
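To make the TTFB definition concrete, here's a minimal Python sketch that times a raw HTTP request. The host, port, and path are placeholders for whatever service you want to test, and this sketch assumes plain HTTP (HTTPS would need an ssl-wrapped socket):

```python
import socket
import time

def ttfb(host: str, port: int, path: str = "/") -> float:
    """Seconds from sending an HTTP request until the first response byte."""
    with socket.create_connection((host, port), timeout=5) as sock:
        request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        start = time.perf_counter()
        sock.sendall(request.encode())
        sock.recv(1)  # blocks until the first byte of the response arrives
        return time.perf_counter() - start
```

For example, `print(f"{ttfb('example.com', 80) * 1000:.1f} ms")` prints the TTFB in milliseconds for a host you can reach.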
You have several options for measuring network latency, each with different strengths:
Ping: The simplest tool, available on every operating system. Sends ICMP echo requests and measures response time. Great for quick checks but limited in depth.
Traceroute (tracert on Windows): Shows latency at each hop along the network path. Essential for identifying where delays occur.
MTR (My Traceroute, originally Matt's Traceroute): Combines ping and traceroute, providing continuous measurements over time. Better for spotting intermittent issues.
Netperf and iperf: Measure throughput and latency under load. Useful for testing network capacity and performance under stress.
Wireshark: Packet capture tool that analyzes TCP handshakes and timestamps. Gives you the most detailed view but requires more expertise.
Network monitoring platforms: Tools like PRTG Network Monitor provide continuous, automated latency monitoring across your entire infrastructure.
The best approach? Start with basic command-line tools for troubleshooting, then implement continuous monitoring for proactive management.
Ping is your first line of defense for latency testing. It's built into Windows, Linux, and macOS, making it universally accessible.
On Windows:
ping google.com
On Linux or macOS:
ping -c 10 google.com
The -c 10 flag sends exactly 10 packets, then stops; without it, ping on Linux and macOS runs indefinitely until you press Ctrl+C. Windows ping stops after four packets by default; use -n 10 there to send ten.
A typical ping response looks like this:
Reply from 142.250.185.46: bytes=32 time=14ms TTL=117
Here's what matters: time=14ms is the round-trip time for that packet, and TTL is the hop count remaining when the reply arrived (a sudden change in TTL suggests the route has changed). The bytes value is just the payload size.
Don't test to just one destination. A single ping to google.com tells you about your internet connection, not your internal network. Test multiple endpoints: your router, internal servers, your ISP's gateway, and external sites.
Also, don't rely on a single ping. Run at least 10-20 packets to get an average. Latency varies, and one measurement doesn't show the full picture.
Before testing external sites, ping your router's IP address (usually 192.168.1.1 or 10.0.0.1). If you see high latency here, the problem is on your local network, not your ISP or the internet.
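When you script repeated tests, you don't have to read the summary by eye. Here's a small Python sketch, assuming the Linux/macOS summary format shown in the comments, that pulls the min/avg/max figures out of ping's output for logging or alerting:

```python
import re

def parse_rtt_summary(ping_output: str) -> dict:
    """Extract min/avg/max/mdev (ms) from a Linux or macOS ping summary line."""
    # Linux prints:  rtt min/avg/max/mdev = 13.810/14.209/15.102/0.412 ms
    # macOS prints:  round-trip min/avg/max/stddev = ... (same numeric layout)
    match = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", ping_output)
    if match is None:
        raise ValueError("no RTT summary line found")
    return dict(zip(("min", "avg", "max", "mdev"), map(float, match.groups())))

sample = "rtt min/avg/max/mdev = 13.810/14.209/15.102/0.412 ms"
print(parse_rtt_summary(sample)["avg"])  # → 14.209
```

In practice you would feed it the stdout of something like `subprocess.run(["ping", "-c", "10", host], capture_output=True, text=True)`.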
Ping tells you there's a problem. Traceroute shows you where it is.
Traceroute sends packets with incrementally increasing TTL values. Each router along the path decrements the TTL and sends back a "time exceeded" message when TTL hits zero. This reveals every hop and its latency.
On Windows:
tracert google.com
On Linux or macOS:
traceroute google.com
You'll see output like this:
1     2 ms    1 ms    2 ms   192.168.1.1
2    12 ms   11 ms   13 ms   10.45.2.1
3    45 ms   89 ms   52 ms   isp-router.net
4    14 ms   13 ms   15 ms   google.com
Each line shows one hop. The three time values represent three separate probes to that hop.
What to look for: a sustained latency jump that carries through every subsequent hop marks where the real delay begins. A spike at a single hop that vanishes at later hops, like hop 3 above where latency jumps to 45-89 ms while hop 4 is back to 14 ms, usually just means that router deprioritizes ICMP replies rather than that your traffic is slow there.
Traceroute gives you a snapshot, but latency changes over time. That's where MTR comes in: it runs continuous traceroutes and shows statistics over time, helping you spot patterns and intermittent issues.
On Linux:
mtr google.com
MTR isn't installed by default on most systems, but it's available in most package managers.
For the most accurate picture of what your applications actually experience, go beyond simple ping tests and measure real TCP connections.
When a TCP connection starts, the client sends a SYN packet, and the server responds with a SYN-ACK. The time between these two packets is your true network latency.
You can capture this with Wireshark or tcpdump:
Using tcpdump on Linux:
sudo tcpdump -i eth0 'tcp[tcpflags] & tcp-syn != 0'
This captures every packet with the SYN flag set, which includes both the client's initial SYN and the server's SYN-ACK. Look at the timestamps on a matching pair to calculate the delta. (The original filter shown in some guides, tcp-syn|tcp-ack, matches nearly all TCP traffic, since almost every segment carries the ACK flag.)
ICMP ping uses a different protocol than your actual applications. Some routers deprioritize ICMP traffic, giving you misleading results. TCP-based measurements show what your users actually experience.
This approach is especially valuable when testing latency between two servers in the same data center or measuring end-to-end application performance.
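If you just need the handshake time without running a capture, a minimal Python sketch can approximate it: connect() does not return until the server's SYN-ACK has arrived and the handshake completes, so timing it gives you roughly one round trip.

```python
import socket
import time

def tcp_connect_latency(host: str, port: int, timeout: float = 5.0) -> float:
    """Milliseconds for the TCP three-way handshake to complete."""
    start = time.perf_counter()
    # create_connection blocks until the handshake finishes (or times out)
    with socket.create_connection((host, port), timeout=timeout):
        return (time.perf_counter() - start) * 1000.0
```

For example, `tcp_connect_latency("db.internal", 5432)` (a hypothetical database host) measures the handshake time to a service port your applications really use, rather than relying on ICMP.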
One-time tests are great for troubleshooting, but they don't show you trends or alert you to problems before users complain. That's where continuous monitoring comes in.
Don't just monitor latency. Track related metrics together: packet loss, jitter, bandwidth utilization, and the CPU and memory load on your network devices.
PRTG Network Monitor provides automated, continuous latency monitoring across your entire network infrastructure. Instead of running manual ping tests, PRTG monitors all your QoS parameters 24/7.
Here's how it works:
Ping sensors continuously measure latency and packet loss to critical devices. You'll see real-time graphs showing latency trends over hours, days, or weeks.
Quality of Service (QoS) sensors track latency, jitter, packet loss, and Mean Opinion Score (MOS) for VoIP and video applications. If your video conferencing quality drops, you'll know immediately.
Flow monitoring (NetFlow, sFlow, jFlow) shows which applications and users consume bandwidth, helping you correlate latency spikes with traffic patterns.
Multi-vendor support means you can monitor Cisco, Juniper, HP, and other network hardware from a single dashboard, regardless of your network infrastructure complexity.
The key advantage? PRTG alerts you when latency exceeds your thresholds. You can fix problems during backup windows or off-peak hours, not during critical business operations.
You can't identify "high latency" without knowing what's normal for your network. Establishing baselines is critical.
Run latency tests at different times: during peak business hours, during off-peak periods, during backup windows, and on weekends.
Track these measurements for at least two weeks. Calculate the average, the minimum and maximum, and the 95th percentile.
The 95th percentile matters more than the average. It shows what users experience during busy periods, not just ideal conditions.
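Those statistics are straightforward to compute once you're logging samples. Here's a minimal sketch using a nearest-rank 95th percentile:

```python
import math
import statistics

def baseline_stats(samples_ms: list) -> dict:
    """Summarize a series of latency samples (in ms) for baselining."""
    ordered = sorted(samples_ms)
    # Nearest-rank 95th percentile: the value 95% of samples fall at or below.
    p95 = ordered[math.ceil(0.95 * len(ordered)) - 1]
    return {
        "avg": statistics.mean(ordered),
        "min": ordered[0],
        "max": ordered[-1],
        "p95": p95,
    }

print(baseline_stats(list(range(1, 101))))  # for samples 1..100 ms, p95 is 95
```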
Once you have baselines, set alerts at meaningful levels: a warning threshold roughly 50% above baseline, and a critical threshold at about double the baseline.
For example, if your baseline latency to a critical server is 20ms, set a warning at 30ms and critical at 40ms.
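That rule of thumb (warning at roughly 1.5x baseline, critical at 2x, matching the 20/30/40 ms example) can be sketched as:

```python
def latency_status(baseline_ms: float, measured_ms: float) -> str:
    """Classify a measurement: warning at 1.5x baseline, critical at 2x."""
    if measured_ms >= 2 * baseline_ms:
        return "critical"
    if measured_ms >= 1.5 * baseline_ms:
        return "warning"
    return "ok"

print(latency_status(20, 25))  # → ok
print(latency_status(20, 32))  # → warning
print(latency_status(20, 45))  # → critical
```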
Once you're measuring latency consistently, you can optimize it.
Sometimes packets take inefficient paths. Use traceroute to identify unnecessary hops, then work with your ISP or reconfigure routing protocols (BGP, OSPF) to optimize paths.
Quality of Service (QoS) prioritizes critical traffic. Configure your routers and switches to prioritize VoIP and video conferencing, interactive applications, and business-critical services.
Lower-priority traffic like backups and software updates can tolerate higher latency.
If you're running workloads in AWS, Azure, or Google Cloud, monitor latency from your on-premises network to your cloud resources. PRTG's cloud monitoring capabilities let you track latency to virtual machines and cloud services alongside your on-premises infrastructure.
For web applications, CDNs cache content closer to users, reducing latency. Monitor latency to your CDN endpoints to ensure they're performing as expected.
Symptom: Internal network latency is fine, but external sites are slow.
Likely causes: congestion or peering problems at your ISP, slow DNS resolution, or content served from geographically distant servers.
Solution: Test latency to your ISP's first hop. If it's high, contact your provider. If it's normal, the issue is beyond your ISP. Consider changing DNS servers or implementing a CDN.
Symptom: Latency is normal most of the time but spikes during backup windows or peak hours.
Likely causes: bandwidth saturation from backup jobs or bulk transfers, and interactive traffic competing without QoS prioritization.
Solution: Use network traffic monitoring to identify what's consuming bandwidth during spike periods. Schedule backups during off-peak hours or implement QoS to prioritize interactive traffic.
Symptom: Two servers on the same network segment show unexpectedly high latency.
Likely causes: errors on a switch port, a suboptimal spanning tree (STP) topology, a duplex mismatch, or a failing cable or port.
Solution: Check switch port statistics for errors. Verify STP topology. Test with different network cables or switch ports to rule out hardware issues.
Symptom: Latency is unpredictable, with random spikes that don't correlate with traffic patterns.
Likely causes: intermittent hardware faults on switches or routers, or, on wireless segments, channel interference and high channel utilization.
Solution: Use MTR or continuous monitoring to capture patterns over time. Check for hardware errors on switches and routers. For Wi-Fi, analyze channel utilization and interference.
Symptom: Small ping packets show low latency, but larger packets show higher latency.
Likely causes: a saturated link (larger packets take longer to serialize on a congested, low-bandwidth link) or fragmentation when packets exceed the path MTU.
Solution: Test with different packet sizes using ping -l 1400 google.com (Windows) or ping -s 1400 google.com (Linux). If larger packets show proportionally higher latency, you're hitting bandwidth limits.
For quick troubleshooting, use ping to test latency to your router, internal servers, and external sites. For accurate application-level measurements, capture TCP SYN-ACK packets with Wireshark or tcpdump. For ongoing visibility, implement continuous monitoring with a tool like PRTG that tracks latency, packet loss, and jitter 24/7.
The most accurate method is to run a packet capture on one server and observe the time delta between TCP SYN and SYN-ACK packets. Alternatively, use ping or specialized tools like iperf to measure round-trip time. For production environments, set up continuous monitoring to track latency trends over time.
It depends on your application. For general web browsing, under 100ms is acceptable. For VoIP and video conferencing, aim for under 150ms. Real-time gaming requires under 50ms. Internal network latency should typically be under 10ms. Establish baselines for your specific environment and set thresholds accordingly.
For troubleshooting, test continuously until you identify the issue. For proactive management, implement automated monitoring that checks latency every 60 seconds. Review trends weekly and investigate any deviations from your baseline.
Yes. When routers, switches, or firewalls experience high CPU utilization, they can't process packets quickly, introducing latency. Monitor CPU and memory on your network devices alongside latency metrics to identify these correlations.
Latency is the time it takes for data to travel between points (measured in milliseconds). Bandwidth is the amount of data that can travel in a given time (measured in Mbps or Gbps). Think of a highway: latency is how long the trip takes, bandwidth is how many lanes are available. Both affect performance, but in different ways.
Jitter is the variation in latency over time. If your latency is consistently 20ms, you have zero jitter. If it varies between 15ms and 45ms, you have high jitter. Jitter is particularly problematic for real-time applications because it causes inconsistent performance even when average latency is acceptable.
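One common way to quantify jitter is the mean absolute difference between consecutive samples. Here's a minimal sketch using the two series from the definition above:

```python
import statistics

def jitter_ms(samples_ms: list) -> float:
    """Jitter as the mean absolute difference between consecutive samples."""
    diffs = [abs(b - a) for a, b in zip(samples_ms, samples_ms[1:])]
    return statistics.mean(diffs) if diffs else 0.0

print(jitter_ms([20, 20, 20, 20]))  # → 0, steady latency means zero jitter
print(jitter_ms([15, 45, 15, 45]))  # → 30, high jitter
```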
PRTG Network Monitor provides comprehensive latency monitoring with ping and QoS sensors, flow monitoring, multi-vendor support, and threshold-based alerting.
Learn more about PRTG's latency monitoring capabilities
Use ping when: You need a quick latency check or want to verify connectivity.
Use traceroute when: You need to identify where latency is occurring along the network path.
Use MTR when: You're troubleshooting intermittent latency issues that don't appear in single tests.
Use packet capture when: You need the most accurate application-level latency measurements.
Use monitoring platforms when: You need continuous visibility, historical trends, and proactive alerts across your entire network infrastructure.
Now you know how to measure network latency using multiple methods. Here's your action plan:
Immediate actions (today): ping your router, a key internal server, and an external site to get a first read on where your latency stands.
This week: run tests at different times of day, start collecting the measurements for your baseline, and use MTR on any path that looks suspect.
Ongoing: implement continuous monitoring, set warning and critical thresholds against your baseline, and review trends weekly.
The difference between reactive and proactive network management is continuous measurement. Don't wait for users to complain about slow performance. Start monitoring latency now, and you'll catch issues before they impact your business.
Ready to implement automated latency monitoring? Explore PRTG's network monitoring capabilities and see how continuous visibility transforms network management.