What is latency? - Definition from WhatIs.com
Part of the Network administration glossary:

Latency is the delay between an input to a system and the desired outcome; the term is understood slightly differently in various contexts, and latency issues vary from one system to another. Latency greatly affects how usable and enjoyable electronic and mechanical devices, as well as communications, are.

Latency in communication is demonstrated in live transmissions from various points on the earth: the hops between a ground transmitter, a satellite and a receiver each take time, and people connecting from a distance to these live events can be seen waiting for responses. This latency is the wait time introduced by the signal travelling the geographical distance as well as passing through the various pieces of communications equipment. Even fiber-optic links are limited by more than the speed of light: the refractive index of the cable slows the signal, and every repeater or amplifier along its length introduces additional delay.

Types of latency
Network latency is an expression of how much time it takes for a packet of data to get from one designated point to another. In some environments (for example, AT&T), latency is measured by sending a packet that is returned to the sender; the round-trip time is considered the latency. Ideally, latency is as close to zero as possible.

The contributors to network latency include:

  • Propagation: This is simply the time it takes for a packet to travel between one place and another at the speed of light.
  • Transmission: The medium itself (whether optical fiber, wireless, or some other) introduces some delay, which varies from one medium to another. The size of the packet introduces delay in a round trip since a larger packet will take longer to receive and return than a short one. Also, when signals must be boosted by a repeater, this too introduces additional latency.
  • Router and other processing: Each gateway node takes time to examine and possibly change the header in a packet (for example, changing the hop count in the time-to-live field).
  • Other computer and storage delays: Within networks at each end of the journey, a packet may be subject to storage and hard disk access delays at intermediate devices such as switches and bridges. (In backbone statistics, however, this kind of latency is probably not considered.)
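The propagation contributor above can be estimated with simple arithmetic. The sketch below, in Python, computes the one-way propagation delay over a fiber link; the 4,000 km route length and the refractive index of 1.47 (typical for silica fiber) are illustrative assumptions, not figures from this article.

```python
# Estimate one-way propagation delay over a fiber link.
# Assumed figures: silica fiber refractive index ~1.47; route length is hypothetical.
SPEED_OF_LIGHT_KM_S = 299_792.458   # speed of light in vacuum, km per second
FIBER_REFRACTIVE_INDEX = 1.47       # light in fiber travels at roughly c / n

def propagation_delay_ms(distance_km: float,
                         refractive_index: float = FIBER_REFRACTIVE_INDEX) -> float:
    """One-way propagation delay in milliseconds over the given distance."""
    speed_in_medium = SPEED_OF_LIGHT_KM_S / refractive_index
    return distance_km / speed_in_medium * 1000

if __name__ == "__main__":
    # A roughly 4,000 km fiber route (e.g. coast to coast) as an illustration
    print(f"{propagation_delay_ms(4000):.1f} ms one way")
```

Note that this is only the propagation term; transmission, routing and end-system delays from the list above come on top of it.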

Internet latency is just a special case of network latency: the Internet is a very large wide-area network (WAN). The same factors as above determine latency on the Internet, but the distances in the transmission medium and the number of hops over equipment and servers are greater than on smaller networks. Internet latency measurement generally starts at the exit of a network and ends on the return of the requested data from an Internet resource.

Interrupt latency is the length of time that it takes for a computer to act on an interrupt, which is a signal telling the operating system to stop what it is doing and decide what to do in response to some event.

WAN latency itself can be an important factor in determining Internet latency. A WAN that is busy directing other traffic will produce a delay whether a resource is being requested from a server on the LAN, another computer on that network or elsewhere on the Internet. LAN users will also experience delay when the WAN is busy. In either of these examples the delay would still exist even if the rest of the hops, including the server where the desired data was located, were entirely free of traffic congestion.

Audio latency is the delay between sound being created and heard. For sound created in the physical world, this delay is determined by the speed of sound, which varies depending on the medium the sound wave travels through: it travels fastest through solids, more slowly through liquids and slowest through air. We generally refer to the speed of sound as measured in dry air at room temperature, which is about 767 miles per hour (343 meters per second). In electronics, audio latency is the cumulative delay from audio input to audio output. How long this delay is depends on the hardware and even the software used, such as the operating system and drivers in computer audio. Latencies of 30 milliseconds or more are generally noticed by a listener as a separation between the production of a sound and its arrival at the ear.
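The acoustic part of this delay follows directly from the speed of sound quoted above. A minimal Python sketch, using the 343 m/s figure for dry air at room temperature; the 10 m listener distance is a hypothetical example:

```python
SPEED_OF_SOUND_M_S = 343.0  # dry air at roughly room temperature (20 °C)

def acoustic_delay_ms(distance_m: float) -> float:
    """Delay in milliseconds for sound to travel the given distance in air."""
    return distance_m / SPEED_OF_SOUND_M_S * 1000

if __name__ == "__main__":
    # A listener 10 m from a stage hears the sound roughly 29 ms after it is made,
    # which is already around the ~30 ms threshold at which delay becomes noticeable.
    print(f"{acoustic_delay_ms(10):.1f} ms")
```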


Operational latency can be defined as the total time of a sequence of operations when they are performed in a linear workflow. In parallel workflows, latency is instead determined by the slowest operation performed by a single task worker.
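The linear-versus-parallel distinction reduces to a sum versus a maximum. A small Python sketch with hypothetical task times:

```python
def linear_latency(task_times):
    """Linear workflow: each operation waits for the previous one, so delays add."""
    return sum(task_times)

def parallel_latency(task_times):
    """Parallel workflow: latency is set by the slowest single task worker."""
    return max(task_times)

if __name__ == "__main__":
    times = [2.0, 5.0, 1.0]  # seconds per operation (illustrative values)
    print(linear_latency(times))    # 8.0 s end to end when run in sequence
    print(parallel_latency(times))  # 5.0 s when all tasks run at once
```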

Mechanical latency is the delay from input into a mechanical system or device to the desired output. This delay is determined by the Newtonian physics-based limits of the mechanism. An example would be the delay between actuating the shift lever of a gearbox or bicycle shifter and the gear actually changing.

Computer and operating system latency is the combined delay between an input or command and the desired output. In a computer system, latency is often used to mean any delay or waiting that increases real or perceived response time beyond what is desired. Specific contributors to computer latency include mismatches in data speed between the microprocessor and input/output devices, inadequate data buffers and the performance of the hardware involved, as well as its drivers. The processing load of the computer can also add significant latency.

From the user's perspective, latency issues are usually perceived as a lag between an action and the response to it. In 3D VR simulation, for example, when using a helmet that provides stereoscopic vision and head tracking, latency is the time between the computer's detection of head motion and the moment it displays the corresponding motion in the image. In multiplayer networked or Internet gaming, low latency is critical for the best gameplay and enjoyability. Control is difficult at high latency because the player lags behind the real-time events in the game, due to delays in the information reaching their computer.

Latency issues become noticeable to an individual, generally increasing user annoyance and reducing productivity, as latency rises above 30 ms. The severity of the effect varies from one application to another, as do the mitigating tactics; games, for example, can often remain enjoyable at up to around 90 ms of latency. In communications, delays can result from heavy traffic, hardware problems, or incorrect setup and configuration.

Latency testing:
Latency testing can vary from application to application. In some applications, measuring latency requires special and complex equipment or knowledge of special computer commands and programs; in other cases, latency can be measured with a stopwatch. In networking, an estimated latency to equipment or servers can be determined by running a ping command, and latency through each hop along a path can be gathered with a traceroute command. High-speed cameras may be used to capture the minute differences in response times for input to various mechanical and electronic systems.
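Beyond ping and traceroute, a simple round-trip probe can be scripted directly. The Python sketch below times a TCP handshake, which is a rough latency estimate usable even where ICMP ping is blocked; the host and port are whatever the caller supplies, and this is an illustration rather than a precision measurement tool (it includes connection-setup overhead beyond pure network delay).

```python
import socket
import time

def tcp_connect_latency_ms(host: str, port: int = 443, timeout: float = 5.0) -> float:
    """Time a TCP handshake to (host, port) and return the elapsed milliseconds.

    This approximates round-trip latency much like a ping, but over TCP.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # handshake completed; close immediately
    return (time.perf_counter() - start) * 1000
```

Running it several times and taking the minimum gives a steadier estimate, since individual probes pick up transient queuing delay.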

Reducing latency:
Reducing latency is a matter of tuning, tweaking and upgrading both computer hardware and software and mechanical systems. Within a computer, latency can be removed or hidden by techniques such as prefetching (anticipating the need for data input requests) and multithreading, or by using parallelism across multiple execution threads. Other steps to reduce latency and increase performance include uninstalling unnecessary programs, optimizing networking and software configurations, and upgrading or overclocking hardware.
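The latency-hiding idea above can be sketched briefly: if several independent I/O waits run concurrently instead of back to back, the total wait is close to the longest single wait rather than the sum. A minimal Python illustration using threads, where the 50 ms sleep stands in for a hypothetical I/O delay:

```python
import threading
import time

def fetch(item, results, i):
    """Simulate a 50 ms I/O wait (hypothetical), then record a result."""
    time.sleep(0.05)
    results[i] = item * 2

def fetch_all_parallel(items):
    """Overlap the waits: total latency approaches one wait, not the sum."""
    results = [None] * len(items)
    threads = [threading.Thread(target=fetch, args=(it, results, i))
               for i, it in enumerate(items)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

if __name__ == "__main__":
    # Four sequential 50 ms waits would take ~200 ms; overlapped, ~50 ms.
    print(fetch_all_parallel([1, 2, 3, 4]))
```

Prefetching works on the same principle: the wait is started before the result is needed, so it overlaps with useful work instead of blocking it.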


This was last updated in September 2016
Contributor(s): Matthew Haughn, Ed Blair
Posted by: Margaret Rouse
