
augmented reality (AR)

By Alexander S. Gillis

What is augmented reality (AR)?

Augmented reality (AR) is the integration of digital information with the user's environment in real time. Unlike virtual reality (VR), which creates a totally artificial environment, AR users experience a real-world environment with generated perceptual information overlaid on top of it.

Augmented reality has a variety of uses, from assisting in the decision-making process to entertainment. AR is used to either visually change natural environments in some way or to provide additional information to users. The primary benefit of AR is that it manages to blend digital and three-dimensional (3D) components with an individual's perception of the real world.

AR delivers visual elements, sound and other sensory information to the user through a device such as a smartphone, glasses or a headset. This information is overlaid onto the user's view of the real world, creating an interwoven, immersive experience in which digital information alters the user's perception of the physical world. The overlaid information can add to an environment or mask part of the natural environment.

Thomas Caudell, an employee of Boeing Computer Services' Research and Technology division, coined the term augmented reality in 1990 to describe how the head-mounted displays worn by electricians assembling complicated wiring harnesses worked. One of the first commercial applications of augmented reality technology was the yellow first-down marker that began appearing in televised football games in 1998.

Today, smartphone games, mixed-reality headsets and heads-up displays (HUDs) in car windshields are the most well-known consumer AR products. But AR technology is also being used in many industries, including healthcare, public safety, gas and oil, tourism and marketing.

How does augmented reality work?

Augmented reality can be delivered in a variety of formats, including through smartphones, glasses and headsets. AR contact lenses are also in development. The technology requires hardware components, such as a processor, sensors, a display and input devices. Mobile devices, like smartphones and tablets, already have this hardware onboard, making AR more accessible to the everyday user. Mobile devices typically contain sensors including cameras, accelerometers, Global Positioning System (GPS) instruments and solid-state compasses. In a smartphone AR application, for example, GPS pinpoints the user's location and the compass detects the device's orientation.
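
As a rough illustration, the following Swift sketch shows how a smartphone app might read the GPS position and compass heading described above using Apple's Core Location framework. The LocationProvider class is an illustrative name, not something from the article.

import CoreLocation

// Minimal sketch: reading the GPS location and compass heading that a
// location-based AR app could use to place geotagged overlays.
// LocationProvider is an illustrative name, not something from the article.
final class LocationProvider: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        manager.startUpdatingLocation()  // GPS fixes
        manager.startUpdatingHeading()   // compass / device orientation
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let location = locations.last else { return }
        print("Position: \(location.coordinate.latitude), \(location.coordinate.longitude)")
    }

    func locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
        print("Heading: \(newHeading.trueHeading) degrees")
    }
}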

Sophisticated AR programs, such as those used by the military for training, might also include machine vision, object recognition and gesture recognition. AR can be computationally intensive, so if a device lacks processing power, data processing can be offloaded to a different machine.
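
For a sense of how object recognition might plug into such a pipeline, here is a hedged Swift sketch that uses Apple's Vision framework to classify what the camera has captured. The classify(image:) helper is a hypothetical name, not something from the article.

import UIKit
import Vision

// Minimal sketch of on-device object recognition with Apple's Vision framework,
// one way an AR app might identify what the camera currently sees.
// classify(image:) is a hypothetical helper, not something from the article.
func classify(image: UIImage) {
    guard let cgImage = image.cgImage else { return }
    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])
        if let observations = request.results as? [VNClassificationObservation] {
            for observation in observations.prefix(3) {
                print("\(observation.identifier): \(observation.confidence)")
            }
        }
    } catch {
        print("Classification failed: \(error)")
    }
}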

Augmented reality apps work using either marker-based or markerless methods. Marker-based AR applications are built with special 3D programs that let developers tie animation or contextual digital information to an augmented reality marker in the real world. When a computing device's AR app or browser plugin recognizes a known marker, it executes the marker's code and layers the correct image or images.
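
The following Swift sketch shows one way marker-based detection might look using ARKit's image detection. The "Markers" reference-image group and the overlay geometry are illustrative assumptions, not details from the article.

import ARKit
import SceneKit
import UIKit

// Minimal sketch of marker-based AR with ARKit image detection.
// "Markers" is an assumed asset-catalog group of reference images,
// not something named in the article.
final class MarkerARViewController: UIViewController, ARSCNViewDelegate {
    private let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)

        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionImages = ARReferenceImage.referenceImages(
            inGroupNamed: "Markers", bundle: nil)
        sceneView.session.run(configuration)
    }

    // Called when ARKit recognizes one of the markers: layer content onto it.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARImageAnchor else { return }
        let overlay = SCNNode(geometry: SCNPlane(width: 0.1, height: 0.1))
        overlay.eulerAngles.x = -.pi / 2  // lay the plane flat on the marker
        node.addChildNode(overlay)
    }
}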

Markerless AR is more complex. The AR device doesn't focus on a specific point, so the device must recognize items as they appear in view. This type of AR requires a recognition algorithm that detects nearby objects and determines what they are. Then, using the onboard sensors, the device can overlay images within the user's environment.
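
A minimal Swift sketch of the markerless approach, assuming ARKit's plane detection stands in for the recognition algorithm described above; the PlaneWatcher class is an illustrative name.

import ARKit

// Minimal sketch of markerless AR: no predefined marker, so the session
// watches for surfaces as they come into view and reports them as anchors.
// PlaneWatcher is an illustrative name, not something from the article.
final class PlaneWatcher: NSObject, ARSessionDelegate {
    private let session = ARSession()

    func start() {
        session.delegate = self
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical]
        session.run(configuration)
    }

    // Called whenever ARKit recognizes a new surface in the environment.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            print("Detected a surface at \(plane.transform.columns.3)")
        }
    }
}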

Differences between AR and VR

VR is a virtual environment created with software and presented to users in such a way that their brain suspends disbelief long enough to accept the virtual world as a real environment. Virtual reality is primarily experienced through a headset with sight and sound.

The biggest difference between AR and VR is that augmented reality uses the existing real-world environment and puts virtual information on top of it, whereas VR completely immerses users in a virtually rendered environment.

The devices used to accomplish this are also different. VR relies on headsets that fit over the user's head and present simulated audiovisual information. AR hardware is less restrictive and typically includes phones, glasses, projections and HUDs.

In VR, people are placed inside a 3D environment in which they can move around and interact with the generated surroundings. AR, however, keeps users grounded in the real world, overlaying virtual data as a visual layer within their environment. So, for example, while VR places a user in a fully simulated environment, AR could overlay a web browser in front of the user in their living room. Spatial computing headsets, such as Apple Vision Pro and Meta Quest 3, block the user's natural vision, so they use a technique called passthrough: the headset shows what its front-facing cameras see on the internal display.

Although the term is often used interchangeably with AR, mixed reality refers to a virtual display overlaid on a real-world environment with which users can interact. For example, Apple Vision Pro can project a virtual keyboard that the wearer can use to type. The key difference between mixed reality and AR is the user's ability to interact with the digital display.

Top AR use cases

AR can be used in the following ways, among others:

Examples of AR

Examples of AR include the following:

Future of AR technology

AR technology is growing steadily as apps and games such as Pokemon Go and retail stores' AR shopping apps become more popular and familiar.

Apple continues to develop and update ARKit, its mobile augmented reality development framework. Companies including Target and Ikea use ARKit in their flagship AR shopping apps for iPhone and iPad. ARKit 6, for example, enables the rendering of AR in 4K high dynamic range and improves image and video capture. ARKit also provides a Depth API, which uses per-pixel depth information to help a device's camera understand the size and shape of an object, and scene geometry, which creates a topological map of a space, along with other features.
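
A hedged Swift sketch of how an app might opt into the per-pixel depth and scene-geometry features mentioned above, checking device support first:

import ARKit

// Minimal sketch of opting into ARKit's per-pixel scene depth and
// scene-geometry (mesh) features on devices that support them.
func makeDepthAwareConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        configuration.frameSemantics.insert(.sceneDepth)  // per-pixel depth map
    }
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh  // topological map of the space
    }
    return configuration
}
// A running session then exposes depth data via session.currentFrame?.sceneDepth.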

ARCore, Google's platform for building AR experiences on Android and iOS, continues to evolve and improve. For example, ARCore offers a Geospatial API that sources data from Google Earth 3D models and Street View imagery from Google Maps. Similar to ARKit's Depth API, ARCore has improved its own Depth API, optimizing it for longer-range depth sensing.

Improved AR, VR and mixed-reality headsets are also being released. For example, Meta improved its Quest 2 headset with Meta Quest 3, which was released in October 2023. This new headset is slimmer, lighter and more ergonomic than Quest 2.

In February 2024, Apple released Apple Vision Pro, bringing more competition to the AR and VR headset market. Vision Pro is targeted at early adopters and developers at a much higher price point than Quest 3: Meta Platforms is pursuing a wider audience at a $499 price point, while Apple is pricing Vision Pro at $3,499. Apple is expected to produce a non-Pro variant of its headset at a more affordable price in the future. Developers building for Apple Vision Pro must work with the visionOS software development kit, but they can still use familiar Apple tools, such as ARKit, SwiftUI and RealityKit, to build apps.
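
As a rough illustration of those tools working together, the following visionOS-style Swift sketch uses SwiftUI and RealityKit to place a simple piece of 3D content; the sphere and its placement are illustrative only.

import SwiftUI
import RealityKit

// Minimal visionOS-style sketch combining SwiftUI and RealityKit.
// The sphere and its placement are illustrative only.
struct FloatingSphereView: View {
    var body: some View {
        RealityView { content in
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .blue, isMetallic: false)])
            sphere.position = [0, 1.2, -0.5]  // roughly eye level, half a meter ahead
            content.add(sphere)
        }
    }
}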

Other potential future advancements for AR include the following:

AR, VR and mixed-reality technologies are being used in various industries. Learn how each of these technologies differs.

21 Mar 2024
