How Does Virtual Reality (VR) Technology Work?
It might seem like virtual reality (VR) technology has only been around for a few short years.
However, the VR systems and headsets we know today have been under development for decades. The earliest progenitor of today's VR systems actually dates back to 1957, when Morton Heilig filed a patent for a head-mounted stereoscopic television device.
In the years since, VR technology has been making slow but steady progress. At first, developers lacked the computing power to create a truly immersive VR experience. Then, once they had it, the race was on to make it portable and affordable for the average consumer.
That's where we are today. Companies like HTC, Oculus, Valve, and Sony now offer commercially viable VR hardware that's continuing to improve by leaps and bounds. For that reason, people all around the world are now familiar with VR and understand what it is. Most don't, however, have a firm grasp on the specifics of the technology.
A technical guide to virtual reality
To remedy that, here's a basic technical guide to virtual reality technology. You'll learn how it works, what it takes to make it work, and where the technology might go next. Let's dive in.
The scientific basics of virtual reality
At its core, VR technology has only one purpose: to simulate settings and environments realistically enough to fool the human brain into accepting them as reality. From a scientific standpoint, that all begins by understanding how our brains interpret the things we see to develop a mental picture of the world around us.
Without getting into too much detail, the simplest explanation is that our perception of reality is based on rules we develop using our experiences as a guide. For example, when we see the sky, it tells us which direction is "up". When we see objects we can identify, we can use their size relative to one another to judge distance. We can also detect light sources by picking up on the shadows cast by the objects around us.
VR designers can use those conventional rules to create virtual environments that conform to our mental expectations of reality. When they do, the result is a seamless experience that we interpret as "real".
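To make the relative-size cue concrete, here is a minimal sketch (with hypothetical numbers) of the pinhole-camera relationship between an object's real size, its on-screen size, and its distance; rendering engines rely on the same similar-triangles geometry when drawing a virtual scene:

```python
def distance_from_size(known_height_m: float, image_height_px: float,
                       focal_length_px: float) -> float:
    """Pinhole-camera version of the 'relative size' depth cue:
    a familiar object that appears smaller on screen must be farther away.
    distance = real size * focal length / on-screen size (similar triangles)."""
    return known_height_m * focal_length_px / image_height_px

# Hypothetical numbers: a 1.8 m person drawn 180 px tall by a camera with a
# 900 px focal length reads as standing 9 m away.
print(distance_from_size(1.8, 180, 900))  # 9.0
```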
The technical basics of virtual reality
Today's commercial VR systems are all competing to determine which can provide the best possible user experience in a virtual setting. In truth, none of them are capable of a completely immersive experience, for one very simple reason: the technology hasn't caught up with the capabilities of human vision – yet. Here's a breakdown of where today's VR headsets are, and where they're trying to reach.
Field of view
From a technical point of view, one of the biggest hurdles is the fact that humans are capable of a much wider field of view (FOV) than today's headsets can provide. An average human can see the environment around them in a roughly 200 to 220-degree arc around their head. Where the sight lines of the left and right eyes overlap, there is a roughly 114-degree arc in which we see in 3D.
Today's headsets concentrate on that 114-degree 3D space to deliver their virtual environments. No headset, however, can yet accommodate the full FOV of the average human. VR hardware designers are currently aiming for devices with a 180-degree FOV, which is considered ideal for a high-performance VR simulation.
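One way to see the trade-off is in angular resolution: stretching a fixed display panel across a wider arc leaves fewer pixels per degree. A minimal sketch, with hypothetical panel numbers (no particular headset is implied):

```python
def pixels_per_degree(horizontal_pixels: int, horizontal_fov_deg: float) -> float:
    """Rough angular resolution: how many panel pixels cover each
    degree of the headset's horizontal field of view."""
    return horizontal_pixels / horizontal_fov_deg

# Hypothetical per-eye panel 1440 px wide:
print(pixels_per_degree(1440, 110))  # ~13.1 px/deg at a 110-degree FOV
print(pixels_per_degree(1440, 180))  # ~8.0 px/deg if stretched to 180 degrees
```

This is why a wider FOV demands not just new optics but substantially higher-resolution panels to look equally sharp.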
Frame rate
In the world of VR, perhaps no topic generates more disagreement than the frame rates of virtual environments. That's because there's no real scientific consensus on how sensitive human vision is in that regard. From a physical standpoint, we know that human eyes can register the equivalent of up to 1,000 frames per second (FPS). The human brain, however, never receives that much detail via the optic nerve. Studies have suggested that humans can discern frame rates up to about 150 FPS; beyond that, the information is lost on the way to the brain.
A movie you see in a theater runs at 24 FPS, but it isn't designed to simulate reality. For VR applications, most developers have found that anything less than 60 FPS tends to cause disorientation, headaches, and nausea in the user. For that reason, most developers aim for a VR content "sweet spot" of about 90 FPS, and some (like Sony) won't certify software for their devices if it falls below 60 FPS at any point during use. Going forward, most VR hardware developers will push for frame rates of 120 FPS or more, as that provides a more true-to-life experience for most applications.
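Those targets translate directly into a rendering time budget per frame. A quick sketch of the arithmetic:

```python
def frame_budget_ms(fps: float) -> float:
    """Time the renderer has to finish one frame, in milliseconds."""
    return 1000.0 / fps

for fps in (24, 60, 90, 120):
    print(f"{fps:>3} FPS -> {frame_budget_ms(fps):5.2f} ms per frame")
# 24 FPS -> 41.67 ms; 60 FPS -> 16.67 ms; 90 FPS -> 11.11 ms; 120 FPS -> 8.33 ms
```

At 120 FPS, the entire scene, for both eyes, must be simulated and drawn in just over 8 milliseconds, which is the core of the hardware challenge.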
Sound effects
Another crucial technical aspect of VR is the way that designers use sound effects to convey a sense of three-dimensional space to the user. Today, cutting-edge VR relies on a technology called spatial audio to create a simulated audio landscape that matches the visuals created by VR.
Anyone who has ever sat in a well-designed concert hall should be familiar with how the sounds we hear can vary based on where we're located within a space and even which way we turn our heads. Spatial audio is a technique whereby VR designers can produce binaural (stereo) audio through a set of headphones that mimics that exact sensation.
There are a variety of current implementations, but they all share some similar characteristics, including the following (a brief code sketch follows the list):
Controlling volume
Using left/right delay to convey direction
Using head tracking to map auditory space
Manipulating reverberation and echo to simulate environmental factors
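As a rough illustration of the first two items, here is a minimal sketch of delay-and-gain panning (assuming NumPy, a 48 kHz sample rate, and Woodworth's approximation for the interaural time difference); production spatial audio instead convolves the signal with measured head-related transfer functions (HRTFs):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature
HEAD_RADIUS = 0.0875    # m; an average head radius (assumed value)
SAMPLE_RATE = 48_000    # Hz

def binaural_pan(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Crude spatialization: delay and attenuate the ear farther from the
    source. azimuth_deg: 0 = straight ahead, positive = source to the right."""
    az = np.radians(azimuth_deg)
    # Woodworth's approximation of the interaural time difference (seconds).
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (abs(az) + abs(np.sin(az)))
    delay = int(round(itd * SAMPLE_RATE))
    far_gain = 1.0 - 0.3 * abs(np.sin(az))  # simple interaural level difference
    far = np.concatenate([np.zeros(delay), mono[: len(mono) - delay]]) * far_gain
    # For a source on the right, the left ear is the far ear.
    left, right = (far, mono) if azimuth_deg >= 0 else (mono, far)
    return np.stack([left, right], axis=1)  # shape (samples, 2)

stereo = binaural_pan(np.random.randn(SAMPLE_RATE), azimuth_deg=90.0)
```

At 90 degrees, the computed delay comes out near 0.66 ms (about 31 samples at 48 kHz), consistent with the commonly cited human maximum of roughly 0.6 to 0.7 ms.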
It's also important to remember that a VR headset must compute the audio effects described here in real time to account for the user's movement. In that respect, today's VR hardware is still just beginning to scratch the surface of what's possible.
Head and position tracking
The real magic of VR doesn't come from how convincing the visuals or sound are (although those are critical foundational elements), it comes from the fact that users can move within a virtual space that adjusts to their position. It's what separates a VR headset from a simple set of video viewing glasses.
Right now, there are two types of head and position tracking in use for VR applications, measured in degrees of freedom (DoF): 3DoF and 6DoF. Mobile VR headsets like the Samsung Gear VR, Google's Daydream View, and the Oculus Go use 3DoF, which means they are capable of rotational tracking only. They know when you turn your head left or right, look up or down, or tilt your head to one side. If you move your whole body, though, they won't pick that up.
Headsets that use 6DoF, by contrast, can track the wearer's position within the room, as well as the direction their head is pointed. That means 6DoF headsets can allow for full autonomous movement through a 3D space, which is a far more convincing VR experience. The way it's done varies from platform to platform, but major methods tend to include camera-based tracking in concert with infrared light beacons.
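A minimal sketch of the distinction (assuming NumPy, with head orientation as a 3×3 rotation matrix): a 3DoF view transform uses only the head's rotation, while 6DoF also subtracts the head's position, which is what lets a user physically walk toward a virtual object:

```python
import numpy as np

def view_point_3dof(head_rotation: np.ndarray,
                    world_point: np.ndarray) -> np.ndarray:
    """Rotational tracking only: the scene pivots around a head fixed in place."""
    return head_rotation.T @ world_point

def view_point_6dof(head_rotation: np.ndarray, head_position: np.ndarray,
                    world_point: np.ndarray) -> np.ndarray:
    """Rotation plus translation: moving your body changes what you see."""
    return head_rotation.T @ (world_point - head_position)
```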
Where virtual reality is headed
As advanced as today's VR technology is, it's bound to get a whole lot better in the coming years. As developments continue, we should start to see hardware with an enhanced, more lifelike FOV, and better 3D audio to match. That alone makes the near-term future of VR exciting.
We're also on the cusp of improvements to VR that will make the experience vastly better than what today's hardware can deliver. One is haptic feedback devices like the HaptX Gloves, which provide realistic touch sensations for the objects users interact with in VR. Another is a graphics technique known as foveated rendering, which exploits the fact that the human eye resolves fine detail only at the center of gaze: the system delivers ultra-high-definition imagery only where the eyes are focused, lowering the computing power required to create each frame.
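A minimal sketch of the foveated-rendering idea (assuming NumPy; the radii and rates below are illustrative, not drawn from any shipping headset): pixels near the tracked gaze point get the full shading rate, and the periphery gets progressively less:

```python
import numpy as np

def shading_rate_map(width: int, height: int,
                     gaze_x: float, gaze_y: float) -> np.ndarray:
    """Per-pixel shading rate: 1.0 at the gaze point, lower in the periphery."""
    ys, xs = np.mgrid[0:height, 0:width]
    # Distance from the gaze point, normalized by the image diagonal.
    d = np.hypot(xs - gaze_x, ys - gaze_y) / np.hypot(width, height)
    rate = np.ones((height, width))
    rate[d > 0.15] = 0.5    # half-rate shading in the mid-periphery
    rate[d > 0.40] = 0.25   # quarter-rate shading in the far periphery
    return rate
```

Because most of the frame falls outside the foveal region, even this crude two-ring scheme can cut the shading work dramatically without the user noticing.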
What's more important, though, are the new ways that VR is likely to be used. Parallel advances in machine learning are poised to make immersive distance learning a reality for the first time. Surgeons will benefit from advanced VR training that improves patient outcomes. And those in need of treatment for PTSD and related disorders will gain new options such as VR-based exposure therapy.
The bottom line here is that VR technology is only just beginning to realize its potential in a variety of fields. As the technology grows, so too will the applications that talented software developers, researchers, and business leaders dream up for it. From that standpoint, it's fair to say that we are much closer to the beginning of the story of virtual reality than we are to its conclusion – and there are going to be a whole lot more amazing developments to come.
What Is Virtual Reality All About?
Virtual reality has taken the tech world by storm. But just what is VR?
Read on to discover what VR can do—and how you can be a part of the unbelievable experiences that transport you from the comfort of your living room to far-off worlds you never knew existed.
The Lowdown on Virtual Reality
VR uses cutting-edge graphics, best-in-class hardware, and artistically rendered experiences to create a computer-simulated environment where you aren’t just a passive participant, but a co-conspirator. With a VR headset, you’re fully absorbed in realistic 3D worlds, creating a major shift in how we experience the digital realm.
A VR headset usually features a display split between the eyes to show each eye a different feed. This creates a stereoscopic 3D effect with stereo sound. It also tracks your position in space to orient your point of view in the system.
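A minimal sketch of how those two per-eye feeds come about (assuming NumPy and an interpupillary distance of about 63 mm, a typical but assumed value): each eye's virtual camera is offset half the IPD along the head's local x-axis, so the two rendered images differ slightly and the brain fuses them into depth:

```python
import numpy as np

IPD_M = 0.063  # interpupillary distance in metres (typical value; an assumption)

def eye_positions(head_position: np.ndarray, head_rotation: np.ndarray):
    """Place the left and right virtual cameras half the IPD to either
    side of the head, along the head's local x-axis."""
    right_axis = head_rotation @ np.array([1.0, 0.0, 0.0])
    left_eye = head_position - right_axis * (IPD_M / 2)
    right_eye = head_position + right_axis * (IPD_M / 2)
    return left_eye, right_eye
```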
When you combine the VR headset and input tracking, you get a completely immersive and realistic experience. Because the view shifts every time you move your head, you feel like you’re “in the game” mentally and physically. In other words, you feel like you’re part of another universe.
Who Is VR For?
While VR adds a whole new layer to entertainment, the technology goes beyond gaming to offer something for everyone.
Did you know that, with VR, you can learn a new language, teleport almost anywhere in the world, or step aboard the International Space Station? VR lets you explore new worlds and attempt feats that seem unimaginable. And it has the potential to transform how we play, work, learn, communicate, and experience the world around us.
Consider the possibilities in healthcare. Oculus partnered with Children’s Hospital Los Angeles to build a VR simulation that enables medical students and staff to be fully immersed in high-risk pediatric trauma situations where split-second decisions mean the difference between life and death. These virtual scenarios empower doctors and students to practice and learn in realistic workplace conditions, helping them hone the skills they’ll use to treat patients. By training with VR, medical providers can deliver better care.
VR is used in the automotive industry to experiment with new automobile designs. You’ll also find brands using it in retail to help shoppers virtually “try on” clothing and accessories to assist with purchasing decisions. And it’s even being used in law enforcement and the military for training.
So while games are an integral part of virtual reality, VR has plenty of different applications that will only expand as the technology develops further.
Step into Another World
Of course, reading about VR is much different than experiencing it first-hand. So get ready to go snowboarding in your living room, have a work meeting as an avatar, or explore Machu Picchu from your kitchen. With VR, the possibilities are out of this world.
Summary
The term virtual reality was coined in 1987 by Jaron Lanier, whose research and engineering contributed a number of products to the nascent VR industry. A common thread linking early VR research and technology development in the United States was the role of the federal government, particularly the Department of Defense, the National Science Foundation, and the National Aeronautics and Space Administration (NASA). Projects funded by these agencies and pursued at university-based research laboratories yielded an extensive pool of talented personnel in fields such as computer graphics, simulation, and networked environments and established links between academic, military, and commercial work. The history of this technological development, and the social context in which it took place, is the subject of this article.
virtual reality (VR), the use of computer modeling and simulation that enables a person to interact with an artificial three-dimensional (3-D) visual or other sensory environment. VR applications immerse the user in a computer-generated environment that simulates reality through the use of interactive devices, which send and receive information and are worn as goggles, headsets, gloves, or body suits. In a typical VR format, a user wearing a helmet with a stereoscopic screen views animated images of a simulated environment. The illusion of “being there” (telepresence) is effected by motion sensors that pick up the user’s movements and adjust the view on the screen accordingly, usually in real time (the instant the user’s movement takes place). Thus, a user can tour a simulated suite of rooms, experiencing changing viewpoints and perspectives that are convincingly related to his own head turnings and steps. Wearing data gloves equipped with force-feedback devices that provide the sensation of touch, the user can even pick up and manipulate objects that he sees in the virtual environment.
Early work
Artists, performers, and entertainers have always been interested in techniques for creating imaginative worlds, setting narratives in fictional spaces, and deceiving the senses. Numerous precedents for the suspension of disbelief in an artificial world in artistic and entertainment media preceded virtual reality. Illusionary spaces created by paintings or views have been constructed for residences and public spaces since antiquity, culminating in the monumental panoramas of the 18th and 19th centuries. Panoramas blurred the visual boundaries between the two-dimensional images displaying the main scenes and the three-dimensional spaces from which these were viewed, creating an illusion of immersion in the events depicted. This image tradition stimulated the creation of a series of media—from futuristic theatre designs, stereopticons, and 3-D movies to IMAX movie theatres—over the course of the 20th century to achieve similar effects. For example, the Cinerama widescreen film format, originally called Vitarama when invented for the 1939 New York World’s Fair by Fred Waller and Ralph Walker, originated in Waller’s studies of vision and depth perception. Waller’s work led him to focus on the importance of peripheral vision for immersion in an artificial environment, and his goal was to devise a projection technology that could duplicate the entire human field of vision. The Vitarama process used multiple cameras and projectors and an arc-shaped screen to create the illusion of immersion in the space perceived by a viewer. Though Vitarama was not a commercial hit until the mid-1950s (as Cinerama), the Army Air Corps successfully used the system during World War II for anti-aircraft training under the name Waller Flexible Gunnery Trainer—an example of the link between entertainment technology and military simulation that would later advance the development of virtual reality.
[Image: Panorama of the Battle of Gettysburg, painting by Paul Philippoteaux, 1883; at Gettysburg National Military Park, Pennsylvania. Photograph by James P. Rowan.]
Sensory stimulation was a promising method for creating virtual environments before the use of computers. After the release of a promotional film called This Is Cinerama (1952), the cinematographer Morton Heilig became fascinated with Cinerama and 3-D movies. Like Waller, he studied human sensory signals and illusions, hoping to realize a “cinema of the future.” By late 1960, Heilig had built an individual console with a variety of inputs—stereoscopic images, motion chair, audio, temperature changes, odours, and blown air—that he patented in 1962 as the Sensorama Simulator, designed to “stimulate the senses of an individual to simulate an actual experience realistically.” During the work on Sensorama, he also designed the Telesphere Mask, a head-mounted “stereoscopic 3-D TV display” that he patented in 1960. Although Heilig was unsuccessful in his efforts to market Sensorama, in the mid-1960s he extended the idea to a multiviewer theatre concept patented as the Experience Theater and a similar system called Thrillerama for the Walt Disney Company.
The seeds for virtual reality were planted in several computing fields during the 1950s and ’60s, especially in 3-D interactive computer graphics and vehicle/flight simulation. Beginning in the late 1940s, Project Whirlwind, funded by the U.S. Navy, and its successor project, the SAGE (Semi-Automated Ground Environment) early-warning radar system, funded by the U.S. Air Force, first utilized cathode-ray tube (CRT) displays and input devices such as light pens (originally called “light guns”). By the time the SAGE system became operational in 1957, air force operators were routinely using these devices to display aircraft positions and manipulate related data.
During the 1950s, the popular cultural image of the computer was that of a calculating machine, an automated electronic brain capable of manipulating data at previously unimaginable speeds. The advent of more affordable second-generation (transistor) and third-generation (integrated circuit) computers emancipated the machines from this narrow view, and in doing so it shifted attention to ways in which computing could augment human potential rather than simply substituting for it in specialized domains conducive to number crunching. In 1960 Joseph Licklider, a professor at the Massachusetts Institute of Technology (MIT) specializing in psychoacoustics, posited a “man-computer symbiosis” and applied psychological principles to human-computer interactions and interfaces. He argued that a partnership between computers and the human brain would surpass the capabilities of either alone. As founding director of the new Information Processing Techniques Office (IPTO) of the Defense Advanced Research Projects Agency (DARPA), Licklider was able to fund and encourage projects that aligned with his vision of human-computer interaction while also serving priorities for military systems, such as data visualization and command-and-control systems.
Another pioneer was electrical engineer and computer scientist Ivan Sutherland, who began his work in computer graphics at MIT’s Lincoln Laboratory (where Whirlwind and SAGE had been developed). In 1963 Sutherland completed Sketchpad, a system for drawing interactively on a CRT display with a light pen and control board. Sutherland paid careful attention to the structure of data representation, which made his system useful for the interactive manipulation of images. In 1964 he was put in charge of IPTO, and from 1968 to 1976 he led the computer graphics program at the University of Utah, one of DARPA’s premier research centres. In 1965 Sutherland outlined the characteristics of what he called the “ultimate display” and speculated on how computer imagery could construct plausible and richly articulated virtual worlds. His notion of such a world began with visual representation and sensory input, but it did not end there; he also called for multiple modes of sensory input. DARPA sponsored work during the 1960s on output and input devices aligned with this vision, such as the Sketchpad III system by Timothy Johnson, which presented 3-D views of objects; Larry Roberts’s Lincoln Wand, a system for drawing in three dimensions; and Douglas Engelbart’s invention of a new input device, the computer mouse.
Within a few years, Sutherland contributed the technological artifact most often identified with virtual reality, the head-mounted 3-D computer display. In 1967 Bell Helicopter (now part of Textron Inc.) carried out tests in which a helicopter pilot wore a head-mounted display (HMD) that showed video from a servo-controlled infrared camera mounted beneath the helicopter. The camera moved with the pilot’s head, both augmenting his night vision and providing a level of immersion sufficient for the pilot to equate his field of vision with the images from the camera. This kind of system would later be called “augmented reality” because it enhanced a human capacity (vision) in the real world. When Sutherland left DARPA for Harvard University in 1966, he began work on a tethered display for computer images. This was an apparatus shaped to fit over the head, with goggles that displayed computer-generated graphical output. Because the display was too heavy to be borne comfortably, it was held in place by a suspension system. Two small CRT displays were mounted in the device, near the wearer’s ears, and mirrors reflected the images to his eyes, creating a stereo 3-D visual environment that could be viewed comfortably at a short distance. The HMD also tracked where the wearer was looking so that correct images would be generated for his field of vision. The viewer’s immersion in the displayed virtual space was intensified by the visual isolation of the HMD, yet other senses were not isolated to the same degree and the wearer could continue to walk around.