Immersive technology will revolutionize our daily lives

The digital and physical worlds will soon be integrated, allowing us to interact with internet-based technologies more seamlessly than ever.

Immersive technology aims to overlay a digital layer of experience atop everyday reality, changing how we interact with everything from medicine to entertainment. What that future will look like is anyone’s guess. But what’s certain is that immersive technology is on the rise.

The extended reality (XR) industry — which includes virtual reality (VR), augmented reality (AR), and mixed reality (MR), which involves both virtual and physical spaces — is projected to grow from $43 billion in 2020 to $333 billion by 2025, according to a recent market forecast. Much of that growth will be driven by consumer technologies, such as VR video games, which are projected to be worth more than $90 billion by 2027, and AR glasses, which Apple and Facebook are currently developing.

But other sectors are adopting immersive technologies, too. A 2020 survey found that 91 percent of businesses are currently using some form of XR or plan to use it in the future. The range of XR applications seems endless: Boeing technicians use AR when installing wiring in airplanes. H&R Block service representatives use VR to boost their on-the-phone soft skills. And KFC developed an escape-room VR game to train employees how to make fried chicken.

XR applications not only train and entertain; they also have the unique ability to transform how people perceive familiar spaces. Take theme parks, which are using immersive technology to add a new experiential layer to their existing rides, such as roller coasters where riders wear VR headsets. Some parks, like China’s $1.5 billion VR Star Theme Park, don’t have physical rides at all.

One of the most novel innovations in theme parks is Disney’s Star Wars: Galaxy’s Edge attraction, which has multiple versions: physical locations in California and Florida and a near-identical virtual replica within the “Tales from the Galaxy’s Edge” VR game.

“That’s really the first instance of anything like this that’s ever been done, where you can get a deeper dive, and a somewhat different view, of the same location by exploring its digital counterpart,” game designer Michael Libby told Freethink.

Libby now runs Worldbuildr, a company that uses game-engine software to prototype theme park attractions before construction begins. The prototypes provide a real-time VR preview of everything riders will experience during the ride. That raises the question: as VR technology keeps improving, will there come a point when there's no need for the physical ride at all?

Maybe. But probably not anytime soon.

“I think we’re more than a few minutes from the future of VR,” Sony Interactive Entertainment CEO Jim Ryan told the Washington Post in 2020. “Will it be this year? No. Will it be next year? No. But will it come at some stage? We believe that.”

It could take years for XR to become mainstream. But that growth period is likely to be a brief chapter in the long history of XR technologies.

The evolution of immersive technology

The first crude example of XR technology came in 1838, when the English scientist Charles Wheatstone invented the stereoscope, a device through which people could view two images of the same scene rendered from slightly different angles, creating the illusion of depth and solidity. Yet it took another century before anything resembling our modern conception of immersive technology struck the popular imagination.

In 1935, the science fiction writer Stanley G. Weinbaum wrote a short story called “Pygmalion’s Spectacles,” which describes a pair of goggles that enables one to perceive “a movie that gives one sight and sound […] taste, smell, and touch. […] You are in the story, you speak to the shadows (characters) and they reply, and instead of being on a screen, the story is all about you, and you are in it.”

The 1950s and 1960s saw some bold but crude forays into XR, such as the Sensorama, an “experience theater” that featured a movie screen complemented by fan-generated wind, a moving chair, and a machine that produced scents. There was also the Telesphere Mask, which packed most of the same features into a headset whose design was presciently similar to modern models.

The first functional AR device came in 1968 with Ivan Sutherland’s The Sword of Damocles, a heavy headset through which viewers could see basic shapes and structures overlaid on the room around them. The 1980s brought interactive VR systems featuring goggles and gloves, like NASA’s Virtual Interface Environment Workstation (VIEW), which let astronauts control robots from a distance using hand and finger movements.

Video: NASA’s virtual reality research in the 1980s

That same technology led to new XR devices in the gaming industry, like Nintendo’s Power Glove and Virtual Boy. But despite a ton of hype over XR in the 1980s and 1990s, these flashy products failed to sell. The technology was too clunky and costly.

In 2012, the gaming industry saw a more successful run at immersive technology when Oculus VR raised $2.5 million on Kickstarter to develop a VR headset. Unlike previous headsets, the Oculus model offered a 90-degree field of view, was priced reasonably, and relied on a personal computer for processing power.

In 2014, Facebook acquired Oculus for $2 billion, and the following years brought a wave of new VR products from companies like Sony, Valve, and HTC. The most recent market evolution has been toward standalone wireless VR headsets that don’t require a computer, like the Oculus Quest 2, which received five times as many preorders in 2020 as its predecessor did in 2019.

Also notable about the Oculus Quest 2 is its price: $299, a full $100 cheaper than the first version. For years, market experts have said cost is the primary barrier to VR adoption; the Valve Index headset, for example, starts at $999, and that price doesn’t include games, which can cost $60 apiece. But as hardware gets better and prices fall, immersive technology might become a staple in homes and industry.

Advancing XR technologies

Over the short term, it’s unclear whether the recent wave of interest in XR technologies is just hype. But there’s reason to think it’s not. In addition to surging sales of VR devices and games, particularly amid the COVID-19 pandemic, Facebook’s heavy investments in XR suggest there’s plenty of space into which these technologies could grow.

A report from The Information published in March found that roughly 20 percent of Facebook personnel work in the company’s AR/VR division called Facebook Reality Labs, which is “developing all the technologies needed to enable breakthrough AR glasses and VR headsets, including optics and displays, computer vision, audio, graphics, brain-computer interface, haptic interaction.”

What would “breakthroughs” in XR technologies look like? It’s unclear exactly what Facebook has in mind, but there are some well-known points of friction that the industry is working to overcome. For example, locomotion is a longstanding problem in VR games. Sure, some advanced systems — that is, ones that cost far more than $300 — include treadmill-like devices on which you move through the virtual world by walking, running, or tilting your center of gravity.

But for consumer-grade devices, the options are currently limited to using a joystick, walking in place, leaning forward, or pointing and teleporting. (There are also electronic boots that keep you in place as you walk, for what it’s worth.) These solutions usually work well enough, but they produce an inherent sensory contradiction: your avatar moves through the virtual world while your body remains still. The locomotion problem is why most VR games don’t require swift character movements and why designers often compensate by having the player sit in a cockpit or otherwise limiting the game environment to a confined space.
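
The point-and-teleport option, in particular, is simple to reason about: cast a ray from the controller down to the floor and snap the player’s position to wherever it lands. Below is a minimal, engine-agnostic sketch of that idea; the function and variable names are hypothetical rather than drawn from any particular VR SDK.

```python
# Minimal sketch of point-and-teleport locomotion. Illustrative only and not
# tied to any VR SDK; the names here are hypothetical.
import numpy as np

def teleport_target(controller_pos, controller_dir, floor_y=0.0, max_range=8.0):
    """Cast a ray from the controller and return the floor point it hits,
    or None if the ray never reaches the floor within range."""
    direction = controller_dir / np.linalg.norm(controller_dir)
    if direction[1] >= 0:                              # pointing level or upward
        return None
    t = (floor_y - controller_pos[1]) / direction[1]   # ray-plane intersection
    if t < 0 or t > max_range:
        return None
    return controller_pos + t * direction

def teleport(player_origin, controller_pos, controller_dir):
    """Snap the player rig to the targeted floor point, keeping its height."""
    target = teleport_target(controller_pos, controller_dir)
    if target is None:
        return player_origin                           # invalid target: stay put
    return np.array([target[0], player_origin[1], target[2]])

# Example: a controller held at about 1.2 m, pointing down and forward.
new_origin = teleport(np.array([0.0, 0.0, 0.0]),       # current player position
                      np.array([0.0, 1.2, 0.0]),       # controller position
                      np.array([0.0, -0.5, -1.0]))     # controller aim direction
print(new_origin)   # the rig jumps forward to where the ray met the floor
```

Jumping in discrete steps rather than sliding the camera smoothly is a deliberate design choice: it sidesteps the mismatch between visual motion and a stationary body that makes many players queasy.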

For AR, one key hurdle is fine-tuning the technology to ensure that the virtual content you see through, say, a pair of smart glasses is optically consistent with physical objects and spaces. Currently, AR often appears clunky and untethered from the real world. Incorporating LiDAR (Light Detection and Ranging) into AR devices may do the trick. The futurist Bernard Marr elaborated on his blog:


“[LIDAR] is essentially used to create a 3D map of surroundings, which can seriously boost a device’s AR capabilities. It can provide a sense of depth to AR creations — instead of them looking like a flat graphic. It also allows for occlusion, which is where any real physical object located in front of the AR object should, obviously, block the view of it — for example, people’s legs blocking out a Pokémon GO character on the street.”
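
Strip away the hardware, and the occlusion Marr describes comes down to a per-pixel depth comparison: a virtual pixel is drawn only if it sits closer to the camera than whatever the depth sensor measured at that spot. The sketch below illustrates that compositing step under simplified assumptions; real AR frameworks handle it internally, and the array names are hypothetical.

```python
# Rough sketch of depth-based occlusion: a virtual pixel is drawn only where it
# is closer to the camera than the real scene measured by the depth sensor.
# Names are hypothetical; production AR frameworks do this compositing internally.
import numpy as np

def composite_with_occlusion(camera_rgb, real_depth, virtual_rgb, virtual_depth):
    """Overlay virtual pixels onto the camera image wherever the virtual
    surface is nearer than the real world at that pixel."""
    visible = virtual_depth < real_depth        # per-pixel depth test
    out = camera_rgb.copy()
    out[visible] = virtual_rgb[visible]         # occluded pixels keep the camera feed
    return out

# Toy 2x2 frame: a real object 0.8 m away hides the top half of a virtual
# character placed 1.5 m away; the open area 3.0 m away does not.
camera_rgb    = np.zeros((2, 2, 3), dtype=np.uint8)      # camera feed (black)
real_depth    = np.array([[0.8, 0.8], [3.0, 3.0]])       # metres to real scene (LiDAR)
virtual_rgb   = np.full((2, 2, 3), 255, dtype=np.uint8)  # rendered character (white)
virtual_depth = np.full((2, 2), 1.5)                     # metres to character

frame = composite_with_occlusion(camera_rgb, real_depth, virtual_rgb, virtual_depth)
print(frame[:, :, 0])   # top row stays 0 (occluded); bottom row becomes 255
```

With a dense LiDAR depth map, that simple test is what lets a passerby’s legs appear in front of a virtual character on the street instead of behind it.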

Another broad upgrade to XR technologies, especially AR, is likely to come from 5G, which will boost the rate at which wireless data moves over networks.

“The adoption of 5G will make a difference in terms of new types of content being able to be viewed by more people,” Irena Cronin, CEO of Infinite Retina, a research and advisory firm that helps companies implement spatial computing technologies, said in a 2020 XR survey report. “5G is going to make a difference for more sophisticated, heavy content being viewed live when needed by businesses.”

Beyond technological hurdles, the AR sector still has to answer some more abstract questions on the consumer side: From a comfort and style perspective, do people really want to walk around wearing smart glasses or other wearable AR tech? (The failure of Google Glass suggests people were not quite ready in 2014.) What is the value proposition of AR for consumers? How will companies handle the ethical dilemmas associated with AR technology, such as data privacy, motion sickness, and the potential safety hazards created by tinkering with how users see, say, a busy intersection?

Despite the hurdles, it seems likely that the XR industry will steadily — if clumsily — continue to improve these technologies, weaving them into more aspects of our personal and professional lives. The proof is in your pocket: Smartphones can already run AR applications that let you see prehistoric creatures, true-to-size IKEA furniture in your living room, navigation directions overlaid on real streets, paintings at the Vincent Van Gogh exhibit, and, of course, Pokémon. So, what’s next?

The future of immersive experiences

When COVID-19 struck, it not only brought a surge in sales of XR devices and applications but also made a case for rethinking how workers interact in physical spaces. Zoom calls quickly became the norm for office jobs. But for some, prolonged video calls became annoying and exhausting; the term “Zoom fatigue” caught on and was even researched in a 2021 study published in Technology, Mind, and Behavior.

The VR company Spatial offered an alternative to Zoom. Instead of talking to 2D images of coworkers on a screen, Spatial virtually recreates office environments where workers — more specifically, their avatars — can talk and interact. The experience isn’t perfect: your avatar, which is created by uploading a photo of yourself, looks a bit awkward, as do the body movements. But the experience is good enough to challenge the idea that working in a physical office is worth the trouble.

That’s probably the most relatable example of an immersive environment people may soon encounter. But the future is wide open, and immersive environments may soon be put to work on a much wider scale.

Mirror world

But the biggest transformation XR technologies are likely to bring us is a high-fidelity connection to the “mirror world.” The mirror world is essentially a 1:1 digital map of our world, created by fusing all the data collected through satellite imagery, cameras, and other modeling techniques. It already exists in crude form: if you needed directions on the street, you could open Google Maps AR, point your camera in a certain direction, and your screen would show you that Main Street is 223 feet in front of you. But the mirror world will likely become far more sophisticated than that.

Through the looking glass of AR devices, the outside world could be transformed in any number of ways. Maybe you are hiking through the woods and you notice a rare flower; you could leave a digital note suspended in the air so the next passerby can check it out. Maybe you encounter something like an Amazon Echo in public and, instead of it looking like a cylindrical tube, it appears as an avatar. You could be touring Dresden in Germany and choose to see a flashback representation of how the city looked after the bombings of WWII. You might also run into your friends — in digital avatar form — at the local bar.

This future poses no shortage of troubling aspects, from privacy concerns and pollution from virtual advertisements to the still-unanswerable psychological consequences of creating such an immersive environment. But despite all the uncertainties, the foundations of the mirror world are being built today.

As for what may lie beyond it? Ivan Sutherland, the creator of The Sword of Damocles, once described his idea of an “ultimate” immersive display:

“…a room within which the computer can control the existence of matter. A chair displayed in such a room would be good enough to sit in. Handcuffs displayed in such a room would be confining, and a bullet displayed in such a room would be fatal. With appropriate programming such a display could literally be the Wonderland into which Alice walked.”

This article was originally published on our sister site, Big Think. Read the original article here.
