What is spatial computing and how does it work?


Spatial computing is a broad concept that combines the physical world with virtual content. It enables digital objects to exist in, and interact with, the physical environment as if they were actually there, and lets the user interact with those digital objects in turn.
Also: Meet Apple’s Vision Pro: Price, features, hands-on insights, and everything you need to know
In broad terms, spatial computing is a system that lets computers understand and interact with the physical environment around users, while users can in turn interact with virtual objects placed in that environment.
Spatial computing creates a seamless interaction between virtual and physical environments through software and hardware. You need a platform to make spatial computing possible. This platform senses real-world information through cameras and sensors, processes it in real time to understand the context of the captured space, and displays digital content over the physical surroundings, either overlaying information or creating completely virtual environments.
Also: I tried Apple Vision Pro for a weekend and here are my 3 biggest takeaways
This three-stage process requires advanced sensors, cameras, and complex algorithms that often use artificial intelligence for spatial reasoning and object recognition. It's how you can put on an AR headset and see a digital cat sitting on your physical coffee table, or text displayed only on the wall behind your couch.
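The three-stage loop described above (sense, understand, render) can be sketched in miniature. This is an illustrative toy, not a real spatial computing SDK: every class and function name here is hypothetical, the "sensor" is a hand-written grid of depths, and the plane detection is a crude heuristic standing in for the SLAM and machine-learning pipelines real headsets use.

```python
# Toy sketch of the sense -> understand -> render loop in spatial computing.
# All names here are hypothetical; real systems use SLAM and ML-based
# plane detection rather than this simple heuristic.
from dataclasses import dataclass
from typing import Optional


@dataclass
class DepthFrame:
    """Stage 1 (sense): simulated output of a depth sensor, a grid of
    distances (in meters) from the camera to the physical scene."""
    depths: list  # 2D list of floats


def detect_horizontal_surface(frame: DepthFrame) -> Optional[float]:
    """Stage 2 (understand): find a flat surface, such as a tabletop,
    by looking for a row of near-constant depth readings."""
    for row in frame.depths:
        if max(row) - min(row) < 0.01:   # nearly uniform depth => flat
            return sum(row) / len(row)   # mean distance to the surface
    return None


def place_virtual_object(surface_depth: float) -> dict:
    """Stage 3 (render): anchor a digital object (the 'cat on the coffee
    table') at the detected surface so it appears to rest there."""
    return {"object": "cat", "anchored_at_depth_m": round(surface_depth, 2)}


# A fake depth frame: the middle row is flat (a tabletop at 0.8 m).
frame = DepthFrame(depths=[
    [1.2, 1.5, 1.9],       # cluttered background
    [0.80, 0.80, 0.80],    # flat tabletop
    [2.1, 2.4, 2.0],
])

depth = detect_horizontal_surface(frame)
if depth is not None:
    print(place_virtual_object(depth))
    # -> {'object': 'cat', 'anchored_at_depth_m': 0.8}
```

The point of the sketch is the division of labor: sensing produces raw spatial data, understanding turns it into semantic structure (a surface), and rendering anchors digital content to that structure.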
Spatial computing examples include AR glasses, like the Ray-Ban Meta smart glasses and Xreal's Air 2 Ultra, and VR headsets, like the Meta Quest 3 and Apple Vision Pro. These spatial computers combine the physical world with a virtual one, whether by creating an immersive environment or overlaying information.
Virtual reality (VR) is one type of spatial computing. Blending the real world with digital experiences can be achieved through various extended reality (XR) technologies, including VR as well as augmented reality (AR) and mixed reality (MR).
XR is the broad umbrella term for technologies that digitally alter reality, and it encompasses the spatial computing technologies of VR, AR, and MR.
Also: The best AR glasses
VR: A technology used to create simulated spaces for immersive experiences. These simulated environments can be full 3D worlds or sections of virtual content overlaying the physical world. VR headsets typically have displays and a set of lenses for the eyes to see through. While VR headsets can show the physical world in front of the user, that view is typically recreated by cameras on the front of the device, because the user's eyes are completely covered by the headset's displays. Though Apple presents the Vision Pro as a headset whose apps overlay the wearer's surroundings, it achieves this with sensors and cameras, since the wearer can't see directly through the Vision Pro.
AR: Creates an interactive experience by overlaying digital content, like images and text, onto the user's physical environment. AR glasses, often called smart glasses, don't obstruct the wearer's view. They often sport darker lenses than prescription or reading glasses so the projected content appears bright enough. The AR experience also extends to smartphones and tablets: anytime you use a shopping app to see what a piece of furniture will look like in your living room, you're using AR.
MR: Blends elements of both AR and VR. MR gives the wearer a view of the real world with digital objects overlaid, but it also lets those objects interact with the physical world. MR is more immersive than AR, but not a fully immersive experience like VR.
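The key difference between AR and MR described above, that MR objects respond to sensed physical geometry rather than simply being drawn over it, can be shown with a tiny hypothetical example. The function names and the detected floor height are invented for illustration; a real MR runtime would get surface geometry from its depth sensors.

```python
# Toy illustration (hypothetical names): AR draws a virtual ball wherever
# the app placed it; MR clamps the ball against real-world geometry that
# the device's sensors have detected.

def ar_overlay(ball_height_m: float) -> float:
    """AR: the ball is rendered at its app-defined height, even if that
    puts it 'inside' a physical table."""
    return ball_height_m


def mr_place(ball_height_m: float, detected_surface_m: float) -> float:
    """MR: the ball cannot pass through the sensed physical surface,
    so it comes to rest on top of it."""
    return max(ball_height_m, detected_surface_m)


surface = 0.75                    # sensed tabletop height in meters
print(ar_overlay(0.2))            # AR: ball drawn inside the table -> 0.2
print(mr_place(0.2, surface))     # MR: ball rests on the table -> 0.75
```

The design point: AR only needs the user's viewpoint, while MR also needs a model of the room, which is why MR devices lean so heavily on the sensing and understanding stages described earlier.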
Though Apple has popularized the term recently, "spatial computing" has been around for decades. One of the first formalizations of the term came in 2003 from Simon Greenwold, who presented a thesis on his work in spatial computing.
Also: MIT Reality Hack revealed the momentum building in VR, AR, and XR
Greenwold highlighted the move toward more immersive computing environments, describing spatial computing as the point where physical space becomes a medium for interacting with digital information.
