Recently I have been checking out what capabilities the Meta Quest Pro can provide if you start thinking about using Meta Quest devices for Mixed Reality. Already in my first look at the Meta Quest Pro I noted that the passthrough option it provides leaves a lot of room for improvement. The passthrough created by the Meta Quest Pro cameras is only usable for being aware of your surroundings, because the image it provides is so low-resolution and noisy that it is really difficult to read anything through it. Meta Quest 3 is coming at the end of this year, and it should have the same Mixed Reality technology as the Meta Quest Pro (MQP). People living in the US and UK can get the MQP with a nice $400 discount for a limited time, in case you are still pondering. Quest 3 won’t have the eye and face tracking features the MQP has, according to the info released so far. Quest 3 will surely have a much nicer price point – probably landing in that $400–$500 ballpark.
The Mixed Reality use case for the Meta Quest Pro (and eventually the Quest 3) is not a total miss, but the passthrough does render the MQP unusable in industrial, healthcare and other settings that require people to be aware and sharp about their surroundings – and able to read what is written or shown on screens or signs. I would not use the MQP passthrough for long, or try to walk anything but short distances with it. I read an article where they had tested the MQP passthrough and determined that it sees the environment as badly as a severely visually impaired person does: “..users of a Quest Pro are almost blind or severely visually impaired while using passthrough.” Yes, you could even use the MQP as a simulator to see how accessible your environment is for the visually impaired. That applies mostly to seeing details – you can be aware of your surroundings a lot better.
What does that leave for the Meta Quest Pro in Mixed Reality, then? Everything else. You can blend digital information into the physical world and avoid colliding with physical obstacles. That is – you are aware enough of your surroundings that you don’t trip over a table, miss a step on the stairs or walk into a wall. You can also use the environment, for example when you are designing an office room or your apartment decor. You may want to see whether furniture fits in, or whether the space becomes too crowded. You can also use apps that require (or benefit from) you moving around the room – the passthrough helps you do just that.
It also depends heavily on how and for what you use it. Sometimes it is enough for Mixed Reality to just place some objects on the floor or ceiling. In this case you focus on the virtual content, but the room/space is there for reference. And you avoid bumping into chairs, tables, people or walls. I had fun testing out Joost’s (Mixed Reality MVP) application where you can track flights taking off and landing at various airports. In this case the airport map was on the floor and I was able to move in close to the planes as needed. Using hand gestures on the Meta Quest Pro works really nicely in this setup. And I avoided colliding with any furniture!
The quality of the virtual content isn’t reflected well in these screenshots taken with the Meta Quest. The colors, contrast and usability are much, much better than these pictures let you understand. But you can see the background quality isn’t that good – however, in this app example it didn’t matter. In fact it worked out very well – I wasn’t trying to read anything through Mixed Reality, only to blend MR content into the real world and avoid colliding with objects and walls while using the device.
Meta Quest has three main areas to note for Mixed Reality development.
Passthrough is not just about the quality of the camera feed, but also about how it is rendered. Pay attention to this: “An app cannot access images or videos of a user’s physical environment”. This means that developers don’t have direct access to the camera feeds (raw data) that could then be fed to AI or cognitive services. Object identification with the Meta Quest Pro therefore cannot be done – at least not under that specification. The passthrough render quality is sketchy at best anyway, so even if you could feed it to AI, identifying specific items in it would be difficult. This is of course a software choice by Meta; perhaps they will re-evaluate these capabilities later. Currently you can only develop apps that don’t rely on AI identifying details in your surroundings.
Spatial Anchors mean you can lock in specific coordinates in the real world that act as references for content in mixed reality. They are origin points and allow you to build the experience around those anchors. These anchors can also be persistent and shared, so in a sense they are building blocks for metaverse experiences. For example, you could leave some virtual items on the physical table in your room and some on the bookshelf. You then log out, and when you log back in you see the virtual content in the real-world locations where you left it. These items have been spatially anchored. Anchors like these have been used in mixed reality applications for a long time, and Meta Quest is no exception. You can build experiences with content that has been spatially anchored to specific places in the real world. It could be a viewscreen, art, a bot or whatever suits your experience – think of a virtual guide at a museum, standing next to a painting and ready to tell visitors about the painting’s history.
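The log-out/log-in scenario above boils down to persisting an anchor’s identity and pose, then restoring it in a later session. Here is a minimal conceptual sketch in Python – the class and method names are hypothetical illustrations of the idea, not the actual Presence Platform API (which you would use through Unity):

```python
import json
import uuid
from dataclasses import dataclass, asdict

@dataclass
class SpatialAnchor:
    """A real-world reference point: an id plus a pose (position + rotation)."""
    id: str
    position: tuple  # (x, y, z) in metres, relative to the tracking origin
    rotation: tuple  # quaternion (x, y, z, w)

class AnchorStore:
    """Persists anchors so virtual content reappears where the user left it."""
    def __init__(self, path):
        self.path = path
        self.anchors = {}

    def create_anchor(self, position, rotation=(0.0, 0.0, 0.0, 1.0)):
        anchor = SpatialAnchor(str(uuid.uuid4()), tuple(position), tuple(rotation))
        self.anchors[anchor.id] = anchor
        return anchor

    def save(self):
        # Persist all anchors; a real runtime stores them on-device or in the cloud.
        with open(self.path, "w") as f:
            json.dump([asdict(a) for a in self.anchors.values()], f)

    def load(self):
        # Restore anchors from a previous session by their ids.
        with open(self.path) as f:
            self.anchors = {
                a["id"]: SpatialAnchor(a["id"], tuple(a["position"]), tuple(a["rotation"]))
                for a in json.load(f)
            }

# Leave a virtual item "on the table", log out, log back in:
store = AnchorStore("anchors.json")
table = store.create_anchor((1.2, 0.75, -0.4))  # approximate table-top pose
store.save()

restored = AnchorStore("anchors.json")
restored.load()
print(restored.anchors[table.id].position)  # -> (1.2, 0.75, -0.4)
```

The key design point is that the anchor id is the stable handle: content is attached to the id, and the runtime is responsible for re-localizing that id to the same physical spot in the next session.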
The final piece of Meta Quest development is the Scene. Unlike HoloLens 2, which can identify the room (walls, ceiling, floor) by itself, Meta Quest devices need a human to help set up the room by telling the device where the walls are and adding tables and other furniture as well. The result is basically the Scene Model.
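Conceptually, the Scene Model is a queryable list of user-labelled room elements. The sketch below illustrates that idea in plain Python – the names and structure are my own simplification for illustration, not the actual SDK types:

```python
from dataclasses import dataclass, field

@dataclass
class SceneElement:
    """One user-labelled element of the room: a wall, the floor, a table, ..."""
    label: str     # semantic label assigned during the manual room setup
    center: tuple  # (x, y, z) centre of the element in metres
    size: tuple    # (width, height, depth) bounding box

@dataclass
class SceneModel:
    """The room as the user described it during setup."""
    elements: list = field(default_factory=list)

    def add(self, label, center, size):
        self.elements.append(SceneElement(label, center, size))

    def find(self, label):
        """Query elements by semantic label, e.g. every 'table' to place content on."""
        return [e for e in self.elements if e.label == label]

# A room the user set up by hand: one wall, the floor, and a table.
room = SceneModel()
room.add("wall", (0.0, 1.5, -3.0), (6.0, 3.0, 0.1))
room.add("floor", (0.0, 0.0, 0.0), (6.0, 0.0, 6.0))
room.add("table", (1.2, 0.75, -0.4), (1.5, 0.75, 0.8))

tables = room.find("table")
print(len(tables))  # -> 1
```

An app then builds on these semantic queries: snap a virtual screen to a “wall”, spawn props on the “floor”, or place items on every “table” – without ever seeing the camera feed itself.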
Using the Scene Model and Spatial Anchors you have the building blocks to create a mixed reality experience. To start experimenting or creating your first apps for Meta Quest, you can check out the sample application The World Beyond, which demonstrates how to use these building blocks together. You can also get its source code from GitHub to learn how these applications are built.
Of course that is just the Mixed Reality part – the world experience. The important part is people, and for that you need to learn about the Movement SDK with its body, face and eye tracking. The Movement SDK, Passthrough, Scenes and Spatial Anchors are all part of the Meta Presence Platform. There are also the Meta Quest Reference Documents that will help you learn more.
Meta Quest Pro’s potential in Mixed Reality is about how it can relay our presence in those experiences – how well it can read our facial expressions and transfer them to avatars. It is not about creating industrial solutions (where you need to be able to read what’s around you and see the real world clearly), but about offices and other spaces where solutions benefit from social interaction. Powering us to work better and using Mixed Reality to enhance collaboration, while helping us stay aware of our surroundings – to avoid bumping into each other or tripping over tables. In many ways the MQP is just a Virtual Reality headset with some Mixed Reality capabilities – something that will be used during this year and next as the Metaverse expands to homes, offices and other locations.
For industrial solutions you need to look at HoloLens 2 (and 3 in the future, fingers crossed!), Magic Leap 2 and similar Mixed Reality headsets or glasses. The difference is that while cameras could be used to create good quality passthrough, it seems that such high quality cameras won’t be included in headsets just yet.