Here’s how Apple Vision Pro works: you navigate using just your eyes, hands and voice

After about 30 minutes of demos walking through its key features, I came away convinced that Apple has delivered nothing less than a genuine leap forward in the capability and performance of XR, or mixed reality, with its new Apple Vision Pro.

To be super clear, I’m not saying that it delivers on all of its promises, that it is a truly new paradigm in computing, or any of the other high-powered claims Apple will be expected to deliver on once it ships. I’d need a lot more time with the device than a guided demo to make those calls.

But I’ve used essentially every major VR headset and AR device since 2013’s Oculus DK1 through the latest generations of Quest and Vive headsets. I’ve tried all of the experiences and stabs at making XR work. My hopes have been raised and dashed over and over as the developers of those devices, their software and their marquee apps have chewed on the “killer app” puzzle, trying to find something with genuinely wide appeal.

There have been a few real social, narrative or gaming breakthroughs, like Gorilla Tag, VRChat and Cosmonius. I’ve also been struck by Sundance filmmakers’ first-person experiences shedding light on the human (or animal) condition.

But none of them have the advantages that Apple brings to the table with the Apple Vision Pro. Namely, the 5,000 patents filed over the last few years and a huge base of talent and capital to work with. Every bit of this thing reflects Apple-level ambition. I don’t know whether it will be “the next computing mode,” but you can see the conviction behind every choice made here. No corners were cut; this is full-tilt engineering on display.

The hardware is good – very good – with 24 million pixels across the two panels, far more than any headset most consumers have ever been exposed to. The optics are better, the headband is comfortable and quickly adjustable, and there’s a top strap for weight relief.

Apple says it’s still working out which Light Seal options will ship with the device when it’s officially released, but the default was comfortable for me. The company aims to offer them in different sizes and shapes to fit different faces. The power connector has a clever design as well, interconnecting via an internal pin-type linkage secured by an external twist lock.

There is also a magnetic solution for some (but not all) of the optical adjustments that people with vision differences may need. The onboarding experience features automatic eye-relief calibration that matches the lenses to the center of your eyes; there is no manual wheel adjustment here.

The main frame and glass piece look fine, though it’s worth noting that they’re on the larger side. Not overwhelming, really, but definitely present.

If you’ve had any experience with VR, you know that the two big deterrents for most people are either latency-induced nausea or the dissociation from your surroundings that long sessions can cause, both of which wear on you over time.

Apple has knocked down both of those barriers. The R1 chip that sits alongside the M2 chip processes the sensor feeds with a system-wide latency Apple puts at 12 milliseconds, and I didn’t notice any judder or frame drops. There was a slight motion-blur effect in passthrough mode, but it wasn’t distracting. The virtual windows themselves stayed rock steady all around me.
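To give that 12 ms figure some context, here’s a back-of-the-envelope sketch with assumed numbers (the refresh rate and the often-cited ~20 ms comfort threshold are my assumptions, not Apple specs):

```python
# Rough illustration: why a 12 ms sensor-to-display pipeline matters.
# VR research commonly cites roughly 20 ms of motion-to-photon latency
# as the point where lag becomes perceptible and nausea risk rises.
display_hz = 90                      # assumed refresh rate for illustration
frame_budget_ms = 1000 / display_hz  # time available to render one frame
passthrough_latency_ms = 12          # Apple's stated figure

print(f"Frame budget at {display_hz} Hz: {frame_budget_ms:.1f} ms")
print(f"12 ms is about {passthrough_latency_ms / frame_budget_ms:.2f} "
      f"frames of delay, well under a ~20 ms comfort threshold")
```

In other words, the stated passthrough delay is roughly a single frame at typical VR refresh rates, which is consistent with the lack of judder I experienced.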

Of course, Apple was able to mitigate those issues because of a lot of new and original hardware. Everywhere you look there is a new idea, a new technology or a new implementation. All that novelty comes at a price: $3,500 is on the high end of expectations and places the device firmly in the power user category for early adopters.

Eye tracking and gesture control are near perfect. Hand gestures are picked up from almost anywhere around the headset, including with your hands resting in your lap or down low on the arm of a chair or couch. Many other hand-tracking interfaces force you to hold your hands up in front of you, which is tiring. Apple has dedicated high-resolution cameras on the bottom of the device just to track your hands. Similarly, an eye-tracking array means that, after calibration, anything you look at is highlighted accurately. A simple, low-effort tap of your fingers together and boom, it does the job.
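The interaction model is easy to describe in code. This is a hypothetical sketch of the gaze-plus-pinch pattern as I experienced it, not Apple’s actual API: gaze continuously moves a highlight, and a pinch commits whatever is highlighted.

```python
# Hypothetical model of Vision Pro's gaze-plus-pinch input (illustrative only).
class GazePinchInput:
    def __init__(self):
        self.highlighted = None  # whatever the eyes are currently resting on

    def gaze(self, target):
        """Eye tracking continuously updates the highlighted element."""
        self.highlighted = target

    def pinch(self):
        """A low-effort finger tap activates whatever is highlighted."""
        return self.highlighted

ui = GazePinchInput()
ui.gaze("Photos icon")    # look at an icon: it highlights
ui.gaze("Safari icon")    # look elsewhere: the highlight follows your eyes
selected = ui.pinch()     # pinch selects the item you're looking at
```

The key design point is that the eyes do the pointing and the hands only confirm, which is why your hands can stay in your lap instead of hovering in midair.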

Having a real-time 4K view of the world around you, including any humans in your personal space, is very important for long-term VR or AR wear. There’s a deep animal-brain thing in most humans that makes us really uncomfortable if we can’t see our surroundings for long stretches. Addressing that concern with a crisp, low-latency image should improve the odds of people using the device for extended sessions. There’s also a clever “breakthrough” mechanism that automatically fades a person approaching you in through your content, alerting you to their presence.
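Apple hasn’t published how the breakthrough effect is triggered, but the behavior I saw suggests a simple distance-based blend. A hypothetical sketch (the distance thresholds are my invention):

```python
# Hypothetical "breakthrough" rule: as a detected person approaches,
# fade them in through the virtual content in proportion to proximity.
def breakthrough_opacity(person_distance_m, near=1.0, far=3.0):
    """Return visibility of an approaching person: 0.0 = hidden behind
    content, 1.0 = fully broken through into view."""
    if person_distance_m >= far:
        return 0.0
    if person_distance_m <= near:
        return 1.0
    return (far - person_distance_m) / (far - near)
```

So someone four meters away stays hidden, someone half a meter away breaks fully through, and someone two meters away is half-blended, which is enough to register in your peripheral awareness.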