At #MSBuild2017, Microsoft’s Alex Kipman laid out the company’s Mixed Reality vision, built around what he called the “Mixed Reality Spectrum”
I first got in touch with Alex Kipman back in 2008, when I was writing an article for a gaming magazine about a secretively code-named “Project Natal,” which later became the Kinect motion-sensing system for the Xbox 360. For me, it was interesting to see how this rising star at Microsoft – a Brazilian expat like myself – was finding new and intuitive ways for humans and machines to interact. And although the focus of that technology was ostensibly on gaming, one couldn’t fail to spot broader potential applications down the line if Microsoft pulled it off.
Fast-forward to Microsoft Build 2017, and Kipman is something of a rock star figure, cheered enthusiastically as he took to the main stage for his day 2 keynote on Mixed Reality. Because if Microsoft now believes that MR represents “the new frontier of computing”, it’s in large part down to him pushing and developing that vision over the past decade. He is ultimately working towards a world where we can interact with computers in as natural a way as possible, and all devices eventually become lenses.
In his keynote Kipman also took the opportunity to address the sometimes controversial terminology surrounding that vision, referring to a “Mixed Reality Spectrum”, which goes all the way from simpler forms of augmented reality (think Pokémon Go) to fully immersive virtual reality to 3-D interactive holograms. It boils down to thinking about the interaction between the virtual and real worlds in terms of “and” instead of “or,” he said.
In keeping with that strategy, Microsoft unveiled hardware and platform features designed to support the growth of what it now officially calls “Windows Mixed Reality.” These moves are meant to enable developers to create compelling content that will in turn drive broader consumer adoption, and they treat all forms of virtual and augmented reality as part of this broader continuum.
And as much as some sceptical pundits might dismiss this terminology as a marketing play, it is actually grounded in a widely cited paper on the subject by Paul Milgram and Fumio Kishino. Even back in 1994, when “A Taxonomy of Mixed Reality Visual Displays” was first published, it noted the difficulty of establishing clear-cut categories that differentiated between these different types of virtual experience, and predicted that in future they would increasingly blend into one another.
What people like Kipman argue is that different headsets and different experiences will mix the physical and virtual realities to varying degrees, and that each of those experiences will occupy a certain place on the Mixed Reality spectrum. We will increasingly have more versatile hardware and content that allow users to navigate that spectrum seamlessly, so there will be no need to worry about which type of headset you have. The best possible experience will automatically be delivered to that head-mounted display (HMD).
This context helps us make sense of why Microsoft insists on calling the headsets unveiled at Build “Mixed Reality” rather than “Virtual Reality” – which is what most people would term the fully occluded experience they offer at this stage. Built in partnership with Acer and HP, these use the same groundbreaking inside-out tracking technology developed for the HoloLens. Developers can now pre-order them ahead of a consumer launch scheduled for the 2017 holiday season. Microsoft also plans to release a set of Mixed Reality motion controllers which – unlike those of the HTC Vive and Oculus Rift – will not require any external tracking sensors, relying instead on the headset’s own inside-out tracking.