The Matrix effect
Today I received a video from a friend about the new Mario Kart Home Circuit trailer, and, as I was studying the AR/VR market for an MBA training course, it resonated with me.
I was making the point that, during CES 2020, the boundary between AR and VR was fading with the emergence of businesses mixing both capabilities.
After a first generation of products like the GoPro camera and the Google Cardboard that stimulated the market, the 2019 CES presented several products that facilitated capturing the environment from different angles:
- 360-degree cameras made it possible to capture the environment from sticks or drones, with post-processing to remove traces of the camera from the recording
- Scanners and integration software (such as smartphone applications) made it possible to capture 3D volumes (like buildings in cities)
- Stream-processing software made it possible to remove unwanted items (like cars on a freeway) from the video
At the core of AR is an algorithm that associates physical coordinates with the video stream in real time. This is used to map 2D or 3D animations onto the environment, creating an impression of virtual reality by enhancing the view with additional information or characters.
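To make this concrete, here is a minimal sketch of that core geometric step: projecting a point with known physical coordinates into the camera image with a pinhole camera model, so that a virtual object can be drawn at the right pixel. The camera parameters and anchor position are made up for the example.

```python
# Minimal sketch (hypothetical values): projecting a 3D anchor point,
# expressed in physical (world) coordinates, into the camera image.
import numpy as np

def project_point(p_world, R, t, fx, fy, cx, cy):
    """Map a 3D world point to 2D pixel coordinates (pinhole camera model).

    R, t   : camera pose (rotation matrix and translation vector)
    fx, fy : focal lengths in pixels; cx, cy : principal point
    """
    p_cam = R @ p_world + t          # world frame -> camera frame
    x, y, z = p_cam
    if z <= 0:
        return None                  # point is behind the camera
    u = fx * x / z + cx              # perspective division + intrinsics
    v = fy * y / z + cy
    return (u, v)

# Example: an anchor 2 m straight in front of a camera at the origin.
anchor = np.array([0.0, 0.0, 2.0])
R = np.eye(3)                        # no rotation
t = np.zeros(3)                      # camera at the origin
print(project_point(anchor, R, t, fx=800, fy=800, cx=320, cy=240))
# -> (320.0, 240.0): drawing the character at that pixel makes it
#    appear "attached" to the physical anchor as the camera moves.
```

A real AR engine estimates R and t continuously from the video (visual-inertial tracking); this sketch only shows why knowing them lets the overlay stick to the world.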
Most AR solutions use the environment information but do not memorize it, as the focus is on matching objects and retrieving information about them (for example, a surface to place 3D characters on, or a hand to capture an interaction).
In contrast, VR creates a reality for the user and immerses them in it, using either eye tracking or motion sensors to acquire positional information. The world is either pre-defined by a blueprint or generated by an algorithm as the user discovers it.
We already see 360-degree VR videos created by users, like this video of a Navy Rafale take-off that immerses the viewer in a captured reality.
But the fundamental difference is that, as with AR, the localization information of the video is not memorized, so a virtual world cannot be created from it.
In the movie The Matrix, John C. Gaeta created a striking innovation with the Bullet Time effect, which combines a time freeze with travel through space. I consider this effect disruptive because it questions the separation between AR and VR.
To recreate such an effect, you need to capture the environment and then immerse the user in it.
In this new product from Nintendo, we see a first step toward live integration: the camera captures the possible layout of the circuit, and the car drone then moves through the virtual circuit, enhancing the real view (players can see themselves) with virtual rewards.
But, if you have ever watched model car racing, you will see the drawback of this Nintendo product: since the car drone is directly controlled by the player, a significant amount of handling is needed for cars that crash into a wall, flip onto their back, or get stuck on the circuit.
Still, this combination of technologies opens the door to many possibilities toward the Matrix Bullet Time effect.
What if the car drone were equipped with a 360-degree camera? The environment could be captured and rendered as an immersive view in which a second person could “be in the car” while the player controls it.
A VR simulation of this concept was presented by Audi in Oslo in 2017, where users could arrange a sandbox layout and then drive a simulated car through it.
What if the trajectory of the car drone were captured by motion sensors and integrated? Then the environment could be reconstructed in a navigable form, and people could navigate through it virtually. This process, often used for virtual house tours, could be extended to create scenes like the attack on the Death Star in Star Wars.
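As a rough illustration of what “integrating” the trajectory means, here is a sketch of dead reckoning from accelerometer samples. The sensor values and sample rate are invented for the example, and a real system would also need to fuse other sensors to correct the drift this naive double integration accumulates.

```python
# Hedged sketch: turning motion-sensor (accelerometer) samples into a
# navigable trajectory by integrating twice -- dead reckoning.
import numpy as np

def integrate_trajectory(accel_samples, dt):
    """Integrate acceleration (m/s^2) into positions (m), starting at rest."""
    velocity = np.zeros(3)
    position = np.zeros(3)
    path = [position.copy()]
    for a in accel_samples:
        velocity += np.asarray(a) * dt   # first integration: a -> v
        position += velocity * dt        # second integration: v -> p
        path.append(position.copy())
    return np.array(path)

# 1 s of constant 1 m/s^2 acceleration along X, sampled at 100 Hz:
samples = [[1.0, 0.0, 0.0]] * 100
path = integrate_trajectory(samples, dt=0.01)
print(path[-1])   # roughly [0.5, 0, 0], matching p = a * t^2 / 2
```

Each stored position is a point on the path the drone actually drove, which is exactly the data a virtual-tour-style navigation needs.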
What if two car drones were operated in sync? Having two video sources would make it possible to generate a 3D model of the environment in real time and give physical dimensions to its elements. Since the objects of the environment are then represented in space with 3D information, they can be enhanced with behaviors more consistent with their nature.
For example, a virtual snake placed in a box would only become visible when you look inside the box or move next to it.
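The depth part of this two-camera idea can be sketched with classic stereo triangulation: when two synchronized cameras are separated by a known baseline, the shift (disparity) of a feature between the two images gives its physical distance. The numbers below are purely illustrative.

```python
# Illustrative sketch: metric depth from two synchronized cameras.
#     z = f * B / d
# f : focal length in pixels, B : baseline between cameras (m),
# d : disparity (pixel shift of the same feature between the images).
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate metric depth from stereo disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A feature seen at x=400 px in the left image and x=380 px in the right:
z = depth_from_disparity(focal_px=800, baseline_m=0.10, disparity_px=400 - 380)
print(f"{z} m")   # 800 * 0.10 / 20 = 4.0 m
```

This is what gives virtual objects real physical dimensions: knowing the box is 4 m away (and how big it is) is what lets the snake hide inside it plausibly.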
By taking the concept of this Nintendo car and extending it with a 360-degree view and integration techniques, we can foresee a complete shift of gaming toward the principle of a game master (as in Dungeons & Dragons, D&D).
The game master could pilot the drone, acquiring the environment in real time and defining the scene of the game, while the players pilot their digital characters through the scene in immersive VR. AI enhancements could dynamically add characters to the scene during play (for example, running a scene on a tennis court and replacing every tennis ball with an angry alien or a firing cannon).
This shift would simplify the creation of virtual scenes and put the players at the center, letting them act as game masters and build semi-virtual worlds from the environment around us.
It could create a buzz similar to that of bloggers who broadcast their play, with live interaction where followers can join the game directly. It is also a creativity booster for young (and not so young) players to arrange a circuit that can later be used for a virtual race.
In our current social-distancing context, it also brings a concrete and personal touch to gameplay, where the game master “invites” players to their house, having prepared a new circuit for the afternoon’s play.
If we look further, in VR experiences like The Void, the virtual experience is enhanced by the physical layout of a room (like a laser-tag arena) that provides sensations of temperature, heat, and physical touch. But, despite the amazing experience, the need for a physical layout and a scripted scenario limits the scalability of the model.
At CES 2020, several providers demonstrated products that provide physical feedback from equipment: gloves for heat, cold, and pressure simulation; body equipment for impact simulation; surround sound and vibration to simulate the location of characters.
If these products can be leveraged to enhance such interactive experiences, a whole new world of interactive gaming lies ahead of us 🙂