Thu. Mar 23rd, 2023
This image from Microsoft Research's "Mano a Mano" paper shows how three projector/Kinect pairs can create believable 3D virtual objects from two different perspectives.


Microsoft may be taking an official wait-and-see approach before following companies like Oculus and Sony down the virtual reality headset path. That hasn’t stopped the company’s research division from exploring interesting ways to use Kinect and projector technology to create holodeck-like augmented reality experiences in the living room, though. Microsoft Research has prepared some interesting demos and papers along these lines for the Association for Computing Machinery’s User Interface Software and Technology Symposium, showing how far those efforts have come and how they could lead to exciting new forms of gaming in the future.

The first project, RoomAlive, promises to “transform any room into an immersive augmented virtual gaming experience,” as the researchers put it. The system uses six paired projector/Kinect units mounted on the ceiling so that their fields of view overlap slightly. The units automatically calibrate themselves using a series of projected light patterns, converting their individual Kinect depth maps into a unified 3D point-cloud model of the room.
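The core of that calibration step is back-projecting each unit’s depth image into 3D and transforming the result into a shared room coordinate frame. Here is a minimal sketch of that idea, assuming a simple pinhole camera model and already-known per-unit poses (the function names and parameters are illustrative, not from the paper):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into camera-space 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop invalid (zero-depth) pixels

def merge_units(depth_maps, intrinsics, extrinsics):
    """Transform each unit's points by its calibrated pose and concatenate.

    `extrinsics` holds one (R, t) rotation/translation pair per unit,
    mapping that unit's camera frame into the shared room frame.
    """
    clouds = []
    for depth, (fx, fy, cx, cy), (R, t) in zip(depth_maps, intrinsics, extrinsics):
        pts = depth_to_points(depth, fx, fy, cx, cy)
        clouds.append(pts @ R.T + t)   # camera frame -> shared room frame
    return np.vstack(clouds)
```

In the real system, the (R, t) poses are what the projected light patterns recover; here they are simply passed in.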

From there, RoomAlive translates the point data into a series of vertical and horizontal surfaces representing the walls and furniture, then imports that geometry into the Unity game engine as a 3D environment. Using that virtual representation of the room, the system figures out how to project a unified image onto those walls and surfaces, distorting the projection so it appears correct on each one. The effect is like transforming the entire room into a computer monitor, complete with player tracking via the array of Kinect cameras.
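The “distorting the projection” step boils down to answering one question for every point on a surface: which projector pixel lights that point? A hedged sketch of that mapping, again assuming a pinhole projector model (not the paper’s actual implementation):

```python
import numpy as np

def project_to_pixels(points_world, R, t, fx, fy, cx, cy):
    """Map 3D room points into projector pixel coordinates.

    Treating the projector as an inverse pinhole camera: transforming a
    surface point into the projector's frame and dividing by depth gives
    the pixel that illuminates it. Rendering the desired image through
    this mapping is what "pre-distorts" it so it looks right on the wall.
    """
    cam = points_world @ R.T + t          # room frame -> projector frame
    u = fx * cam[:, 0] / cam[:, 2] + cx
    v = fy * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=-1)
```

In practice this is done on the GPU via projective texturing over the reconstructed room mesh, but the per-point math is the same.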

In addition to some non-interactive demos, MSR showed off a few game concepts using the system. In one “whack-a-mole” game, users can touch or shoot creatures that appear on the wall, with their movements tracked by Kinect. In another, a character controlled with a handheld gamepad runs across the wall, onto a table, then onto the floor while being chased by robots. The final demo places virtual nail traps on the wall for users to dodge, bathing the room in red if they are hit.

Dual perspectives and finger detection

In a similar ACM demo, called Mano-a-Mano, a team of MSR researchers uses a trio of projector/Kinect combinations to create an augmented reality effect that provides correct three-dimensional perspectives for two different users. Each projector displays virtual objects against the walls, floor, and fixtures in a room in such a way that they appear to float in the center of the room. The apparent perspective and size of those virtual objects change as Kinect detects each user’s position and head angle, creating the illusion of true depth and position in the middle of the space.
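The illusion for a single user is straightforward geometry: to make an object appear to float at a point in mid-air, draw it where the line from the tracked eye through that point hits the wall. A minimal sketch of that ray-plane intersection (the names and the flat-wall assumption are mine, not the paper’s):

```python
import numpy as np

def wall_point_for_illusion(eye, virtual, plane_point, plane_normal):
    """Where on a wall must we draw so `virtual` appears to float in mid-air?

    Cast a ray from the tracked eye position through the desired virtual
    object position and intersect it with the wall plane. Drawing at the
    intersection makes the object appear at `virtual` from that eye,
    which is why the image must be redrawn whenever the head moves.
    """
    d = virtual - eye                                      # ray direction
    s = np.dot(plane_point - eye, plane_normal) / np.dot(d, plane_normal)
    return eye + s * d
```

Run per frame against the tracked head position, this is what makes the projected object's apparent position stay fixed in space as the user moves.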

That’s a neat fake-3D solution for a single user, but how can such a system let two people view a virtual object from different angles? That’s where the multi-projector setup comes in, giving each user their own take on the virtual scene. By “assuming each user is unaware of images being projected onto the wall behind them or their own body,” as the researchers explain, the system can show two different perspectives of the same scene, each of which looks correct to its intended user. In the demo, the system powers a simple catch game and a “battle game” where a user can summon fireballs in their hand and throw them at the user across the room.

The last of MSR’s ACM demos that might be of interest to gamers and game makers is Handpose, a system that adds a degree of detail and articulation to Kinect-based hand and finger tracking. A new tracking algorithm appears to allow researchers to discern individual fingers and hand gestures in much greater detail than previously possible with a standard Kinect v2 sensor.

Users are shown striking complex finger poses at many different angles, while the tracking system follows those positions quickly and accurately in a 3D model of the hand. This tracking is “robust against tracking failure,” works up to “several feet” away from the sensor, and keeps working even if the tracking camera itself moves, the researchers say. A video demo shows users easily grabbing and moving virtual objects simply by pinching their fingers together and apart.
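Once individual fingertips can be tracked reliably, that pinch-to-grab interaction can be surprisingly simple. A hedged sketch of one obvious approach, using a fingertip-distance threshold (an illustration of the interaction, not Handpose’s actual gesture logic):

```python
import numpy as np

def is_pinching(thumb_tip, index_tip, threshold=0.03):
    """Detect a pinch 'grab' gesture from two tracked fingertip positions.

    With per-finger tracking, a grab can be as simple as checking whether
    the thumb and index fingertips are within a few centimeters (here,
    3 cm by default) of each other; releasing is the inverse test.
    """
    gap = np.linalg.norm(np.asarray(thumb_tip) - np.asarray(index_tip))
    return bool(gap < threshold)
```

A coarse skeleton from the standard Kinect SDK cannot resolve fingertips well enough for this; the finer hand model is what makes such a trivially simple gesture test workable.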

Coming to a store near you… never?

https://www.youtube.com/watch?v=sJ4hWa6y710

These kinds of augmented reality experiments aren’t exactly new for Microsoft and Microsoft Research. MSR’s latest demos build on IllumiRoom, an impressive demonstration from last year that showed projectors being used to take gaming action beyond the confines of a TV screen. And let’s not forget that Microsoft’s 2012 “Project Fortaleza” leak and subsequent patents both point to interest in heads-up augmented reality displays.

Of course, no real products, or even real hints of real consumer products, have emerged from those revelations so far. Microsoft Research’s efforts are only loosely tied to the company’s consumer-facing divisions, and these proof-of-concept demonstrations should not be taken as indications that the Xbox division will be moving in this direction anytime soon. Even if they did, the technology would have to get a lot smaller and less expensive before the average consumer would be willing to mount three to six projector/camera combinations on the ceiling.

Still, it’s nice to see at least one Microsoft division pushing the boundaries of gaming beyond the flat screens and controllers we’re used to. If we one day move into a world where projector- and tracker-based gaming is a viable consumer reality, this kind of fundamental research will have sown the seeds.

By akfire1
