Thu. Jun 1st, 2023

Video shot and edited by Justin Wolfson.

My first experience with a hologram was, like so many other people’s, completely fictional: a small, blue figure projected from R2-D2 in the original Star Wars. About a decade later, I got a taste of the real-world state of the art from the Museum of Holography in New York City, now closed. Holograms existed in all their 3D glory, but they were static. A hologram displayed the single image it was created with, and that was it. No animated messages from princesses.

But since then there has been progress. Holographic screens with actual refresh rates (albeit excruciatingly slow ones) and other approaches have been described, but products based on them have yet to appear. Meanwhile, non-holographic approaches to 3D have really taken off. TV and movie screens provide 3D viewing with simple glasses but don’t allow interaction. Immersive goggles and gear allow for interaction, but only for the people wearing them, isolating those users from everyone around them.

So we were intrigued when Shawn Frayne, founder of Brooklyn-based company Looking Glass, offered us the chance to look at what the company calls the Holoplayer One. It’s a 3D projection system that lets users interact with the projected images without glasses or goggles. And, perhaps most importantly, it’s nearly ready for market.

3D in the brain

Non-holographic systems create the illusion of visual depth by exploiting how our visual system normally works. Our eyes are individually incapable of much depth perception; instead, slight differences in the information obtained by our two eyes are interpreted by the brain to provide information about depth and relative locations. This happens naturally because our eyes are slightly apart. That separation is enough for them to view three-dimensional objects from slightly different distances and perspectives. To realize how much this matters, all you have to do is spend part of your day trying to navigate life with one of your eyes closed.
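The brain’s disparity trick can be modeled as simple triangulation: the closer an object is, the more its position differs between the two eyes’ views. A minimal sketch (the baseline and focal-length numbers here are illustrative assumptions, not values from the article):

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Distance to a point seen from two horizontally offset viewpoints.

    baseline_m: separation between the eyes (or cameras), in meters.
    focal_px: focal length expressed in pixels.
    disparity_px: horizontal shift of the point between the two views.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_m * focal_px / disparity_px

# Human eyes sit roughly 6.5 cm apart; a larger disparity means a
# closer object, which is exactly the cue the brain exploits.
print(depth_from_disparity(0.065, 800, 80))  # 0.65 m: nearby object
print(depth_from_disparity(0.065, 800, 10))  # 5.2 m: distant object
```

Closing one eye removes the disparity signal entirely, which is why depth judgment gets so much harder in the experiment the paragraph above suggests.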

But it is possible to trick the brain into thinking that an image projected from a flat screen has depth. The oldest method, dating back to the 16th century, is Pepper’s Ghost, which relies on a combination of mirrors and partially reflective glass, as well as distance: you need to be far enough away that other visual cues about depth don’t spoil the illusion. This was most famously used to create a “live” performance by the late rapper Tupac.

But the alternative is simply to give your two eyes slightly different images. The 3D glasses you get at the movie theater just put a different filter in front of each eye. Combined with the right content, this ensures that each eye sees a slightly different image. VR goggles make it even simpler by placing a separate screen in front of each eye.
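Modern theaters typically use polarizing filters, but the old red/cyan anaglyph makes the filter idea easy to see in code: pack one eye’s view into the red channel and the other’s into green and blue, and the colored lenses route each view to the right eye. A toy sketch over lists of RGB pixels (not any actual 3D pipeline):

```python
def make_anaglyph(left_rgb, right_rgb):
    """Combine left/right views into one red/cyan anaglyph image.

    Each argument is a flat list of (r, g, b) pixel tuples. The red
    lens passes only the left view's red channel; the cyan lens passes
    only the right view's green and blue channels.
    """
    return [(l[0], r[1], r[2]) for l, r in zip(left_rgb, right_rgb)]

left = [(200, 10, 10), (180, 20, 20)]
right = [(30, 150, 150), (40, 160, 160)]
print(make_anaglyph(left, right))  # [(200, 150, 150), (180, 160, 160)]
```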

Frayne explained that the Holoplayer works on similar principles. It ensures that, as long as your face is within a certain distance (a meter or two) and viewing angle, your two eyes will see different images, allowing your brain to interpret the screen as three-dimensional. But it doesn’t need glasses or goggles to do this. How does that work?

The basis of the system is a standard LED display. But the hardware splits it into 32 overlapping sub-displays, each showing a slightly different perspective on the item on screen. These sub-displays are interlaced, meaning their pixels are intermixed rather than laid out as separate images on a grid. If you look directly at the LED display, you see a blurry-looking version of the object.
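One common way to interlace multiple views on a single panel is to assign pixel columns to views in round-robin order. This sketch assumes that scheme for illustration; Looking Glass’s actual pixel mapping isn’t described in detail here:

```python
def interlace(views):
    """Interleave N same-sized sub-display images into one panel.

    views: list of 2D pixel grids (lists of rows). Panel column c is
    drawn from view (c % N), so the views' pixels end up intermixed.
    Viewed bare, the result looks like a blurry jumble of perspectives.
    """
    n = len(views)
    rows = len(views[0])
    cols = len(views[0][0])
    return [[views[c % n][r][c] for c in range(cols)] for r in range(rows)]

# Two 1x4 "views": columns alternate between view A and view B.
panel = interlace([[["A0", "A1", "A2", "A3"]],
                   [["B0", "B1", "B2", "B3"]]])
print(panel)  # [['A0', 'B1', 'A2', 'B3']]
```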

The magic happens after the light exits the LED display. First, it reaches a partially reflective, polarization-sensitive mirror called a beamsplitter, which reflects light only if it has a specific polarization and is arranged to match the polarization of the LED’s output. The beamsplitter sends the light back into the hardware, to a space above the LED display covered in a reflective coating that also rotates the light’s polarization. When the light comes back out and reaches the beamsplitter again, its polarization has been rotated so that it is no longer reflected, and the images exit the Holoplayer.
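The round trip can be followed with a toy model: treat polarization as an angle, have the beamsplitter reflect only one orientation, and let the coated cavity rotate the light 90 degrees before it returns. This is an assumed simplification for illustration, not a description of Looking Glass’s actual optics:

```python
def beamsplitter(polarization_deg, reflect_at=0):
    """Reflect light whose polarization matches the splitter's axis;
    transmit everything else (angles are modulo 180 degrees)."""
    return "reflect" if polarization_deg % 180 == reflect_at else "transmit"

polarization = 0                       # matches the LED panel's output
print(beamsplitter(polarization))      # reflect: sent into the cavity
polarization = (polarization + 90) % 180  # reflective coating rotates it
print(beamsplitter(polarization))      # transmit: light exits the device
```

The same component thus acts as a mirror on the first pass and a window on the second, which is what lets the images escape toward the viewer.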

As a result of all these reflections, the individual screens end up somewhat separated in space. The separation is enough for your eyes to see different images, which your brain then interprets as a three-dimensional image. “Without a headgear, we shoot out nearly three dozen renders at a time,” Frayne told Ars, “and your eyes intercept those renders streaming out of the system into space.”

The system is flexible enough to improve along with the underlying technology. Frayne showed us a version with a higher-resolution screen that looked noticeably better. It’s also possible to have the 3D rendering appear inside the hardware, which also seemed to improve image quality. But that takes away the system’s other selling point: depth-sensitive interactivity.

3D touch

Just above the light-manipulating hardware, the Holoplayer is equipped with an Intel RealSense camera that can track fingers and gestures in space. This makes the system interactive, as it can compare the position of a user’s finger to the space occupied by displayed items. “We track your fingers across the top of the Holoplayer 1 and that tracking is then fed into the application and allows you to manipulate and interact with that floating three-dimensional scene directly,” said Frayne.
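Depth-aware interaction of this kind boils down to comparing a tracked fingertip position against the 3D region a floating object occupies. A minimal hit-test sketch (the coordinates, object shape, and function names are illustrative assumptions, not Looking Glass’s API):

```python
def finger_touches(finger_xyz, obj_center, radius):
    """True if a tracked fingertip lies within a spherical region
    around a displayed object's center (all coordinates in meters)."""
    dx, dy, dz = (f - c for f, c in zip(finger_xyz, obj_center))
    return dx * dx + dy * dy + dz * dz <= radius * radius

# A virtual cube floating 10 cm above the optics, 5 cm forward.
cube_center = (0.0, 0.10, 0.05)
print(finger_touches((0.01, 0.11, 0.05), cube_center, 0.03))  # True
print(finger_touches((0.20, 0.10, 0.05), cube_center, 0.03))  # False
```

In a real system the fingertip coordinates would stream in from the depth camera each frame, and the hit test would run against every interactive object in the scene.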

Frayne showed software that allows users to draw in 3D with their fingers and other software that works like a virtual lathe and forms a spinning object (the software can send the output directly to a 3D printer). There was even a game where blocks had to be moved through a rotating 3D landscape.

For the game, a standard game controller was fitted with a shaft topped by a white plastic ball that was easy to track, allowing gestures to be combined with some button mashing. Right now, Looking Glass only tracks one finger, but Frayne said there’s no reason additional digits can’t also be tracked (Intel’s technology can handle all 10 of them). And, perhaps more dramatically, those fingers don’t have to belong to the same user.

The projection method is key to making this more than a personal interface. Anyone within the effective viewing area of the screen will see exactly what the person using it sees, and will perceive that user’s finger interacting with the projected content in exactly the same way the user does. “If I touch a spot on a 3D scene – let’s say the front of an X-wing hovering over the Holoplayer – my friend sitting next to me and looking over my shoulder sees my finger coincide with the same tip of that X-wing that I see,” Frayne said. “That means we can have a shared experience for how we interact with that floating three-dimensional scene without a headgear.” This broadens the kinds of interactions you can have with the system and even creates the possibility of more than one user interacting with a project at the same time.

How easy is it to work within this interface? My experience was decidedly mixed. While drawing, I could definitely get the system to display what I intended. Playing the game, however, was a major fail, as it was too difficult to judge the depth of the controller relative to the depth I perceived in the visuals. Frayne told us that things get better with practice and that younger users have an easier time adapting.

A future interface?

The hardware Looking Glass currently offers is not for casual users. Frayne says the company wants to get it into the hands of developers so they can start exploring how to use a 3D interface effectively. Still, at $750, it’s not far off the price of some VR headsets on the market, and mass production should bring that price down. The resolution of this version isn’t brilliant, but (as mentioned above) it can be increased without changing anything fundamental about how the system operates.

And there’s no question that this is a fun way to experience a 3D interface. It takes no special preparation to get started (no goggles or glasses) and, as the iPhone first demonstrated, using fingers for interaction greatly reduces the hassle of manipulating a gesture-based interface. The hardware itself is compact enough to be easily adapted to work in information kiosks, for example. And in such uses, the fact that the virtual environment is shared with everyone watching can have some great benefits.

There are also some specialized niches where a full 3D environment seems like a huge asset, such as architecture, manufacturing, and repair shops.

But Frayne certainly hopes there will be a general market for a 3D interface. And while he has some clear ideas about where it could work, his goal with the developer kit is to get others thinking about how to apply the technology.

By akfire1
