2013-11-27

This Badass 3-D Camera Array Could Revolutionize Entertainment

Augmented reality, 3-D printing, retail catalogs, gaming--these are just a few of 3-D photography's potential applications. A new 64-camera rig from L.A. photographer Alexx Henry could make realistic 3-D ubiquitous.



The appeal of 3-D imagery is innate: It can make the user experience in games, commerce, and communication more lifelike. But reproducing 3-D shapes in a computer takes hours of CAD work and rendering. Unless, of course, you have 64 digital SLR cameras rigged up to capture them in real life.

“Three-D is inherently a more tactile medium,” says photographer Alexx Henry, who has built a camera capture studio called the xxArray, which was used to create the interactive image above.

“To be able to actually reach out and manipulate something and use augmented reality apps to experience the content is incredible, and I think 3-D is the most logical next step. Putting it in everyone's hands in the mobile and tablet space is going to happen with or without us.”

Oh, The Places You’ll Go (In 3-D)

If you could capture a photorealistic 3-D model of just about anything, what would you do with it? The xxArray makes 3-D imaging about as simple as entering a photo booth and pushing a button. And the implications for digital media are tantalizing, to say the least.

Henry is using the rig initially for Art and Skin, a 3-D tattoo magazine that launched for the iPad just last week. But industries outside of media are already working more 3-D into their marketing. Ikea made a splash a couple of months ago when it released its augmented reality catalog, and nearly every car, clothing, and bicycle manufacturer these days lets you “test drive” its wares by spinning a 3-D rendering.

How The xxArray Works

The “array” in the name refers to a group of cameras: 64 Nikon D5200 bodies, to be exact. “There was a lot of trial and error involved in setting this up,” says Henry. “We went through about four different setups before we got it right.”

Software called Agisoft PhotoScan stitches the images from the 64 cameras into a cohesive whole. PhotoScan was originally designed for the geographic information systems industry as a tool for photogrammetry, the science behind Apple's 3-D maps, for instance. But the same techniques that capture buildings and topographic features apply equally to the contours and textures of the human body.

Using the whopping 15 gigapixels of separate images captured by the xxArray, PhotoScan stitches the 2-D photographs into a single 3-D model. Normally the process is fairly labor intensive, but with a heavy dose of Python scripting, Henry says his team has gotten it to about 90% automated and 10% manual.
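The automation Henry describes is essentially a batch script that drives the photogrammetry pipeline from raw photos to a finished model. Below is a minimal sketch of what such a script might look like, assuming PhotoScan Pro's built-in Python scripting module; the file paths are hypothetical, and method and constant names vary between versions, so treat it as illustrative rather than Henry's actual code.

```python
# Minimal sketch of an automated photogrammetry pass, assuming Agisoft
# PhotoScan Pro's Python scripting module. Paths are hypothetical and
# method/constant names vary by version -- illustrative only.
import glob
import PhotoScan

doc = PhotoScan.app.document          # document open in the PhotoScan GUI
chunk = doc.addChunk()                # one chunk per capture session

# Load the 64 simultaneous frames captured by the rig.
chunk.addPhotos(sorted(glob.glob("/captures/session_001/*.jpg")))

# Match feature points across the 2-D photos and solve the camera poses.
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)
chunk.alignCameras()

# Densify the point cloud, mesh it, and bake on the photographic texture.
chunk.buildDenseCloud(quality=PhotoScan.HighQuality)
chunk.buildModel(surface=PhotoScan.Arbitrary, source=PhotoScan.DenseCloudData)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)

# Export the finished 3-D asset; the remaining manual work
# (cleanup, retouching) happens after this step.
chunk.exportModel("/captures/session_001/model.obj")
doc.save("/captures/session_001/session_001.psz")
```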

Over five dozen DSLR cameras that retail at around $1,000 each--roughly $64,000 in camera bodies alone--may sound like a lot. But in the world of photogrammetry, Henry calls it an “impressively low” number. Similar setups have used upwards of 80 or even 100 cameras.

“It's very easy to solve your problems by adding cameras. But in trying to keep our camera budget to a minimum, we had to be very creative in positioning the cameras,” Henry says.

Using as few cameras as possible is crucial if the xxArray is going to scale beyond Henry's studio walls. Even with such a relatively low count, the rig makes extensive use of stereo pairs--two cameras aimed at the same focal point--to capture the rich textures that make images more lifelike.

“Stereo pairing gives incredibly high confidence for the software when you compare those points. We're getting really incredible texture quality,” Henry says.

How The xxArray Could Be Commercialized

“We believe that the 3-D asset is the digital currency of the future,” Henry says. This isn't his first foray into the future of digital media; his studio has a track record of innovation in digital publishing, so his foresight is worth taking seriously.

Augmented reality apps could go far beyond Ikea's furniture catalog. With a 3-D avatar of yourself, a clothing store that integrates 3-D scans of its products could let you see how a suit or dress would actually fit on your body.

In video games, we might finally be able to do away with character creation screens. Instead, you could play as yourself and see your own face and body in the game, completing the sense of immersion. Combine this with virtual reality and you could walk through a fantasy world and actually see yourself in a mirror.

And of course 3-D printing could make extensive use of such a tool. It could join the growing list of software simplifying the CAD process, such as Tinkercad, and it's foreseeable that software will be developed to translate a 3-D scan directly into a printable CAD format.
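As a rough illustration of that last step, converting a photogrammetry mesh into something a printer can slice is already possible with off-the-shelf tools. The sketch below uses the open-source trimesh library purely as an example; the library choice and file paths are assumptions, not part of Henry's workflow.

```python
# Rough sketch of a "scan to printable file" conversion, using the
# open-source trimesh library as an example. Paths are hypothetical.
import trimesh

# Load the textured OBJ produced by the photogrammetry pipeline.
mesh = trimesh.load("/captures/session_001/model.obj")

# Printers care about closed geometry, not texture: patch small holes
# so the slicer sees a watertight surface.
if not mesh.is_watertight:
    mesh.fill_holes()

# Export as binary STL, the de facto input format for 3-D printing.
mesh.export("/captures/session_001/model.stl")
```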

Meanwhile, WebGL, a JavaScript API built on the HTML5 canvas element, lets all major browsers render and manipulate 3-D graphics. So we could start seeing this on the open web, too, not just in apps and games. (The 3-D model above is rendered using WebGL.)

Will It Scale?

Despite the almost limitless potential of the technology, it's not clear whether it will take off. “Right now there are two main problems that we need to solve,” Henry says. “The first is getting arrays propagated. Second and equally important is getting people to want a 3-D scan. We need to have real meaningful use cases.”

So meaningful, Henry says, that the practice might require new verbiage. “Think about what photography has always been. Photography has always been taking the latest bleeding edge technology and describing something in a new way. That's exactly what photogrammetry is.”