SoftKinetic AR Gesture Input Allows 3D Manipulation with Hands

Published January 14, 2014 at 10:25 am

Using an Oculus Rift headset, a 3D-printed bracket, and a Creative 3D camera (shown previously), Virtual Reality (VR) and Augmented Reality (AR) are taking another step in the right direction. SoftKinetic, a company that specializes in 3D depth-sensing and gesture recognition, let us use our hands for what it called "natural interaction in a convincingly smooth 3D demonstration" -- in other words, an advance in perceptual input technology and a move away from traditional input devices.

SoftKinetic's demo had us manipulating building blocks in 3D space with our hands as input devices.

Before getting too deep in this post, you might be interested to learn about the history of virtual reality & augmented reality, which we recently explored with companies Oculus VR, Sixense, and Virtuix.

 

SoftKinetic explained their approach to improved AR as being three-pronged: Hardware, imaging, and software. First, they designed and produced a Time of Flight (ToF) device to accurately sense depth. Like most ToF devices, theirs uses a laser to perform the depth sensing at the industry standard 320x240 resolution. The second piece of SoftKinetic's approach is a 720p RGB color imager used to recognize gestures. These two hardware components (along with the logic needed to control them) make up the company's new gesture-recognizing input camera.
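To give a rough sense of the principle behind ToF depth sensing: the sensor times how long emitted light takes to bounce off an object and return, and depth is half the distance light travels in that interval. A minimal sketch of the math (illustrative only, not SoftKinetic's implementation -- real ToF cameras typically measure a phase shift in modulated light rather than raw round-trip time):

```python
# Illustrative Time-of-Flight depth calculation.
C = 299_792_458.0  # speed of light in m/s

def depth_from_round_trip(delta_t_s: float) -> float:
    """Depth = half the distance light travels during the round trip."""
    return C * delta_t_s / 2.0

# Light returning after ~13.34 nanoseconds implies an object ~2 m away:
print(round(depth_from_round_trip(13.34e-9), 3))
```

The nanosecond timescales involved are why ToF sensing demands dedicated hardware rather than an ordinary camera.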

Some of SoftKinetic's underlying tech.

SoftKinetic had two different versions of the camera on-site (shown below). They're fairly small -- to the point that one laptop had a version embedded into its top bezel. That means no lugging an extra peripheral around.

The final piece of SoftKinetic’s approach is their software. They refer to it as iisu, or the Interface Is You.

What makes this different than just strapping on an Oculus Rift is that -- with the addition of the camera to the front -- you can now use your hands to manipulate the 3D environment in real-time. Using your hands as the input device is not new, but the level at which the demonstration performed was commendable. There were no thimbles to put on our fingertips, there were no speed restrictions on our movements, and the one restriction they did give us turned out not to apply: SoftKinetic's rep said we couldn't manipulate two objects simultaneously -- one with each hand -- but we tried it anyway, and it worked.
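The grab-and-release interaction we experienced could be sketched roughly as follows. This is our own hypothetical reconstruction -- the names, thresholds, and per-frame structure are illustrative, not SoftKinetic's iisu API:

```python
# Hypothetical per-frame grab/release logic for a hand-tracked 3D demo.
from dataclasses import dataclass
from typing import Optional

GRAB_RADIUS = 0.15  # metres: how close a hand must be to grab a block

@dataclass
class Block:
    pos: tuple  # (x, y, z) position in metres

@dataclass
class Hand:
    pos: tuple = (0.0, 0.0, 0.0)
    closed: bool = False          # is the hand making a grab gesture?
    held: Optional[Block] = None  # block currently held, if any

def dist(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

def update(hand: Hand, blocks: list):
    """Closing a hand grabs the nearest block within reach;
    opening it drops the block at the hand's current position."""
    if hand.closed and hand.held is None:
        near = min(blocks, key=lambda b: dist(b.pos, hand.pos), default=None)
        if near is not None and dist(near.pos, hand.pos) <= GRAB_RADIUS:
            hand.held = near
    elif not hand.closed and hand.held is not None:
        hand.held.pos = hand.pos
        hand.held = None
    if hand.held is not None:
        hand.held.pos = hand.pos  # block follows the hand while held
```

Because each hand carries its own grab state, two hands can hold two blocks at once -- consistent with what we stumbled into during the demo.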

Tim (left) accidentally enters the scene (pointing at the camera) and scares his colleague.

We haven't yet had a chance for extensive talks with SoftKinetic, so we can't tell you exactly, technically, what makes their tech so smooth in its functionality, but we promise to keep you updated as we learn more.

The possibilities for this technology are numerous. In the demo we experienced, which was built in Unity, we were given a block spawn point from which we could grab blocks and place them anywhere in 3D space. When we began stacking blocks and building small structures, the obvious came to mind: Minecraft. Building large, complex structures in Minecraft-like games seems like it'd be infinitely more intuitive and painless with such a natural input. SoftKinetic's partnership with Intel has a collection of more already-functional ideas over here. If you're interested in creating your own 360° environment where your hands are the interface, it's surprisingly accessible to developers. Here are the tools:

We look forward to further exploring this technology with SoftKinetic! Keep your eyes out for future full-length pieces on this technology.

- Patrick "MoCalcium" Stone.

Last modified on January 14, 2014 at 10:25 am

We moderate comments on a ~24-48 hour cycle. There will be some delay after submitting a comment.
