
SIGGRAPH 2010: Best of the emerging technologies


The annual SIGGRAPH computer graphics conference isn't just about the behind-the-scenes work of major motion pictures and video games; it also shines a spotlight on emerging technologies. There are always spectacular developments in new display technology, human-device interfaces, and interactive art, as well as mind-bogglingly bizarre contraptions that practically beg for a real-life application.

I spent several hours examining these devices and concepts while walking the conference floor. Many vied for my attention, but I narrowed the field down to five that I found exceptionally interesting. Here's a closer look at each, and why I felt they were important.

RayModeler: 360-Degree Autostereoscopic Display

"Help me, Obi-Wan Kenobi. You're my only hope," pleaded the holographic image of a desperate Princess Leia in the classic Star Wars Episode IV. 3D displays like this have gone through numerous iterations in sci-fi flicks across the decades, but we've yet to see them jump the gap into reality. Sony is next in line to make the attempt with its new prototype, RayModeler.

RayModeler is (get ready for a mouthful) a 3D, 360-degree-viewable, volumetric autostereoscopic display. That means that along with being a "true" 3D image you can walk around and view from every angle, it also appears in 3D without glasses: each eye automatically receives a slightly different image, producing the depth effect. From what I could tell, the display is built from a spinning drum lined with tiny LEDs that blink at precisely timed moments to form the 3D image. It also banks on persistence of vision (the fact that an image lingers on the retina for a fraction of a second, as many other displays do) to hold a "solid" image between LED passes. The device can play pre-captured video as well as interactive games, including a rather unique take on the classic "Breakout"/"Arkanoid" formula.
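
To make that timing idea concrete, here is a minimal sketch (my own reconstruction in Python, not anything Sony has published) of when a spinning LED column would need to flash to draw one angular slice of a volumetric image. The spin rate and slice count are assumed values.

```python
# A minimal sketch of persistence-of-vision timing for a spinning LED
# column, under a simplified model (not Sony's actual firmware): the
# drum spins at a fixed rate, and each angular "slice" of the 3D image
# is flashed as the column passes that azimuth.

ROTATIONS_PER_SEC = 30        # assumed spin rate
SLICES = 360                  # assumed angular resolution: one slice per degree

def flash_times_for_slice(slice_index, num_rotations=3):
    """Return the times (in seconds) at which the LED column must
    blink to draw one angular slice over the first few rotations."""
    period = 1.0 / ROTATIONS_PER_SEC                 # time for one full turn
    slice_offset = (slice_index / SLICES) * period   # when the column passes this azimuth
    return [turn * period + slice_offset for turn in range(num_rotations)]

# Slice 90 (a quarter turn) is redrawn once every 1/30 s -- fast enough
# that retinal persistence fuses the flashes into a steady image.
print(flash_times_for_slice(90))
```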

There were two RayModelers on display, both fed by a PlayStation 3 console. The 3D Breakout game was controlled with the PS3 controller's two analog sticks (one rotated the entire model, the other rotated the paddle below). The video demo could be rotated, and the 3D character could be rotated and zoomed. Overall, I was fairly impressed, and I hope such displays become mainstream someday, as long as the resolution improves (it was a bit blocky).

Here’s the official demonstration video from Sony:

3D Multitouch: When Tactile Tables Meet Immersive Visualization Technologies

3D proved massively popular at SIGGRAPH 2010, and so did multi-touch interfaces. The two were happily married in this device from Immersion SAS, a large multi-touch table similar to the Microsoft Surface. On display was an interactive overhead map, complete with a model of a building. Thanks to head tracking, the building's perspective shifted to match your actual head position, producing a rather impressive 3D effect.

Taking advantage of the multi-touch interface, the map could be easily zoomed and rotated, much like the Maps app on an iPhone or iPad. Head tracking was achieved by following a small, strange three-pronged device that attached magnetically to the 3D glasses. To record the video below, I pulled the tracker off the glasses and mounted it beside my camera (and disabled the stereoscopic effects).
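
For the curious, the zoom-and-rotate behavior follows from standard two-finger gesture math. Here's a small sketch of the generic technique, not Immersion SAS's actual code: the change in distance between two touch points gives the zoom factor, and the change in the angle between them gives the rotation.

```python
# Generic pinch-zoom/rotate math from two tracked touch points,
# sampled at the previous and current frame.
import math

def pinch_transform(p1_old, p2_old, p1_new, p2_new):
    """Return (scale, rotation_radians) derived from how the two
    touch points moved between frames."""
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])
    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])
    scale = dist(p1_new, p2_new) / dist(p1_old, p2_old)
    rotation = angle(p1_new, p2_new) - angle(p1_old, p2_old)
    return scale, rotation

# Fingers spread apart and twist slightly: the map zooms in and rotates.
print(pinch_transform((0, 0), (1, 0), (0, 0), (1.5, 0.3)))
```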

One final point of interest: the display supports up to two viewers, not just one. Head tracking normally limits a device like this to a single viewer, but this one housed two separate video projectors shining on the display, one per person. I was left to assume the glasses were active-shutter, each pair synced to its own projector so that each viewer saw their own image and not the other's. Overall, I was extremely impressed. I could take or leave the stereoscopy, but the head tracking was amazingly cool, and the multi-touch interface was very smooth and responsive.
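
The head-tracking effect itself is a known technique, head-coupled perspective: the projection frustum is skewed each frame based on the viewer's tracked position, so the scene appears to "face" them. Here is a minimal sketch under that assumption; the coordinates and dimensions are illustrative, not taken from the actual product.

```python
# A minimal sketch of head-coupled perspective: recompute an
# asymmetric (off-axis) near-plane frustum from the tracked head
# position, expressed in screen-centered coordinates.

def off_axis_frustum(head, screen_half_w, screen_half_h, near=0.1):
    """Given the head position (x, y, z), where z is the distance
    from the display plane, return the left/right/bottom/top bounds
    of the near-plane frustum."""
    hx, hy, hz = head
    scale = near / hz  # project the screen edges onto the near plane
    left   = (-screen_half_w - hx) * scale
    right  = ( screen_half_w - hx) * scale
    bottom = (-screen_half_h - hy) * scale
    top    = ( screen_half_h - hy) * scale
    return left, right, bottom, top

# As the viewer leans left, the frustum skews so more of the
# building's right side comes into view -- the "hologram" effect.
print(off_axis_frustum(head=(-0.2, 0.0, 0.6), screen_half_w=0.5, screen_half_h=0.35))
```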

Below: I take the 3D Multitouch for a spin (with the stereoscopy off), showing how the multi-touch interface zooms, rotates, and pans, as well as the head-tracking effects.

In-air typing interface for mobile devices with vibration feedback

Once again, I found myself looking at a 3D-centric device. From the University of Tokyo's Ishikawa Komuro Laboratory comes a 3D in-air vision-based interface, a rather nifty approach to bridging the gap between real-life hand gestures and device interaction. Using a specialized forward-facing camera attached to a PDA-like device, I was able to use my index finger to type, paint, and move an image around.

The camera (or more likely, two or more cameras) tracked the finger with the aid of four infrared LEDs. Once the finger was calibrated by lining it up with a virtual reticle, you were ready to go, as long as it stayed within the camera's field of view. If it strayed out, you had to re-calibrate, which became something of a chore after a while. You interacted with the device by moving your finger around to steer a cursor; "clicking" was a quick poke of the finger down and back, which the device confirmed with a brief pulse of vibration feedback.
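
As a thought experiment, here is one way such a poke gesture could be detected from a stream of fingertip depth samples. This is my own guess at the logic, not the lab's published algorithm, and the thresholds are invented.

```python
# A hedged sketch of "poke to click" detection: a tap registers when
# the fingertip dips toward the camera (depth drops) and returns to
# its baseline within a short window, at which point the device would
# fire its confirmation vibration pulse.

DIP_THRESHOLD = 0.02   # assumed: metres the finger must poke forward
TAP_WINDOW    = 8      # assumed: max samples for a down-and-back motion

def detect_tap(depths):
    """Scan fingertip depth samples (distance from the camera) and
    return the index where a quick poke-and-return completes."""
    for start in range(len(depths) - 2):
        baseline = depths[start]
        for end in range(start + 2, min(start + TAP_WINDOW, len(depths))):
            dipped = min(depths[start:end]) < baseline - DIP_THRESHOLD
            returned = abs(depths[end] - baseline) < DIP_THRESHOLD / 2
            if dipped and returned:
                return end
    return None

samples = [0.30, 0.30, 0.27, 0.25, 0.27, 0.30, 0.30]  # a quick poke
print(detect_tap(samples))  # -> 5
```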

Three main features were demoed: typing, 3D painting, and object manipulation. The typing app was a virtual keyboard operated by "tapping" on the letters. While the approach was novel, it was fairly difficult to tap without accidentally sliding onto a neighboring letter on the way down, so typing was impractical in the current setup. Painting was a bit more fun: tap to begin, and a trail of colorful pentagons followed wherever your finger moved through a virtual 3D space. Finally, the third app was a still image that could be "picked up" with a tap and then dragged around.

Overall, I found the add-on rather interesting. Its main drawbacks were the slow, frequently required calibration and spotty accuracy, which make it slightly impractical in its current iteration. Still, it's a glimpse of what 3D gesture-based interaction could bring in the future. I could also see it shining in interactive displays where physical touch is impractical or undesirable, such as museums or hospitals.

Below: the official demonstration video from the University of Tokyo.

University of Tsukuba: AirTiles and Beacon

The next two entries may not seem practical, but they put lasers to rather creative use as an interactive art medium. Both AirTiles and Beacon come to us from the University of Tsukuba, Japan.

AirTiles: a Flexible Sensing Space

AirTiles is billed as a "flexible sensing space". It consists of a series of electronic disks, each containing a laser, infrared input and output ports, and a range-finding sensor, which let the modules become aware of each other and the space between them. Each disk was slightly smaller than a cereal bowl and could easily be picked up and moved around.

To set them up, you activate the first AirTile by pressing a small button, which fires a single laser beam drawn across the floor. When a second AirTile is placed within that beam, it activates as well and fires off a beam of its own. Modules can be chained this way until the originating tile receives the terminating beam from the final module, closing the loop into a triangle, a square, and so on.

Once a pattern of tiles is formed, the space inside is tracked by all of them. If something blocks their line of sight, such as a hand, a shoe, or any other intrusion, the tiles blink and play a sound effect. While this may seem strange at first, I could easily see similar devices used in security systems, virtual fences for outdoor pets, or even a toddler's play area.
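
Here's a rough sketch of the kind of check I imagine each tile performs, comparing its live range reading against the distance recorded during setup; this is entirely my reconstruction, and the tolerance value is my own assumption.

```python
# Assumed intrusion check per AirTile: each tile learns the expected
# distance to the next tile during the laser "handshake", and an
# object breaking the beam shows up as a shorter range reading.

TOLERANCE = 0.05  # assumed: metres of slack before an alarm triggers

def check_perimeter(expected, measured):
    """Compare each tile's live range reading against the distance
    recorded at setup; return indices of tiles whose beam is blocked
    (those tiles would blink and beep)."""
    return [i for i, (exp, meas) in enumerate(zip(expected, measured))
            if meas < exp - TOLERANCE]

# A triangle of three tiles one metre apart; a hand blocks tile 1's beam.
setup_distances = [1.0, 1.0, 1.0]
live_readings   = [1.0, 0.4, 1.0]
print(check_perimeter(setup_distances, live_readings))  # -> [1]
```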

Below: I show how to set up the AirTiles by "chaining" the lasers. While I chained them properly, I didn't quite get the square array to detect my hand; the triangle array next to it, however, sensed my hand properly by flashing and beeping.

Beacon: an Interface for Socio-Musical Interaction

Tsukuba's other entry was Beacon, a laser-powered interface for socio-musical interaction. The device consists of a meter-tall cylinder that shines one or more wide laser beams rotating around the center pillar. When the Beacon detects that one of the lasers has been tripped, it plays a musical tone.

The Beacon had several user-adjustable settings, such as the number of simultaneous laser beams, the rotational speed, and the tone set to play (piano, synthesized orchestral notes, and so on). Notably, an object's distance from the Beacon affects the pitch of the tone: closer objects register lower pitches, and farther ones higher.
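
To illustrate, here is a tiny sketch of such a distance-to-pitch mapping. The scale, note names, and detection range are my assumptions rather than Tsukuba's actual tuning.

```python
# Assumed mapping from an object's distance to a musical note:
# nearer objects select lower entries in the scale.

C_MAJOR = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]
MIN_RANGE, MAX_RANGE = 0.3, 2.0   # assumed detection range in metres

def note_for_distance(distance):
    """Clamp the distance into the detection range and pick the
    corresponding note, low to high."""
    clamped = max(MIN_RANGE, min(MAX_RANGE, distance))
    fraction = (clamped - MIN_RANGE) / (MAX_RANGE - MIN_RANGE)
    index = min(int(fraction * len(C_MAJOR)), len(C_MAJOR) - 1)
    return C_MAJOR[index]

# Three cups at different radii produce a three-note phrase per sweep.
for d in (0.4, 1.0, 1.8):
    print(d, "->", note_for_distance(d))
```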

What made the Beacon fascinating to me is how rich it was in social interaction. While you can make a few notes with your own two legs, it took teamwork to string together entire short tunes. I would love to see one of these set up at the local mall or museum, where I could invite some friends to try to stand in just the right spots to play the opening notes of the classic Super Mario Bros. theme.

Below: I show how different tones and rhythms can be created by quickly moving three blue plastic cups around between passes of the green laser beam.

Tactile Display for Light and Shadow

The final emerging-tech exhibit that really caught my eye sat off to the side, largely unnoticed. Brought to us by Kunihiro Nishimura, an associate professor at the University of Tokyo's Graduate School of Information Science and Technology, it is a device that translates light and shadow into tactile feedback. Basically: you can feel light.

There were two models that shared similar characteristics. Both were cylinders about the size and shape of a Quaker Oats oatmeal can, with a screen on top that sensed light and shadow, and an array of plastic nubs on the bottom, each driven by its own vibration actuator. Shadows falling on the screen were translated to the corresponding nubs below, where they could be felt on the palm of your hand.
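
Conceptually, the translation is a downsampling problem: average the sensed brightness grid into the coarse actuator array, inverted so that shadow drives vibration. Here is a minimal sketch under that assumption; the grid sizes and the inversion are my own guesses at the design.

```python
# Assumed light-to-touch translation: a fine brightness grid
# (0.0 = shadow, 1.0 = full light) is averaged down to the coarse
# actuator array, and darker cells drive stronger vibration.

def shadows_to_actuators(brightness, out_rows=4, out_cols=4):
    """Average a fine brightness grid into an out_rows x out_cols
    array of vibration intensities, where shadow maps to the
    strongest buzz."""
    in_rows, in_cols = len(brightness), len(brightness[0])
    rh, cw = in_rows // out_rows, in_cols // out_cols
    out = []
    for r in range(out_rows):
        row = []
        for c in range(out_cols):
            cells = [brightness[r * rh + i][c * cw + j]
                     for i in range(rh) for j in range(cw)]
            row.append(round(1.0 - sum(cells) / len(cells), 2))  # invert: shadow -> vibration
        out.append(row)
    return out

# An 8x8 sensor grid with a shadow across the left half...
grid = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]
for row in shadows_to_actuators(grid):
    print(row)  # left actuators buzz at full strength, right stay still
```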

While a device that transfers the light playing upon a screen into touch may seem silly at first, it could certainly lead to some interesting developments. Beyond being rather fun to use, it could drive advances in helping the blind better perceive the world around them. Similar to Braille, what if simple images or outlines could be felt on the sensitive skin of a fingertip, a palm, or even the entire back or chest? Implemented properly, a device like this could turn the whole body into a substitute channel for vision.

Below: the official demonstration video from the University of Tokyo.


Overall, the SIGGRAPH 2010 lineup of emerging technologies did not disappoint. Some years bring a myriad of solutions looking for a problem, but this year offered genuinely interesting and creative devices that could be put to good use. In an age where niches rule the day, devices like the light-to-touch module could really be put to work to make the world a better place. The advancements in display technology and further applications of 3D are also quite exciting to see. I can only look forward to what the emerging technologies at SIGGRAPH 2011 will bring.

Comments

  1. UPSLynx
    Great writeup of the emerging tech section.

    I seriously didn't spend enough time there this year. I meant to check out Sony's 3D prototype but totally forgot, and this just reminded me that I failed :(

    I missed out on a lot of good stuff this year, with all of my paperwork running for AMD. Still, had a lot of fun with what I did get to see. Emerging tech never disappoints.

    Sadly, no shower of hamburger this year.
