Robots of SIGGRAPH 2009



metaform3d
08-09-2009, 03:16 PM
I've just gotten back from SIGGRAPH 2009 in New Orleans, and as usual the most interesting part of the show for me was the robots. There's a strange area of overlap between computer graphics and robotics. Scene rendering is like the reverse of machine vision. Animating characters is like programming robot motion, and posing CG characters uses inverse kinematics. There is a lot of cross-fertilization possible and each field could learn a lot from the other.

I'm going to present my top 10 robots of SIGGRAPH in reverse order, starting from the worst and working up to the best. Hopefully I can get them all in by tomorrow when I leave on vacation; if not it'll be another week.

metaform3d
08-09-2009, 03:28 PM
Not sure what it is about Japanese robot researchers, but they always want to make the robot your friend. Sometimes they do manage to make robots cute or engaging, the kind you want to have around.

And then sometimes things go horribly wrong. Like when they want the robot to be your baby.

The theory is sound enough, I suppose. Humans react with instinctive attraction towards babies, and all the babies have to do is sort-of smile and wriggle. Seems like it should be easy to make a baby-form robot that will endear itself to the human host. So you go to look in the crib and -- gaaa!

http://forums.trossenrobotics.com/gallery/files/1/7/0/5/yotaro1.jpg

We've gone through the uncanny valley, past the creepy woods and straight into nightmare cove. I couldn't stand to look at the thing for more than a few seconds -- it was truly horrifying.

Here's some of the inner workings from their paper. Let it stand as a testament to what must forever lie outside the boundaries of natural law.

http://forums.trossenrobotics.com/gallery/files/1/7/0/5/yotaro2.jpg


Images are from:

Baby Type Robot “YOTARO”
Hiroki KUNIMURA, et al., University of Tsukuba
SIGGRAPH 2009 Proceedings

lnxfergy
08-09-2009, 06:42 PM
That's .... just .... wrong .... I can't even imagine how much money they spent to design/build that contraption.

-Fergs

metaform3d
08-09-2009, 07:18 PM
As our entertainment systems, fixtures and appliances become more networked and laden with features, the challenge becomes how to control them all. First-person control involves walking up to the device in question and pressing buttons. Who has time for that? Second-person control takes the form of the remote, which we point at the device in question, effectively indicating the "you" as the target of our commands. But the number of possible smart devices and their options is quickly outpacing even the most complex universal remotes.

The basic concept behind the CRISTAL interface is third-person control. It projects a live image of the room from a high camera with a god-like perspective onto a large multi-touch screen. Touching and dragging on the lamps changes their illumination. Touching the TV brings up movie menus, display options, and images. Movies can be dragged to the TV, and images can then be dragged to picture frames also visible in the image.

http://forums.trossenrobotics.com/gallery/files/1/7/0/5/cristal_original.png

So what's the big deal? Since the camera doesn't move it seems like this is just a picture of your room with hot spots. Nothing innovative there. But that all changes when there is a robot in the room.

As you can see in the left-hand image, there is a Roomba in the corner. It has markings on the top that allow the system to find it wherever it may be located. If you touch the Roomba in the third-person image and drag across the floor, the Roomba follows that path. The image on the screen updates as it moves, of course. You can also draw a circle around a spot on the floor and the Roomba will navigate there and execute a spot cleaning cycle.
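
For the curious, here's roughly how the touch-to-floor mapping could work -- a minimal Python sketch, not the authors' code. The homography matrix H and the waypoint hand-off are stand-ins I made up; in a real system H would come from calibrating the camera against known points on the floor.

import numpy as np

# Hypothetical 3x3 homography from screen pixels to floor coordinates
# (meters), calibrated once from four known points on the floor.
H = np.array([[0.004, 0.0,   -2.0],
              [0.0,   0.004, -1.5],
              [0.0,   0.0,    1.0]])

def screen_to_floor(points_px):
    """Project touch points (pixels) onto the floor plane (meters)."""
    pts = np.hstack([points_px, np.ones((len(points_px), 1))])  # homogeneous
    floor = (H @ pts.T).T
    return floor[:, :2] / floor[:, 2:3]  # dehomogenize

# A dragged path arrives as screen pixels; a (hypothetical) robot driver
# would then walk the Roomba along the resulting floor waypoints.
drag = np.array([[310, 220], [340, 250], [400, 260]])
print(screen_to_floor(drag))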

This is a simple prototype, but one could imagine that as the number, type and complexity of home robots increase, this type of interface could become a very intuitive way to direct them all.


Image from:

"CRISTAL, Control of Remotely Interfaced Systems using Touch-based Actions in Living spaces."
Thomas Seifried, et al.
SIGGRAPH 2009 Proceedings

metaform3d
08-09-2009, 07:37 PM
If Alexander Graham Bell had any sense of whimsy he would have given up on this whole idea of transmitting sound by electricity. Inventions with practical applications, let alone something that would make -- shudder -- money, are sooooo nineteenth century. The Bell of the double-oh's would have invented the Funbrella, allowing the transmission not of something as boring and practical as sound, but the feeling of rain. He'd call it tele-rain.

http://forums.trossenrobotics.com/gallery/files/1/7/0/5/funbrella.png
No, it's not exactly robot-related, unless a telephone is a robot. But it is an awesome concept and -- I will testify -- extremely well implemented. In the demo they produced not only very convincing light and heavy rain from Japan (where else?) but also spaghetti rain, snake rain, and raining cats and dogs. How many people can say they have actually felt it raining snakes? I can. Well, a recording of it, anyway.


Image from:

"Funbrella: Making Rain Fun"
Ai Yoshida, et al.
Graduate School of Information Science and Technology, Osaka University
SIGGRAPH 2009 Proceedings

metaform3d
08-09-2009, 07:58 PM
I think the freakiest thing about this robot is that you wear it. It's a sweater. Woven with galvanic-skin-response sensors and shape-memory alloy threads (muscle wire), it responds to the wearer's metabolism and changes the airflow to optimize comfort. I guess. Since it was an art piece it's hard to figure out exactly what the creator intended. Here's a quote from the description:
It was designed for a fictional world called Other Earth, where viewers are asked to consider a place where humans live as nomads in technologically advanced, self-sufficient, and low-environmental-impact nodal groups. As conceived by Eskandar, Tentative Architecture is an immaterial architecture that can happen at any time and any place, responding to the immediate needs and environment of its wearer — ideally suited to life on Other Earth.
OK, whatever. I'm a geek; I need to ground my understanding in something more substantial than ecological fan fiction. But in any case, it was very weird to watch the sweater breathe in and out while it was parked on a mannequin.
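
I can only guess at the control loop, but the logic is presumably something like this sketch (Python for readability; the function names, constants and smoothing factors are all my assumptions, not from the artist's description):

import time

def read_gsr():
    """Placeholder for a galvanic-skin-response reading, 0.0 to 1.0."""
    return 0.4

def set_sma_duty(duty):
    """Placeholder for the PWM duty cycle heating the muscle wire."""
    print("SMA duty: %.2f" % duty)

arousal = 0.0
for _ in range(20):                              # a reading every half second
    arousal = 0.9 * arousal + 0.1 * read_gsr()   # smooth the noisy skin signal
    set_sma_duty(min(1.0, arousal * 1.5))        # more arousal, more airflow
    time.sleep(0.5)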

http://forums.trossenrobotics.com/gallery/files/1/7/0/5/sweater.png


Image from:

"Artifacts from a Parallel Universe: Tentative Architecture of Other Earth Coastline Inhabitants"
Xárene Eskandar
SIGGRAPH 2009, Art Show

metaform3d
08-09-2009, 08:18 PM
This thing has "hobby robotics" written all over it. Another art installation, the Growth Rendering Device is basically a machine that transcribes the shape of a growing pea shoot over the course of several days.

But look at this thing: it has servos for movement up and down and back and forth, and a very simple microcontroller on a bit of breadboard running the whole show. If this isn't a robot -- by hobby standards -- then I don't know what is.

http://forums.trossenrobotics.com/gallery/files/1/7/0/5/peashoots.png

So what appears to happen (I never saw it in operation) is that a mechanical timer switches on the device once a day. It performs a scan of the plant (a pea shoot growing in a liquid medium) in a series of horizontal rows, marking onto the paper underneath wherever the plant blocks the light. Not visible in this image, there is also a paper transport mechanism that advances the paper under the device after each scan.
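
Here's my reconstruction of that scan loop as a sketch -- not Bowen's actual code, and every hardware call is a placeholder:

ROWS, COLS = 40, 120                 # hypothetical scan resolution

def light_blocked(row, col):
    """Placeholder: True where the shoot occludes the light sensor."""
    return False

def move_to(row, col): pass          # placeholder: X/Y servo positioning
def pen(down): pass                  # placeholder: marking mechanism
def advance_paper(): pass            # placeholder: paper transport

def daily_scan():
    """One scan per day, triggered by the mechanical timer."""
    for row in range(ROWS):
        for col in range(COLS):
            move_to(row, col)
            pen(light_blocked(row, col))   # mark only where shaded
    advance_paper()                        # fresh strip for tomorrow's scan

daily_scan()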

The result is a completely autonomous record of the growth of a plant over time. Simple, but at the same time very cool.


Image from:

"Growth Rendering Device"
David Bowen
SIGGRAPH 2009, Art Show

metaform3d
08-09-2009, 08:42 PM
Anyone who thinks that houseplants are emotive beings, and yet has to deal with the embarrassing reality that they simply don't care what you say or do, needs this device. It's an artificial plant -- something well known if you've ever been to Denny's -- but with a twist. It has sensors and actuators. That makes it: a robot!

http://forums.trossenrobotics.com/gallery/files/1/7/0/5/mogasm.png

I have to admit the name of this art installation, "MSOrgm", made me constantly call it Morgasm. It stands for Motivational Sensitive Organism, which only reinforces the double-entendre for me. One can imagine what heights of pleasure one might someday enjoy from a plastic plant made from soda straws, origami connectors, muscle wire and Arduinos. If only one were lonely enough.


Image from:

"MSOrgm"
Scottie Huang
National Taiwan University of Science and Technology
SIGGRAPH 2009, Art Show

metaform3d
08-09-2009, 09:18 PM
Anyone who studies biology learns about anthropomorphism. This is the mental category error of assuming that anything with complex behaviors can be treated as if it were human. If you are studying the behavior of whelks, for example, don't assume that they do what they do based on "love" or "respect" or any of a thousand other complex human concepts. Think of them as worms that basically want to eat or screw, and sometimes just mess around in the mud.

On the other hand, our power to model the social world is based on assuming that the things around us have human-like features. So what if we add them? Perhaps using anthropomorphic metaphors is a useful way to engage our innate responses to create powerful user interface experiences.

http://forums.trossenrobotics.com/gallery/files/1/7/0/5/anthro.png

Or not. Maybe it's just another Japanese attempt to make machines seem more humanlike.

Basically the idea is to add eyes (small wireless LCD displays) and hands (tiny RC servo chains) to everyday devices to allow them to interact with their users in a socially standard, albeit cartoonish, manner. The microwave is the most advanced. If you show a food packet to its camera/eye it will point to its door and control panel and give you instructions (in Japanese, of course) on how to insert the food and program the cooking cycle.
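
As a toy sketch, the interaction pipeline might look something like this (my reading of the demo, not the authors' software; the recognizer, servo poses and function names are all invented):

def recognize_packet(image):
    """Placeholder for the vision step; returns a product id or None."""
    return "frozen_curry"

GESTURES = {"door": (90, 20), "panel": (45, 60)}   # invented servo poses

def point_at(target):
    pan, tilt = GESTURES[target]
    print("arm -> %s (pan=%d, tilt=%d)" % (target, pan, tilt))

def assist(image):
    product = recognize_packet(image)
    if product is None:
        return                      # nothing recognized, stay idle
    point_at("door")                # "put the food in here"
    point_at("panel")               # "then press these buttons"
    print("speak: cooking steps for " + product)

assist(image=None)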

This is absurd, of course. Perhaps for small children this is an appropriate type of interface -- turning complex and dangerous appliances into talking teddy bears. OK, perhaps not so appropriate. Nonetheless I rated this higher than other things that seem objectively better. Why? Because this is a good try. It takes our innate abilities and maps them into a simplified form that allows us access that we don't have through a normal instruction manual. Perhaps we'll hate anthropomorphized appliances as much as the damn paperclip in Office, but it's worth the research.


Image from:

"Anthropomorphization of a Space with Implemented Human-like Features"
Hirotaka Osawa
SIGGRAPH 2009 Proceedings

metaform3d
08-10-2009, 01:39 AM
Now that we've disposed of the frivolous matters we're starting to get to the real meat of robotics -- robot slaves! For any Doctor Who fan the idea of turning humans into robots who obey remote control is a fairly obvious concept. Consider the Robomen:

http://forums.trossenrobotics.com/gallery/files/1/7/0/5/250px-roboman.jpg

This always seemed like it required some sort of complex neural interface just to override the subject's free will, let alone provide them detailed instructions about where they need to go and what they need to do. Who knew it was as simple as tugging on their ears?

http://forums.trossenrobotics.com/gallery/files/1/7/0/5/rm1.png

Apparently, just like horses, humans respond to simple command forces applied to sensitive parts of their anatomy. Just fit them with a bicycle helmet with two degrees of freedom on each ear, attached through a system of soft, elastic bands, and you have a willing slave. Pull up and they stand, pull left or right and they turn, pull forward and they go, pull back and they stop. A simple matter of remapping the servos to a common aircraft remote control and you have a cheap robot slave.
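
The channel mixing is probably no more complicated than this sketch (my assumption of how the mapping could work, not taken from the paper):

def mix(throttle, steer):
    """Map stick inputs (-1 to 1) to pull angles (degrees) per ear."""
    base = 45 + 45 * throttle        # stick forward: both ears pull, go
    left = base - 30 * steer         # steering pulls one ear harder
    right = base + 30 * steer
    return left, right

print(mix(throttle=0.5, steer=1.0))  # steer right: right ear pulled harder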

http://forums.trossenrobotics.com/gallery/files/1/7/0/5/rm2.png

I cleverly declined having the slave machine attached to my brain --er, ears -- but based on my observations it appeared that I escaped a terrible fate. Anyone who willingly subjected themselves to the procedure subsequently felt compelled to follow the remote commands of their masters. Apparently we are just beasts of burden ready to be ridden.

This is knowledge that is terrible and yet great. I applaud and curse the grad students that built this awful device.


Images from:

"Pull-Navi: A novel tactile navigation interface by pulling the ears"
Yuichiro Kojima, et al.
The University of Electro-Communications
SIGGRAPH 2009 Proceedings

metaform3d
08-10-2009, 03:26 PM
At some level all robots take their cues from life. Some are utilitarian designs that allow machines to perform some task that had previously been the province of humans. Some mimic natural forms in order to take on their attributes, like geckos climbing walls or moths flying. Still others have whimsical behaviors to ape pets, monsters, humans or even -- ugh -- babies.

Rarely, however, do robots ever seem to transcend mere simulation of features of living things and pass into the realm of being living things themselves. Behold Hylozoic Soil.

http://forums.trossenrobotics.com/gallery/files/1/7/0/5/hylo1.png

This sprawling art installation covers an area probably 15 feet on a side. It hangs from the ceiling and visitors walk under and through it. The skeleton is a delicate crystalline lattice made from thousands of geometric members laser-cut from transparent acrylic, and supports a wide range of sensors, electronics and actuators. Proximity sensors detect passersby and the Arduino neurons interact to create an overall emotive state. Lacy fronds wave slowly or more frantically, driven by tiny changes in long strands of shape-memory wire. Bladders dangling from the lattice may fill or empty, and tiny motors spin long whips for inscrutable purposes.
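
How the "neurons" interact is pure speculation on my part, but a plausible scheme is each node mixing its own proximity reading with its neighbors' activity -- something like this sketch (Python for readability; all constants invented):

class Node:
    """One 'neuron': a proximity sensor plus an SMA-driven frond."""
    def __init__(self):
        self.activity = 0.0
        self.neighbors = []

    def step(self, proximity):
        spread = sum(n.activity for n in self.neighbors)
        spread /= max(1, len(self.neighbors))
        # local stimulus plus a share of the neighborhood's excitement
        self.activity = 0.7 * self.activity + 0.2 * proximity + 0.1 * spread
        return self.activity         # would set the frond's SMA duty cycle

a, b = Node(), Node()
a.neighbors, b.neighbors = [b], [a]
for prox in (1.0, 1.0, 0.0, 0.0):     # a visitor lingers near node a...
    print(a.step(prox), b.step(0.0))  # ...and the excitement ripples to b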

http://forums.trossenrobotics.com/gallery/files/1/7/0/5/hylo2.png

Created by professor of architecture Philip Beesley, it represents an exploration of the boundaries that architecture forms between humans and the natural world, and perhaps how those boundaries could be made permeable.

It's hard to express how amazing this thing really was. It didn't really do anything, it just sort of was. But it managed to evoke in me some of the same feelings of awe and wonder that can be felt in the presence of some beautiful part of nature. It wasn't quite like diving a coral reef or hiking in the woods, but it was eerily similar.

http://forums.trossenrobotics.com/gallery/files/1/7/0/5/hylo3.png


Hylozoic Soil
Philip Beesley, University of Waterloo
SIGGRAPH 2009 Art Show

metaform3d
08-10-2009, 04:14 PM
Building robots is hard. Not just because of the inherent challenges of dealing with technology at the limit of what's possible, but also because it requires meshing so many different disciplines. Electromechanical construction is just the beginning. There are power supply issues, and power electronics, and sensors and interfaces. There's dealing with noisy sensor data in a noisy environment, not to mention the entire sub-field of computer vision. Then there's software, from real-time microcontroller code all the way up the chain to graphical interfaces.

This team managed to put it all together. They have end-user software, high-level task software, computer vision, wireless control and a tight mechanical design. And it all ends up actually doing something practical: folding clothes.

http://forums.trossenrobotics.com/gallery/files/1/7/0/5/fb1.png

The user clicks and drags in the graphical UI (a) and defines a set of folds (b) for a particular garment. The task robot (c) then executes those folds in the real world (d). The robot shown in these images from the paper is not the final version. The one at the show had four wheels and only one wide gripper at the front. This shows an evolution of form following function that ended up with a much simpler and much more effective design.

http://forums.trossenrobotics.com/gallery/files/1/7/0/5/fb2.png

The interesting thing about this robot is that it has no onboard sensors. Precision control is accomplished using an overhead camera which tracks a unique pattern on the top of the robot. By matching this pattern the computer knows the location and orientation of the robot as well as the position and shape of the garment. This is the same technique that was used to drive the Roomba in the CRISTAL demo.
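
A minimal version of that tracking idea, assuming two detectable marker points on the robot's top (my illustration, not their implementation):

import math

def robot_pose(front_px, back_px):
    """Pose from two tracked marker blobs (pixels): center plus heading."""
    cx = (front_px[0] + back_px[0]) / 2
    cy = (front_px[1] + back_px[1]) / 2
    heading = math.atan2(front_px[1] - back_px[1],
                         front_px[0] - back_px[0])
    return cx, cy, heading

print(robot_pose((420, 300), (390, 310)))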

The garment is placed flat on a black surface. The computer locates it and drives the robot on a circular arc to get to the right location to start the first fold. It then spins the robot around some, which may be to fine-tune the rotational precision of their feature-tracking system. The robot then drives forwards slowly with its lower jaw scraping the ground until it's just under the edge of the fabric. It closes the jaw, lifts the fabric and drives forwards to make the fold. When the fold is complete it releases its grip and shakes the fabric free. It then orbits the clothes again for the next fold.
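
Written out as a little state machine, the sequence I watched looks something like this sketch (my paraphrase; every hardware call is a placeholder for their vision and drive layers):

def locate_garment(): return (1.0, 0.5, 0.0)  # placeholder: overhead camera
def drive_arc_to(pose): pass                  # placeholder: path control
def spin_to_refine(): pass                    # placeholder: re-register marker
def creep_under_edge(): pass                  # jaw scrapes under the fabric
def jaw(closed): pass                         # placeholder: gripper
def lift_and_drive(dist): pass                # placeholder: make the crease
def shake_free(): pass                        # placeholder: release the cloth

def execute_fold(fold):
    start = locate_garment()        # camera finds the cloth and the robot
    drive_arc_to(start)             # circular arc to the fold's start edge
    spin_to_refine()                # fine-tune heading via the top pattern
    creep_under_edge()
    jaw(closed=True)
    lift_and_drive(fold["length"])  # carry the edge over to make the fold
    jaw(closed=False)
    shake_free()

for fold in ({"length": 0.3}, {"length": 0.2}):  # folds defined in the GUI
    execute_fold(fold)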

You can try to imagine all that, or you can just watch it here.


http://www.youtube.com/watch?v=b9gAVRtw2_4


Images from:
"Graphical Instruction for A Garment Folding Robot"
Yuta Sugiura, et al.
SIGGRAPH 2009 Proceedings

Alex
08-11-2009, 09:00 AM
Can I just say.... WOW! I'm dealing with some serious envy right now!

I didn't even know about this event, so thanks a ton for sharing your highlights with all of us:D

I for one welcome our ear-pulling overlords!