The Ultimate Kinect thread



RobotAtlas
11-05-2010, 02:58 PM
I've had a couple designs I was playing around with for a $20 projector. The projected stereo is actually getting a lot of usage these days -- I've even seen discussion about using it as the default 3D perception input to the costmap during navigation rather than the tilting lidar.

I was just reading the Kinect teardown, and it uses an IR projector:
http://www.ifixit.com/Teardown/Microsoft-Kinect-Teardown/4066/2

Edit:
This is pretty cool:

Our friends at Adafruit (http://www.adafruit.com/) have posted a $2,000 bounty (http://www.adafruit.com/blog/2010/11/04/the-open-kinect-project-the-ok-prize-get-1000-bounty-for-kinect-for-xbox-360-open-source-drivers/) for the first team to write an open source driver for Kinect. The Wiimote has been tremendously useful for our hobby robotics projects, and the Kinect's advanced sensors could really jump-start robotics research.

SK.
11-06-2010, 02:28 PM
Kurt Konolige's paper about projected texture stereo is also worth a read (like basically all publications you can find on the Willow Garage website). It's pretty intriguing how much thought went into the PR2's projector:
http://www.willowgarage.com/sites/default/files/ptext.pdf

Also very interesting: Video of Kinect's projected IR points:
YouTube - Kinect with nightshot

SK.
11-11-2010, 12:58 AM
Update:
There is a first (very basic) working Kinect Driver:
http://git.marcansoft.com/?p=libfreenect.git

...with the author deservedly winning the Adafruit prize:
http://www.adafruit.com/blog/2010/11/10/we-have-a-winner-open-kinect-drivers-released-winner-will-use-3k-for-more-hacking-plus-an-additional-2k-goes-to-the-eff/

SK.
11-11-2010, 01:03 AM
Two months and we'll know a little more. Will be interesting if (or when) Kinect can be hacked and how good the sensor really is.
The first answer is: yes, and now (http://www.adafruit.com/blog/2010/11/10/we-have-a-winner-open-kinect-drivers-released-winner-will-use-3k-for-more-hacking-plus-an-additional-2k-goes-to-the-eff/)

The second question will likely be answered in the coming days.

SK.
11-11-2010, 04:27 AM
Just tested Kinect, I have to say the data look very impressive.

Example image:
http://img809.imageshack.us/img809/5162/kinectscreenshot.png
Note that even small depth differences can be seen quite clearly. Haven't done a systematic evaluation though.

lnxfergy
11-11-2010, 08:41 AM
Just tested Kinect, I have to say the data look very impressive.

Example image:
http://img809.imageshack.us/img809/5162/kinectscreenshot.png
Note that even small depth differences can be seen quite clearly. Haven't done a systematic evaluation though.

Wow... I'm still amazed they hacked it that fast. I was trying to pick one up yesterday, but haven't found them in stock anywhere local here. Question -- how far away is that backpack? (I'm wondering what the minimum range is... I know the specs say 6ft, but does it seem like the Kinect might be usable for table-top manipulation?)

-Fergs

jotheberlock
11-11-2010, 09:08 AM
I too am intending to pick up a Kinect. Presumably you can't have two robots with Kinects in the same space? (I'd imagine the projected IR grids would interfere with each other). Which is a shame, because it seems like this'd be great for autonomous mech warfare bots.

SK.
11-11-2010, 09:26 AM
Wow... I'm still amazed they hacked it that fast. I was trying to pick one up yesterday, but haven't found them in stock anywhere local here. Question -- how far away is that backpack? (I'm wondering what the minimum range is... I know the specs say 6ft, but does it seem like the Kinect might be usable for table-top manipulation?)

-Fergs
0.5m is definitely the minimum range; from 0.5m to 1.0m there are occasional dropouts or oversaturation when illuminating surfaces head-on. From 1.0m outwards you get very dense depth images. The backpack is probably around 1.0m away (from the color mapping one could of course read off exactly what distance the Kinect reports ;) ).

SK.
11-11-2010, 09:28 AM
I too am intending to pick up a Kinect. Presumably you can't have two robots with Kinects in the same space? (I'd imagine the projected IR grids would interfere with each other). Which is a shame, because it seems like this'd be great for autonomous mech warfare bots.
Haven't tried that yet, but I also strongly suspect this to be the case. Wii "sensor bar" IR lights (apparently a different wavelength) don't disturb the Kinect, by the way; I have one lying around and pointed it right at the Kinect IR cam.

RobotAtlas
11-11-2010, 09:40 AM
After the Kinect was reverse-engineered yesterday, I thought it was time to create one of those Ultimate threads for it in the great tradition of TRC.

One of the main questions I had is what the useful range of the Kinect is.
So I went to 4 stores yesterday and all of them were out of the standalone Kinect, so I was _forced_ to buy the whole system. The whole system does not include the USB adapter though, so it has to be ordered separately.

To test the long range, I tried looking at it with the night mode of my Sony camcorder. In a dark room the dots from the IR projector are visible as far as 27 feet away. I don't know if the Kinect camera has the same sensitivity as the Sony though. Considering that the Kinect draws 12 W, I suspect the majority of that goes to the projector.

I think the Kinect will be good for SLAM. If the matcher worked in 3D instead of 2D, it would have a much easier time matching the data, even in long uniform hallways. It would probably be too computationally intensive though.

RobotAtlas
11-11-2010, 09:54 AM
Guys, let's move this discussion to the Kinect thread. This one is for the PlayStation Eye.

RobotAtlas
11-11-2010, 10:00 AM
I too am intending to pick up a Kinect. Presumably you can't have two robots with Kinects in the same space? (I'd imagine the projected IR grids would interfere with each other). Which is a shame, because it seems like this'd be great for autonomous mech warfare bots.

Wow, jotheberlock, slow down. :) They just hacked it yesterday and you're already talking about mech warfare. I'm sure people will find a way to "share" texturing: as long as the surface is textured, it shouldn't matter who textured it. Timing is another way of sharing; you don't really need that much data 30 (or 60?) times per second.

RobotAtlas
11-11-2010, 10:09 AM
sensors used in RGB-D are around $10,000, right?

No RobotNV, they are $150 street. :)

jotheberlock
11-11-2010, 10:42 AM
To test the long range, I tried looking at it with the night mode of my Sony camcorder. In a dark room the dots from the IR projector are visible as far as 27 feet away. I don't know if the Kinect camera has the same sensitivity as the Sony though. Considering that the Kinect draws 12 W, I suspect the majority of that goes to the projector.

Hm, so the Kinect only works at all if you have power hooked up to it as well as USB? I was hoping the power connection was only for the pan/tilt. What sort of voltage does it take?

lnxfergy
11-11-2010, 01:58 PM
No RobotNV, they are $150 street. :)

Well, let's not get overconfident here -- 1.0m as a minimum range is certainly limiting. In particular, this is not going to work too well for picking up objects off a table -- it will however work quite well for finding people in an environment, or avoiding driving into walls....

-Fergs

lnxfergy
11-11-2010, 02:04 PM
Actually, as long as you don't completely wipe out the texture (as in, all parts of the surface are covered with IR light), it should work. I would assume that the device is only doing typical block matching -- as most other algorithms for stereo disparity are intractable in realtime systems. So, the IR light is simply an aid to the block matching, the exact pattern is somewhat irrelevant.

-Fergs
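
For anyone wondering what "typical block matching" actually amounts to: for every pixel in one image you slide a small window along the same row of the other image and keep the horizontal shift with the lowest sum of absolute differences. Here is a minimal sketch in plain C -- my own generic illustration, not anything taken from the Kinect or libfreenect:

#include <limits.h>
#include <stdlib.h>

/* Minimal sum-of-absolute-differences block matcher (illustration only).
   left/right are rectified 8-bit grayscale images of size w x h;
   disparity receives the best horizontal shift for each pixel. */
void block_match(const unsigned char *left, const unsigned char *right,
                 int w, int h, int win, int max_disp, int *disparity)
{
    int r = win / 2;
    for (int y = r; y < h - r; y++) {
        for (int x = r; x < w - r; x++) {
            int best_d = 0;
            long best_cost = LONG_MAX;
            for (int d = 0; d <= max_disp && x - d >= r; d++) {
                long cost = 0;
                for (int dy = -r; dy <= r; dy++)
                    for (int dx = -r; dx <= r; dx++)
                        cost += abs(left[(y + dy) * w + (x + dx)] -
                                    right[(y + dy) * w + (x + dx - d)]);
                if (cost < best_cost) { best_cost = cost; best_d = d; }
            }
            disparity[y * w + x] = best_d;
        }
    }
}

The projected IR texture simply guarantees the windows always contain something distinctive to match on, which is why the exact pattern would not matter for this kind of approach.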

SK.
11-11-2010, 02:12 PM
Actually, as long as you don't completely wipe out the texture (as in, all parts of the surface are covered with IR light), it should work. I would assume that the device is only doing typical block matching -- as most other algorithms for stereo disparity are intractable in realtime systems. So, the IR light is simply an aid to the block matching, the exact pattern is somewhat irrelevant.

-Fergs
I don't think the Kinect works that way; it's not classic stereo with texture projection. It only has one IR cam. The pattern gets projected by the projector, which is offset about 7.5 cm from the IR camera. Knowing the exact pattern and the offset, the single IR cam can estimate the distance of the projected points, since their apparent offset varies with distance. I strongly suspect this indeed means that a disturbance from another, similar pattern will seriously hamper operation. I guess it won't take long until someone points two Kinects at something simultaneously though, so we'll know more soon.
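
The geometry here is ordinary triangulation, with the projector standing in for the second camera of a stereo pair: depth = focal length (in pixels) times baseline, divided by disparity (in pixels). A toy calculation in C -- the ~7.5 cm baseline is the offset mentioned above, but the focal length is just an assumed plausible value, not a calibrated one:

#include <stdio.h>

int main(void)
{
    /* depth = f_px * baseline / disparity_px (triangulation).
       The 0.075 m baseline is the projector-to-IR-camera offset mentioned
       above; f_px is only an assumed, plausible focal length. */
    double f_px = 580.0;      /* assumed IR camera focal length, pixels */
    double baseline = 0.075;  /* metres */
    double disp;

    for (disp = 10.0; disp <= 90.0; disp += 20.0)
        printf("disparity %4.0f px  ->  depth %.2f m\n",
               disp, f_px * baseline / disp);
    return 0;
}

Small disparities correspond to large depths, which is also why the depth resolution falls off with distance.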

UncleBob
11-11-2010, 03:32 PM
I thought this thread was about hooking up a Kinect to a robot.

RobotAtlas
11-11-2010, 04:43 PM
Absolutely it's about connecting Kinect to a robot.

parallax
11-11-2010, 07:22 PM
My lab is picking up a Kinect to play with after seeing the open source hacked driver (good find, SK!). At $150 it's definitely worth exploring the possibilities. As Fergs pointed out, it will certainly have limited functionality in terms of some of the features we may want for a robot, but there may be ways in which this could prove applicable. My personal thoughts are for using it to aid a stationary robotic platform (like a robot receptionist or something of the sort) in segmenting individual people standing in front of it from the background. This usage is pretty similar to what it was intended for, so my hope is that it should perform okay in this role. If it can be done with a low overhead it would definitely be worthwhile. Plus there IS that nifty microphone array... Anyway, I'll make sure to report back here and share when we have some hard data on it.

-parallax

SK.
11-12-2010, 01:16 AM
I wouldn't be so cautious about applications; I see no reason not to mount the sensor on mobile platforms, or for it not to be useful during motion. I already did some testing and it looks quite usable. For fast movements there might be a need for rolling-shutter compensation (it's visible in the RGB data even for slow movements, and is probably also there for the IR CMOS sensor). You basically get (very dense, high-resolution) depth data without the cost of stereo processing. This should be relatively easy to adapt for visual SLAM applications using bundle adjustment (to get depth data per feature), and it gets even more interesting once one tries to additionally leverage the dense depth data.
As posted before, mapping leveraging the advantageous properties of such a sensor can be seen here:
http://www.hizook.com/blog/2010/03/28/low-cost-depth-cameras-aka-ranging-cameras-or-rgb-d-cameras-emerge-2010

A ROS driver is in the works, but I guess it will take some time till the calibration issues are identified and consistently dealt with.
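
To make the "dense depth without the cost of stereo processing" point concrete: once you have per-pixel depth in metres, a point cloud is just the pinhole camera model run backwards. A rough sketch in C -- the intrinsics here are assumed placeholder values, not calibrated Kinect numbers:

#include <stdio.h>

/* Back-project pixel (u, v) with depth z (metres) into camera coordinates
   using a simple pinhole model.  fx/fy/cx/cy are assumed placeholders,
   not real calibration values. */
static void pixel_to_point(double u, double v, double z,
                           double *px, double *py)
{
    const double fx = 580.0, fy = 580.0;  /* assumed focal lengths, pixels */
    const double cx = 320.0, cy = 240.0;  /* assumed principal point */
    *px = (u - cx) * z / fx;
    *py = (v - cy) * z / fy;
}

int main(void)
{
    double x, y;
    pixel_to_point(400.0, 300.0, 1.5, &x, &y);  /* some pixel seen at 1.5 m */
    printf("point: x=%.3f  y=%.3f  z=%.3f (metres)\n", x, y, 1.5);
    return 0;
}

Do this for every valid depth pixel and you have the dense cloud a feature-based or dense SLAM front end could consume.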

UncleBob
11-12-2010, 02:43 AM
What I meant was not to physically put the Kinect on top of the robot, but to place it on a table; then you walk around in front of the table and the biped mimics you.

DresnerRobotics
11-12-2010, 10:44 AM
Hey guys,

While I appreciate the humor, 3 posts of silly pictures in 1 page of thread that don't contribute to the actual discussion is a bit prohibitive to fostering a good discussion of the Kinect. The purpose of these Ultimate threads is to create a decent repository of information and discussion on the topic. I've deleted the pictures and also gone through and swept up the Kinect discussions that were in other threads, in hopes of organizing things a bit.

/modhat off

defwheezer
11-12-2010, 11:42 AM
Sorry 'bout that, will try not to get carried away again.

defwheezer
11-12-2010, 11:55 AM
My personal thoughts are for using it to aid a stationary robotic platform (like a robot receptionist or something of the sort) in segmenting individual people standing in front of it from the background. This usage is pretty similar to what it was intended for, so my hope is that it should perform okay in this role. If it can be done with a low overhead it would definitely be worthwhile. Plus there IS that nifty microphone array...
-parallax

Exactly what I was hoping it could be used for -- really accurate people/face 3D tracking to assist a virtual (or animatronic) interactive kiosk that can discriminate between people and interpret their actions to give appropriate responses. Looking forward to seeing what you guys can do with it!

defwheezer
11-12-2010, 03:07 PM
EDIT: deleted....

[Backs slowly away...]

jotheberlock
11-12-2010, 03:45 PM
"What do we (all) want?
Open source drivers for this cool USB device, the drivers and/or application can run on any operating system – but completely documented and under an open source license. To demonstrate the driver you must also write an application with one “window” showing video (640 x 480) and one window showing depth. Upload all of this to GitHub.

You're a little behind the times - this bounty has already been won, hence this thread being started.

RobotAtlas
11-13-2010, 02:54 PM
The accuracy of the Kinect is pretty amazing. I tried using kinect_node_subset.bag from http://www.ros.org/wiki/kinect_node in rviz and this is what it shows. Look at the end of the video for the shape of the head.

http://www.youtube.com/watch?v=FbzrDJMyWYg

jotheberlock
11-13-2010, 03:36 PM
I've just got one too, and indeed it looks like the best thing since sliced bread for rangefinding. Unfortunately, it plus a FitPC2 seems to be a bit too heavy for my AX-12-based quad, which has me wondering if I should switch to a hex or rover :/

RobotAtlas
11-13-2010, 04:10 PM
That new PhantomX hexapod that Trossen just started selling can easily carry 4 lbs.
That should be enough, even considering extra battery requirements. What I like about a hexapod is that when it walks it always has at least 3 legs on the ground. I'm definitely getting a hexapod now.

RobotAtlas
11-14-2010, 02:52 PM
Quick update on the weight. These were not taken with a digital scale, so the numbers are approximate.
The Kinect sensor is 500 grams (including the long USB cable); the USB cable by itself is about 100 g. The MS Kinect USB adapter with its 12V/1A wall power supply is 200 g.
A FitPC2 with an HD inside is 400 g.
Total so far: 500 + 200 + 400 = 1100 g, about 1.1 kg (2.4 lb). Not light, but not out of the question for mobile robots.

parallax
11-15-2010, 07:55 PM
The head of my lab just made good on her promise to pick one up for us. She was so excited about it that she couldn't wait to buy one- she picked it up while on her way back from a conference in DC. Here is a screenshot she took less than ten minutes after unboxing it in her hotel room:

http://forums.trossenrobotics.com/gallery/files/3/9/0/0/screenshot.png

Can't wait until we can do some real testing with it. God I love my professors! As promised, I will report back here when we have something more concrete to share.

jotheberlock
11-15-2010, 08:02 PM
A heads up, by the way, unless they've fixed it there's a bug in the libfreenect code as shipped which had me scratching my head for half an hour or so. If you look at the RGB camera code in c/lib/cameras.c at line 164, you'll see

for (y=0; y<480; y++) {
    for (x=0; x<640; x++) {

And later on in the code there's a line which references '3*(i-640)]' where i is (y*640)+x.

This is a negative index into the buffer when y is 0 - it doesn't show up on the demo app out of pure luck, but it certainly crashed my code quickly enough. I hacked around it by changing line 164 to

for (y=1; y<480; y++) {

which I would assume makes the top RGB scanline garbage, but works for my purposes. The devs are aware of the problem and working on substituting a better algorithm.
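
To see why that index goes negative, here is the arithmetic in isolation -- my own sketch of the failure mode, not the actual libfreenect code:

#include <stdio.h>

int main(void)
{
    /* With i = y*640 + x, the expression 3*(i - 640) points one full row
       before the start of the buffer whenever y == 0. */
    int y, x;
    for (y = 0; y < 2; y++) {
        for (x = 0; x < 640; x += 639) {
            int i = y * 640 + x;
            int idx = 3 * (i - 640);
            printf("y=%d x=%3d -> index %6d %s\n", y, x, idx,
                   idx < 0 ? "(before the buffer!)" : "");
        }
    }
    return 0;
}

Starting y at 1, as described above, simply guarantees there is always a previous row to reference.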

RobotAtlas
11-16-2010, 03:36 PM
I thought it was worth sharing this blog post I stumbled upon today about how the Kinect works:
http://gilotopia.blogspot.com/2010/11/how-does-kinect-really-work.html

The guy is pretty excited about it:


Researchers would pay thousands until now for hardware that wasn’t half as good as this. The community has embraced this thing with open arms. Microsoft has said they’ll release an SDK to work with their wonderful software (hurry up will you?). Until now computers couldn’t actually see. Even the limited info they could receive wasn’t really that easy to interpret so it wasn’t used. The Kinect has given computers more than eyes, it’s given them vision.

One question is: Why livingroom gaming as the first target? It seems like an odd first choice. My personal theory is that it’s a trojan horse tactic. First get it into everything then enable everyone to write software for it. Developers, developers, developers.

One thing is certain. This has changed the face of computing and robotics forever.

jRaskell
11-17-2010, 08:10 AM
One question is: Why livingroom gaming as the first target? It seems like an odd first choice.


Seems like a pretty obvious first choice to me. It's simply a matter of economies of scale.

I'm sure Microsoft sank a huge amount of R&D money into the Kinect. If they had developed it specifically for the robotics market, the simple fact is they'd be dealing with a market that would sell tens of thousands, maybe hundreds of thousands of units at most. By targeting their game console platform, they're dealing with a market of millions. That allows them to take advantage of bulk purchasing/manufacturing/distribution as well as spreading R&D and overhead costs over millions of sales instead of thousands. There's also a tangible benefit to their overall console business to boot.

If they had developed it specifically for the robotics market, the pricetag would have been significantly higher, probably 10 times or more what it is, which would have only further reduced the market segment.

cdop
11-18-2010, 07:33 AM
Somebody already put it on a robot:

http://singularityhub.com/2010/11/17/hacked-irobot-uses-xbox-kinect-to-see-the-world-obey-your-commands-video/

Pi Robot
11-18-2010, 12:17 PM
For the RoboRealm users out there, a new module for the Kinect has just been released:

http://www.roborealm.com/help/Microsoft_Kinect.php

--patrick

RobotAtlas
11-19-2010, 09:32 PM
Good news: the Kinect consumes around 4 W and can run from as low as 8V up to 12.6V (or higher).
See details here:
http://kinect-with-ros.976505.n3.nabble.com/Re-Ros-kinect-Power-and-voltage-limits-td1933722.html

RobotAtlas
12-21-2010, 04:29 PM
I remember seeing somewhere an older version of Pi Robot using a mirror ball to increase the field of view of a camera.

I wonder if the same approach could be used with the Kinect to make it usable for SLAM?

Edit: A little improvement:
Imagine a horizontal hexagon. Each side represents the bottom edge of a mirror mounted at 45 degrees. The Kinect would be pointed upward, facing this combination of 6 mirrors.
Side view showing 2 mirrors of 6:


\ /
mirror -\ /- mirror
\_ _/


__o_o__0_
| Kinect |
-----------

Eight mirrors might work better.

Pi Robot
12-22-2010, 11:31 AM
Hey RobotNV,

Interesting idea. However, unless I'm missing something, I'm guessing that the Kinect's depth sensing would not work after being reflected this way. Wouldn't the projected IR pattern get distorted in ways that would mess up the depth calculations?

--patrick

RobotAtlas
12-22-2010, 12:07 PM
I searched YouTube for "Kinect mirror" and it looks like the Kinect works with mirrors. The question is how well.
The depth distortion can be dealt with at a higher level.

Pi Robot
12-22-2010, 02:17 PM
Excellent! BTW, do we know that it is necessary to do all this to make SLAM work with the Kinect? I picked one up last week and have played with it just enough on Pi Robot to prove I can start building a map using ROS and Tully Foote's pointcloud-to-laser (http://www.ros.org/wiki/pointcloud_to_laserscan) package and gmapping. But I don't have it set up for battery power yet so I haven't done a full test while driving the robot around to see how much the limited field of view affects the process. Surely Fergs has already done all this in his spare time and mapped out the entire Albany campus? ;)

--patrick

lnxfergy
12-22-2010, 03:17 PM
Excellent! BTW, do we know that it is necessary to do all this to make SLAM work with the Kinect? I picked one up last week and have played with it just enough on Pi Robot to prove I can start building a map using ROS and Tully Foote's pointcloud-to-laser (http://www.ros.org/wiki/pointcloud_to_laserscan) package and gmapping.

The 57 degree field of view is certainly limiting for the Kinect to be used in SLAM. I really have to wonder how well it will work for laser scan matching using the pointcloud-to-laser package (especially when facing forward -- for anyone following along with my Neato/Hokuyo/PML work, the frontal ~90 degrees of view is almost always out of range in larger areas, meaning the pointcloud-to-laser package might not be usable at all in scan matching, and only usable for obstacle avoidance).
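
For anyone who hasn't looked inside that kind of conversion: the general idea, independent of how the actual pointcloud_to_laserscan package implements it, is to keep, for each bearing, the closest point that falls inside some height band. A rough sketch in C (the height band and bin count are arbitrary values for illustration):

#include <float.h>
#include <math.h>

/* Collapse 3D points (x forward, y left, z up, metres) into a planar
   pseudo-scan: one range per angular bin, keeping the closest point inside
   a height band.  Only the general idea, not the real package internals. */
#define BINS 57   /* roughly 1-degree bins across the Kinect's ~57 deg FOV */

void cloud_to_scan(const double *x, const double *y, const double *z,
                   int n, double ranges[BINS])
{
    const double fov = 57.0 * 3.14159265358979 / 180.0;
    int i, b;

    for (b = 0; b < BINS; b++)
        ranges[b] = DBL_MAX;               /* "no return" in this bin */

    for (i = 0; i < n; i++) {
        double angle, r;
        if (z[i] < 0.05 || z[i] > 1.0)     /* arbitrary height band */
            continue;
        angle = atan2(y[i], x[i]);
        if (fabs(angle) > fov / 2.0)
            continue;
        b = (int)((angle + fov / 2.0) / fov * (BINS - 1));
        r = sqrt(x[i] * x[i] + y[i] * y[i]);
        if (r < ranges[b])
            ranges[b] = r;
    }
}

The scan matcher then only ever sees those few dozen ranges, which is exactly why a narrow field of view (or out-of-range returns) hurts so much.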

Out of curiosity Patrick, what is the grid size you have on your SLAM experiments? (the projected grid in RViZ, not the delta parameter for gmapping). I ask because I can't quite tell the size of your apartment. I know I have serious issues with the Neato/Hokuyo/PML in areas where the scan is out of range, which seems to happen in my lab and at home.

On the subject of Omnidirectional Kinect -- it is an interesting hypothesis. In theory, we should be able to unwrap the depth image, and then use some of the PCL utilities to remove the ground plane, leaving only the obstacles around the robot -- this would yield a much better cloud for the pointcloud-to-laser work (assuming noise isn't too much -- and I'm not sure about that).

The one major problem we might have is: the range and camera separation! The laser and camera are about 4" separated -- how do we get them both on an omnidirectional mirror? (and preferably the same one), without being so far away from the mirror that the camera image barely is on the mirror (in other words, without most pixels being of stuff behind the mirror).


But I don't have it set up for battery power yet so I haven't done a full test while driving the robot around to see how much the limited field of view affects the process. Surely Fergs has already done all this in his spare time and mapped out the entire Albany campus? ;)

We had these silly things called finals (well, mostly final papers) over the past week, so I haven't gotten much else done. My current list of things to do:


Collect a new dataset (as a ROS bag) before the holidays -- with a Neato robot, the Neato laser, and the Hokuyo all on one platform, for better ability to validate which issues are caused by the Neato odometry and which are caused by the laser itself (hopefully, this will happen this evening).
I'm getting a number of additional SLAM algorithms ported into ROS currently, as I'm really of the opinion that gmapping is just not the right tool for short-range lasers in medium or larger size environments.
I've really got to track down a bug in the ArbotiX-ROS code so that I can get 0.4.0 out. And also so that I can get some videos posted of ArbotiX-based arm navigation.
And yes, I need to get up the nerve to cut that connector off the Kinect and put it on battery. I really was hoping to get a PrimeSense SDK kit, but.... not looking hopeful.

-Fergs

Pi Robot
12-22-2010, 08:17 PM
Hey Fergs,

Terrific analysis. I believe my RViz grid size was set to 0.2m for the SLAM videos so, for example, the width of the living room is 12 feet although there is a sofa that projects about 3' into the room on the far side (top of the video). BTW, looking at the pointcloud_to_laserscan source (cloud_to_scan.cpp), the max_range is set to 10.0 m. Does the Kinect really have that long a range?

--patrick

lnxfergy
12-22-2010, 08:44 PM
Hey Fergs,

Terrific analysis. I believe my RViz grid size was set to 0.2m for the SLAM videos so, for example, the width of the living room is 12 feet although there is a sofa that projects about 3' into the room on the far side (top of the video). BTW, looking at the pointcloud_to_laserscan source (cloud_to_scan.cpp), the max_range is set to 10.0 m. Does the Kinect really have that long a range?

--patrick

I seem to think it's closer to 4 or 5m (at least, at usable resolutions), but I might be wrong. Even with 4-5m though, you've only got that 57 degree field of view....

Now, one thing I failed to mention above is also the surfaces in play. So far, I'm almost liking the Neato laser better than the Hokuyo because it works way better with shiny surfaces or high angles of incidence (I believe Nathan Monson used a different term for this in his presentation, and I really need to go back and re-watch to catch that term, because I believe it was more descriptive than what I've been calling it). I'm thinking that the Kinect should be more like the Neato than the Hokuyo for household surfaces.

-Fergs

RobotAtlas
12-22-2010, 10:15 PM
I just measured the distances between the Kinect IR projector and the cameras: they are 2" and 3". I think the camera that is 3" away is the RGB camera, and it's optional.

lnxfergy
12-22-2010, 10:28 PM
I just measured the distances between the Kinect IR projector and the cameras: they are 2" and 3". I think the camera that is 3" away is the RGB camera, and it's optional.

Still -- let's assume a 4" reflector (which is actually bigger than what I think most of us are running). You really do want the camera located directly below the center of the mirror; given the ~60 deg FOV, when the camera is more than 4" away from the reflector, many pixels are wasted (not on the reflector, but looking past it). At 4" away, that 2-3" separation is a huge angular difference.
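
To put a number on "huge": the angle subtended by the offset, seen from the reflector, is roughly atan(separation / distance). A quick check in C:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Angle subtended by the projector/camera offset as seen from a
       reflector 4 inches away: roughly atan(separation / distance). */
    double distance = 4.0;          /* inches to the reflector */
    double seps[2] = { 2.0, 3.0 };  /* projector-to-camera offsets, inches */
    int i;

    for (i = 0; i < 2; i++)
        printf("%.0f\" offset at %.0f\" -> about %.0f degrees\n",
               seps[i], distance,
               atan(seps[i] / distance) * 180.0 / 3.14159265358979);
    return 0;
}

That works out to roughly 27 and 37 degrees -- the projector and camera would see very different parts of a small mirror.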

There may be a way to do it -- but I don't think that the traditional type of omnidirectional lens will be well suited.

-Fergs

RobotAtlas
12-22-2010, 11:29 PM
At 4" away, that 2-3" separation is a huge

After drawing a few pictures I see you are right: putting a mirror at an angle shifts the projector's FOV away from the IR camera's FOV (assuming that in the device they have parallel axes).
So it doesn't look good.

Except that the projector and camera are on the same horizontal axis, so a mirror can be rotated around that axis without distorting the depth.
So here's a new design covering ~180 degrees:

http://forums.trossenrobotics.com/attachment.php?attachmentid=2261&stc=1&d=1293081528

The Kinect is positioned vertically and looks ahead through the gap between the two mirrors.
The two mirrors allow the Kinect to see to the left and to the right. If you take the bottom depth edge of the top mirror and the top depth edge of the bottom mirror, and the gap is not very wide, then this setup will work for relatively vertically uniform environments. The RGB info can help with matching too.

elaughlin
01-17-2011, 11:04 PM
Don't know if this is handy to anyone here. I just saw it on sale; I didn't know they had a wall mount, so I'm posting it for people using the Kinect who might want an easy mounting solution.

Xbox 360 Kinect Wall Mount

lnxfergy
01-17-2011, 11:43 PM
Don't know if this is handy to anyone here. I just saw it on sale; I didn't know they had a wall mount, so I'm posting it for people using the Kinect who might want an easy mounting solution.

Xbox 360 Kinect Wall Mount (http://www.amazon.com/gp/product/B0040Y3K9U/sr=81/qid=1295191351/ref=olp_product_details?ie=UTF8&me=&qid=1295191351&sr=8-1&seller=)

Another cool mount http://www.iheartrobotics.com/2010/12/thing-month-12-trik-tripod-adapter-for.html (I'm planning on doing this tomorrow or Wednesday for my Kinect).

-Fergs

lnxfergy
01-18-2011, 05:18 PM
Another cool mount http://www.iheartrobotics.com/2010/12/thing-month-12-trik-tripod-adapter-for.html (I'm planning on doing this tomorrow or Wednesday for my Kinect).

-Fergs

I printed one of these today on the Makerbot (had to use the heated build platform, as it's pretty big).

-Fergs

parallax
01-23-2011, 03:24 PM
I just wanted to share what a friend of mine in my lab has been up to working with the Kinect in his HRI research:

Here is a video...


http://www.youtube.com/watch?v=GdepIXZTJsw&feature=player_embedded


...and here is an article with source code (http://www.ros.org/wiki/openni/Contests/ROS%203D/Humanoid%20Robot%20Control%20and%20Interaction) written by him.

Enjoy!