Experience with ROS Arm Navigation and Grasping?



Pi Robot
12-14-2010, 08:37 AM
Hello ROS Fans,

Has anyone succeeded in getting the ROS arm navigation stack and/or object manipulator stack running on their own robot arm? I've got David Lu's arm kinematics package running on Pi Robot's 5-DOF arm and now I'd like to take the plunge into the navigation and manipulation stuff. Fergs, do you have these running on your Armadillo 2WD? I'm doing my ROS coding in Python but I can read C++ if I have to.

--patrick

maitri982
01-30-2011, 08:55 PM
Hi Patrick,

I see you don't have any responses yet to this post...I wish I could help. I can point you to a resource that you probably already found:


http://www.ros.org/wiki/arm_navigation/Tutorials/Running%20arm%20navigation%20on%20non-PR2%20arm


Of course all would be easier if we had the laser scanner and arm from the PR2 :)

I think this will be challenging for sure. It involves a lot of areas, including all the kinematics and motion planning, and also locating an object in Cartesian space to know *where* you want your arm to go in the first place.


I also don't know if the ROS planners presume laser scan input or forearm cameras to work.

You have done some really great stuff thus far, so I know you can conquer this eventually. FYI... I am about to watch your presentation to the club out there on Pi and ROS.

Best,
Kevin Cramer

maitri982
01-31-2011, 08:15 AM
Hi Again Patrick,

I just watched your talk on "ROS for the rest of us", and I first want to say I apologize if you thought my message above was condescending in any way. You clearly have dug through the innards of ROS and are aware of all the issues with arm motion kinematics...far better than I am.

A couple of other thoughts:

* Your goals and mine for robots are identical. I am not so much interested in robots as toys or for battling as I am in practical robotics. I want something with real utility, such as navigating and retrieving something for me.

* You have about a 10-year head start on me :) I wish you were geographically closer, as it would be fun to collaborate on building such a robot.

* I echo your thoughts on ROS being the way to go. If one wanted to just do robot navigation, they could do this with some effort using a laser or stereo cams and writing the SLAM code themselves. But once you throw in robot arms, things get much more complex, as now you need object recognition, 3D point clouds, kinematics, object pose estimation, grasping algorithms, door opening algorithms and more! So this is where you can take advantage of all the work already done in ROS to accomplish the "fetch me a beer" task...although it's still hard to piece it together unless you have a PR2.

* I was wondering what laser you have on your base. You mention a laser in your presentation but I don't see it listed on your site in the hardware section unless I missed it.

* My personal opinion is that a laser is still essential at the base. The Kinect has limitations in bright rooms and is also more nearsighted. So I think a laser at the base for SLAM and a Kinect at the head for vision and 3D point clouds for object recognition and grasping may be the way to go. Ideally, Neato will market their laser scanner at incredibly low prices and really open up what can be done with robots. Have you thought about putting a Kinect on Pi's head in place of the cam/sonar setup?

As for ROS, I am working my own way through the tutorials and find them to be of great help. I have a decent grasp of it conceptually, but the challenge can be in piecing together all the components one may need to achieve some goal. I think at some point they need to add a layer of abstraction on top of ROS so the underlying package complexity can be obscured from the robot application designer. So essentially one could create Rules and could pick and choose the method (i.e. package) of their choosing as part of their rules. They really should be able to do this.

If you think about it, there is a fairly fixed set of things one might want to do with a robot at this point: navigate between locations, recognize objects, estimate their pose, do arm motion planning, etc. They also need to add in some voice synthesis and recognition, if it isn't there yet, so rules around some simple voice commands could be created as well.

Lastly, I think there needs to be some meeting in the middle for these things to be practical. What I mean is that some environmental alterations will be needed for robots to work. For example, instead of trying to understand every possible glass in the world and how to estimate its pose and how to grasp it, you have some predictable robot plates/cups/glasses/utensils that are easy for robots to work with, and that is what people would have to use. Also, if you want robots to be getting things out of cabinets and refrigerators, then we can't just be throwing things in there in a disorderly fashion. We need special racks for certain things that make retrieving them easier for the robot.

Sorry for the brain dump...but these are my thoughts on robotics these days. I really appreciate what you are trying to do. FYI... I have also done some decent work with NNs, including creating a method for evolving the weights and structure of fully recurrent NNs with a genetic algorithm. fun fun :)

Best,
Kevin

lnxfergy
01-31-2011, 09:06 AM
Kevin, I thought I would answer a few of these:


* I was wondering what laser you have on your base. You mention a laser in your presentation but I don't see it listed on your site in the hardware section unless I missed it.
It is a Hokuyo URG-04LX-UG01 (you can buy them from acroname.com) -- it's a $1200 purchase though! I'd actually say you might look at the Neato XV-11 -- I have both a URG and a Neato -- and honestly, while the Neato has a few fewer points in each scan, it's 1/3 the cost and actually has better quality scans in home environments from what I've seen.


* My personal opinion is that a laser is still essential at the base. The Kinect has limitations in bright rooms and is also more nearsighted. So I think a laser at the base for SLAM and a Kinect at the head for vision and 3D point clouds for object recognition and grasping may be the way to go. Ideally, Neato will market their laser scanner at incredibly low prices and really open up what can be done with robots.

I think this is true -- at the moment. But note that the Kinect actually has the same or better range than most of the sub-$5000 lasers. The FOV is the issue for SLAM -- but that's because people are trying to shoehorn a 3D camera into producing 2D laser scans so that it fits into a 2D mapping/localization paradigm. Given some further research, I think you'll see some new 2.5D/3D solutions arise that can use the Kinect (the big issue with any current 3D approach is computational complexity, especially for localization).
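
To make the shoehorning concrete, here's a rough Python sketch of how a depth image gets collapsed into a fake 2D scan. The row band, the intrinsics (fx, cx), and the range limits are all placeholders, and I'm glossing over sign/ordering conventions -- treat it as an illustration, not working driver code:

```python
import numpy as np
from sensor_msgs.msg import LaserScan

def depth_band_to_scan(depth_m, fx, cx, rows=(230, 250)):
    """Collapse a horizontal band of a depth image (meters, NaN = no
    return) into a fake 2D LaserScan -- the 'shoehorn' described above."""
    strip = depth_m[rows[0]:rows[1], :]
    z = np.nanmin(strip, axis=0)                  # nearest z per column
    cols = np.arange(depth_m.shape[1])
    angles = np.arctan2(cols - cx, fx)            # per-column bearing
    scan = LaserScan()
    scan.angle_min = float(angles[0])
    scan.angle_max = float(angles[-1])
    # arctan spacing is only approximately uniform; fine for a sketch
    scan.angle_increment = float((angles[-1] - angles[0]) / (len(cols) - 1))
    scan.range_min, scan.range_max = 0.45, 6.0    # rough Kinect window
    scan.ranges = (z / np.cos(angles)).tolist()   # z-depth -> ray range
    return scan
```

You can see the tradeoff right there: you throw away everything above and below one band of rows, and you're stuck with the camera's ~57 degree FOV instead of a laser's 180+.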

I wouldn't hold your breath for a standalone Neato laser -- it doesn't seem to be on their business plan at all from the different presentations by management.



As for ROS, I am working my own way through the tutorials and find them to be of great help. I have a decent grasp of it conceptually, but the challenge can be in piecing together all the components one may need to achieve some goal. I think at some point they need to add a layer of abstraction on top of ROS so the underlying package complexity can be obscured from the robot application designer. So essentially one could create Rules and could pick and choose the method (i.e. package) of their choosing as part of their rules. They really should be able to do this.


Take a look at SMACH -- it sorta moves in that direction (assuming that you already have the underlying components you need).
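
For a flavor of it, here's a minimal sketch -- the states and outcomes are made up, and the real work (calling move_base, the grasp pipeline, etc.) would go inside execute():

```python
#!/usr/bin/env python
import rospy
import smach

class Navigate(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['arrived', 'failed'])
    def execute(self, userdata):
        # call move_base here
        return 'arrived'

class Grasp(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['grasped', 'failed'])
    def execute(self, userdata):
        # call the manipulation pipeline here
        return 'grasped'

if __name__ == '__main__':
    rospy.init_node('fetch_demo')
    sm = smach.StateMachine(outcomes=['done', 'aborted'])
    with sm:
        smach.StateMachine.add('NAVIGATE', Navigate(),
            transitions={'arrived': 'GRASP', 'failed': 'aborted'})
        smach.StateMachine.add('GRASP', Grasp(),
            transitions={'grasped': 'done', 'failed': 'aborted'})
    sm.execute()
```

So it's close to your "rules" idea in spirit: you pick which package backs each state, and the state machine is the application layer on top.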



Lastly, I think there needs to be some meeting in the middle for these things to be practical. What I mean is that some environmental alterations will be needed for robots to work.


Interestingly, this is already kind of happening. People are "robot-proofing" their homes for the Roomba/Neato robotic vacuums.

-Fergs

Pi Robot
01-31-2011, 09:32 AM
Hi Kevin,

Thanks for the arm navigation link. I did start playing with that a few months ago but got side tracked with a new laser scanner and then the Kinect. So many new toys, so little time!

Though I might *appear* to be fairly far along with ROS etc., in fact I've only been using it for less than six months, starting with the beginner tutorials on ros.org. So my guess is you'll find progress quite rapid, and there is a lot of great help on this forum and the ros-users mailing list.

Like Fergs said, I have the Hokuyo URG-04LX-UG01 laser (thanks for pointing out its absence on my hardware list--it's on there now). But you can get a long way into learning the navigation stack using the ArbotiX PML scanning IR sensor as well. (See part way down http://www.pirobot.org/blog/0014/ and a whole discussion on this forum.)

Regarding putting the Kinect on Pi's head: I also happen to have a Videre STOC stereo camera sitting in a box and waiting to get used. It has a much smaller form factor, so it might be a better fit for Pi's anatomy than the Kinect. Also, I understand the Kinect has fairly high power requirements. I saw one of the PrimeSense developer devices at the talk and it is much smaller and USB powered (I believe).

As for placing some markers in the environment, yeah, I was playing with fiducials and RoboRealm some time ago though I didn't get very far before I got distracted with learning ROS. The fun thing about fiducials on your walls is that they just look like little pieces of computer art.

--patrick

lnxfergy
01-31-2011, 09:43 AM
Regarding putting the Kinect on Pi's head: I also happen to have a Videre STOC stereo camera sitting in a box and waiting to get used. It has a much smaller form factor, so it might be a better fit for Pi's anatomy than the Kinect. Also, I understand the Kinect has fairly high power requirements. I saw one of the PrimeSense developer devices at the talk and it is much smaller and USB powered (I believe).

I think the biggest improvement is that you won't have the dead spot that the Kinect exhibits (no depth information for objects closer than 18"). However, the Videre probably won't give as complete a depth image (unless you have very cluttered environments). I also haven't pulled my Videre STOC out of the bag yet -- and I'm not sure when I will, given the Kinect.


As for placing some markers in the environment, yeah, I was playing with fiducials and RoboRealm some time ago though I didn't get very far before I got distracted with learning ROS. The fun thing about fiducials on your walls is that they just look like little pieces of computer art.

You might note the "ar_kinect" package that I created for the ROS 3D contest -- it does some fun stuff with fiducials and uncalibrated RGB cams.

-Fergs

Pi Robot
01-31-2011, 12:12 PM
Yeah, good point Fergs--and that was the primary reason I picked a short-range Videre camera, since I want the stereo to work well within arm's reach for manipulating 3D objects.

And I totally meant to mention Fergs' ar_kinect package. You can see his contest entry here (http://www.ros.org/wiki/openni/Contests/ROS%203D/Improved%20AR%20Markers%20for%20Topological%20Navigation).

--patrick

maitri982
01-31-2011, 12:32 PM
Kevin, I thought I would answer a few of these:


It is a Hokuyo URG-04LX-UG01 (you can buy them from acroname.com) -- it's a $1200 purchase though! I'd actually say you might look at the Neato XV-11 -- I have both a URG and a Neato -- and honestly, while the Neato has a few fewer points in each scan, it's 1/3 the cost and actually has better quality scans in home environments from what I've seen.


-Fergs

I have been reading about the Neato, but I am not too keen on buying and ripping apart the Neato and then creating a board to interface with the LIDAR...and I wouldn't want to use the Neato itself as a base for a robot. But whether it's Neato or someone else, I think a very low-cost laser would be a great advance for robotics.

I'll check out SMACH. I also know of these guys doing some really interesting work:

http://ias.cs.tum.edu/research-areas/cram/cram

Most here have likely seen this, but for those who haven't, here is a great ROS resource with video-based training:

http://www.ros.org/wiki/Events/CoTeSys-ROS-School

Best,
Kevin

maitri982
01-31-2011, 12:39 PM
Hi Kevin,

As for placing some markers in the environment, yeah, I was playing with fiducials and RoboRealm some time ago though I didn't get very far before I got distracted with learning ROS. The fun thing about fiducials on your walls is that they just look like little pieces of computer art.

--patrick

I tend to think, perhaps incorrectly, of the fiducials mostly for navigation which is fine, especially for finding wall outlets. But it seems to me that navigation is solvable without them. Perhaps they would also be useful for finding door handles on cupboards, the fridge, etc.

I am thinking more of standard colors and shapes for everyday objects to make it very easy for the robot to identify object classes that it cares about, to compute their pose with vision and depth data, and to compute the necessary grasp.
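
To illustrate what I mean: if everyday objects had standardized colors, detecting them could be as trivial as the OpenCV snippet below (the HSV range for the "standard robot cup" is entirely made up, of course):

```python
import cv2
import numpy as np

img = cv2.imread('scene.png')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# hypothetical calibrated HSV range for the standard 'robot cup' blue
mask = cv2.inRange(hsv, np.array([100, 120, 80]), np.array([130, 255, 255]))
m = cv2.moments(mask, True)              # treat the mask as binary
if m['m00'] > 0:
    cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
    print('cup centroid at pixel (%.0f, %.0f)' % (cx, cy))
```

Compare that to the general problem of recognizing any arbitrary cup under any lighting -- that's the kind of simplification I mean by meeting the robot halfway.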

RFID is another option that could make finding various things a little easier.

I wish I had the time to do all that I want...or, better yet, to start another company and form a team to try to tackle some of these problems. I think we are getting close to the point where some interesting things become possible due to a convergence of various technologies.

Best,
Kevin

Pi Robot
01-31-2011, 06:49 PM
Hi Kevin,

I missed the part in your first message about your experience with neural networks. That would be another way to approach the varying appearance of glasses, plates, etc. You could start by teaching the robot what the objects in your house look like, either by showing and naming them, by using an online database, or even by using Mechanical Turk.

Also, regarding the fiducials again: I think they come in handy for object pose estimation as well as navigation. It's not something I have tried, but I believe the PR2 uses one on its recharging plug to help the vision system position the plug into a wall socket.
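
If you want to play with this, here's roughly what reading marker poses looks like in Python. I'm going from memory on the ar_pose topic and message names (from the ar_tools stack), so check your install before trusting any of it:

```python
import rospy
from ar_pose.msg import ARMarkers  # from the ar_tools stack, as I recall

def callback(msg):
    # each marker carries an id and a full 6-DOF pose estimate
    for marker in msg.markers:
        p = marker.pose.pose.position
        rospy.loginfo("marker %d at (%.2f, %.2f, %.2f)",
                      marker.id, p.x, p.y, p.z)

rospy.init_node('marker_listener')
rospy.Subscriber('ar_pose_marker', ARMarkers, callback)
rospy.spin()
```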

--patrick

lnxfergy
02-01-2011, 03:05 PM
Hello ROS Fans,

Has anyone succeeded in getting the ROS arm navigation stack and/or object manipulator stack running on their own robot arm? I've got David Lu's arm kinematics package running on Pi Robot's 5-DOF arm and now I'd like to take the plunge into the navigation and manipulation stuff. Fergs, do you have these running on your Armadillo 2WD? I'm doing my ROS coding in Python but I can read C++ if I have to.

--patrick

Ok, I never actually responded here because I hadn't gotten that far -- and then the semester got really busy. I'm now back at play.

On the Armadillo arm (4DOF+gripper), I had custom C++ IK based on hacked-up NUKE exports -- this allowed me to specify only the position (and a desired angle of attack). At the time, I didn't have any way to do collision avoidance due to limited sensing.

I'm now working on a new robot arm (5DOF+gripper), and tried out the arm_kinematics package, as I don't want to write more IK. Patrick, do you have any tricks for dealing with the smallness of the IK solution space? (Specifically, with only 5DOF, we can't get full control in 6-space.)

-Fergs

maitri982
02-01-2011, 09:52 PM
It appears that cTurtle does have some good stuff for object grasping and manipulation here:

http://www.ros.org/wiki/object_manipulation

This is supposedly robot-independent...
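
From the tutorials, it looks like you drive it through actionlib, something like the skeleton below. I haven't run this -- the action name and goal fields are just what I gathered from the wiki, so treat it as a sketch:

```python
import rospy
import actionlib
from object_manipulation_msgs.msg import PickupAction, PickupGoal

rospy.init_node('pickup_test')
client = actionlib.SimpleActionClient(
    '/object_manipulator/object_manipulator_pickup', PickupAction)
client.wait_for_server()

goal = PickupGoal()
goal.arm_name = 'right_arm'   # whatever your arm group is called
# goal.target would come from something like tabletop_object_detector
client.send_goal(goal)
client.wait_for_result()
rospy.loginfo(str(client.get_result()))
```

The catch, as always, is that all the pieces feeding that goal (detection, collision map, grasp planner) have to be running and configured for your robot.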

Kevin

Pi Robot
02-02-2011, 08:25 AM
I'm now working on a new robot arm (5DOF+gripper), and tried out the arm_kinematics package, as I don't want to write more IK. Patrick, do you have any tricks for dealing with the smallness of the IK solution space? (Specifically, with only 5DOF, we can't get full control in 6-space.)

-Fergs

It's been a couple of months since I looked at this, but yeah, I remember having the same issue. To check that things were working, I ran FK on a posed arm configuration, then guessed at a set of perturbations of the joint angles that I figured would still be in the solution space. I think I got this to work a few times. But to the more general point, my understanding of David Lu's IK is that you have to specify the orientation as well as the position of the end effector. For many cases, I'd just like to specify the position and feed in "don't care" values for the orientation. I vaguely remember a message from Antons Rebguns on the ros-users list saying that the smart_arm_kinematics package uses OpenRAVE and allows for IK solutions when specifying just the position of the end effector.
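
Fergs, in the absence of a true position-only solver, one hack I've considered is just sampling orientations at the target position until get_ik succeeds. A rough sketch below, assuming the Diamondback-era kinematics_msgs service interface and a made-up gripper link name -- I haven't battle-tested this:

```python
import random
from math import pi

import rospy
import tf.transformations as tft
from kinematics_msgs.srv import GetPositionIK, GetPositionIKRequest

def position_only_ik(xyz, joint_names, seed, frame='base_link', tries=50):
    """Try random orientations at a fixed position until the 5-DOF
    solver finds one it can reach. Returns joint positions or None."""
    rospy.wait_for_service('get_ik')
    get_ik = rospy.ServiceProxy('get_ik', GetPositionIK)
    for _ in range(tries):
        req = GetPositionIKRequest()
        req.timeout = rospy.Duration(0.1)
        req.ik_request.ik_link_name = 'gripper_link'   # hypothetical name
        req.ik_request.pose_stamped.header.frame_id = frame
        pose = req.ik_request.pose_stamped.pose
        pose.position.x, pose.position.y, pose.position.z = xyz
        # random "don't care" orientation
        q = tft.quaternion_from_euler(random.uniform(-pi, pi),
                                      random.uniform(-pi / 2, pi / 2),
                                      random.uniform(-pi, pi))
        (pose.orientation.x, pose.orientation.y,
         pose.orientation.z, pose.orientation.w) = q
        req.ik_request.ik_seed_state.joint_state.name = joint_names
        req.ik_request.ik_seed_state.joint_state.position = seed
        resp = get_ik(req)
        if resp.error_code.val == resp.error_code.SUCCESS:
            return resp.solution.joint_state.position
    return None
```

Brute force, obviously -- something like the OpenRAVE approach Antons mentioned, which solves for position directly, would be much cleaner.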

--patrick