[Project] Grinder skating hexapod robot



psyclops
04-29-2013, 12:38 PM
Hello folks! I am new to the forums and I would like to introduce my latest progeny, a new skating hexapod called Grinder. The robot has 20 DOF, using 12x MX-64T and 8x MX-28T servos, and it just won the Gold Medal in the Walker Challenge and the Silver Medal in Best of Show at RoboGames 2013.

http://www.gotrobots.com/images/grinder_desert.jpg

The legs use a locking linkage mechanism that lets the robot stand without the motors being energized: when the servo horn is inline with the driving linkage, no force is transmitted back to the motors. In addition, passive inline-skate wheels are mounted on the ends of the legs; they are angled up when walking but engage when the legs are spread, allowing the robot to skate at about 6 mph.
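
To make the locking behaviour concrete, here is a minimal sketch of the torque reflected back to a servo through a horn-plus-push-link pair (a simplified planar model, not Grinder's actual geometry; the horn length and link force are placeholder numbers). At the toggle position, where the horn is inline with the link, the moment arm vanishes and the leg load back-drives no torque:

from math import radians, sin

# Simplified toggle-lock model: torque reflected to the servo is the link
# force times the horn's moment arm, which shrinks to zero as the horn
# comes inline with the push link.
HORN_LEN_M = 0.025        # assumed servo horn length [m]
LINK_FORCE_N = 80.0       # assumed axial force in the push link [N]

for deg in (90, 45, 10, 2, 0):
    theta = radians(deg)                                  # horn-to-link angle
    back_torque = HORN_LEN_M * LINK_FORCE_N * sin(theta)  # N*m at the servo
    print(f"{deg:3d} deg -> {back_torque:.3f} N*m reflected to the servo")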

http://www.gotrobots.com/grinder/images/Grinder_Standing_800.jpg

You can view Grinder's page at http://www.gotrobots.com/grinder/ - not much up yet but I'll be adding info as I go.

Video of the robot skating: http://www.youtube.com/watch?v=PklfvlpnBwk

Video of winning the Walker Challenge: http://www.youtube.com/watch?v=opmPjHO_UP4

Video of the ripple gait: http://www.youtube.com/watch?v=lLe8_BYZ3II

The current software was rushed for RoboGames and allowed for rudimentary gaits and skating. I am currently in the process of adding IK into the software to allow for more elegant and smooth motion. I also have a sensor array to build and mount which, in conjunction with the stereo machine vision, should allow the robot to localize itself autonomously.

KevinO
04-29-2013, 01:03 PM
I stumbled upon your videos last week and posted them so Xevel could see them. It turned out he was standing next to you at RoboGames. :) Nice work.

jwatte
04-29-2013, 01:10 PM
I was really impressed by this guy at RoboGames. Very big!


I also have a sensor array to build and mount which, in conjunction with the stereo machine vision, should allow the robot to localize itself autonomously.

Do you have a particular algorithm/implementation/library in mind already? I've been looking at this, but am a little bit disappointed in the state of stereo software. I'm actually considering just using mono with image area classification for localization instead; it might work just as well and be more robust.

KevinO
04-29-2013, 01:18 PM
I've been working with the PrimeSense camera, and although it's not stereo, I've been able to identify the depth of objects, though still only one pixel at a time. If an obstacle isn't at one of the pixels I sampled, the robot doesn't see it, so you could say that's a flaw. :) Here is a pic from my gallery that shows it on the hex. Still very much a work in progress. Also worth noting: it's a bit slow running on the Raspberry Pi, especially if I'm using the color camera and microphones at the same time.

http://forums.trossenrobotics.com/gallery/showimage.php?i=5173&catid=newimages
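
The idea boils down to something like this sketch (not my actual code: the depth frame below is just a NumPy array standing in for a real PrimeSense frame, the sample points and threshold are made up, and the OpenNI frame grabbing is left out):

import numpy as np

# Sparse depth sampling: check a handful of pixels in the depth frame and
# flag anything closer than a threshold.
depth_mm = np.full((480, 640), 3000, dtype=np.uint16)   # fake 640x480 frame [mm]
depth_mm[200:280, 290:370] = 450                         # pretend obstacle ~45 cm away

SAMPLE_POINTS = [(240, col) for col in range(40, 640, 80)]  # one row of samples
OBSTACLE_MM = 600                                           # assumed threshold

for row, col in SAMPLE_POINTS:
    d = int(depth_mm[row, col])
    if 0 < d < OBSTACLE_MM:          # 0 usually means "no reading" on these sensors
        print(f"obstacle at pixel ({row},{col}): {d} mm")
# The flaw mentioned above: an obstacle that falls between the sample
# points is never seen, because only these pixels get inspected.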

Xevel
04-29-2013, 01:37 PM
Yeah, Hello there :)
Glad you finally found your way here, welcome! :)
I can't wait to see what this one is going to look like with smooth and subtle motions!

psyclops
04-29-2013, 02:16 PM
Thanks!

psyclops
04-29-2013, 02:19 PM
Hi Xevel, yes, I am just getting started. I have a very simple IK demo in place, but I'm looking forward to getting some smooth gaits down. I'm pretty sure I can get the skating speed up pretty high with an optimized IK algorithm... videos to follow.

psyclops
04-29-2013, 02:21 PM
I don't really have one in mind at this stage; I did some very basic machine vision work in my AI course, and I have two separate friends with working code (one in C, one in Perl) to look at. Stereo vision is a bit down my to-do list, behind full IK integration and basic sensor code. What are you coding in?

psyclops
04-29-2013, 02:30 PM
I've been working with the PrimeSense camera, and although it's not stereo, I've been able to identify the depth of objects, though still only one pixel at a time. If an obstacle isn't at one of the pixels I sampled, the robot doesn't see it, so you could say that's a flaw. :) Here is a pic from my gallery that shows it on the hex. Still very much a work in progress. Also worth noting: it's a bit slow running on the Raspberry Pi, especially if I'm using the color camera and microphones at the same time.

http://forums.trossenrobotics.com/gallery/showimage.php?i=5173&catid=newimages

Does the PrimeSense camera have any libraries you are working with? What language? It looks pretty interesting; I had considered putting a Kinect on the robot, but it's a bit big, so this looks like a better bet. I might put a Neato LDS on it too (which would be cool, as I designed the mechanicals and optics on it).

hwan we
04-29-2013, 02:32 PM
Great robot design.
Awesome! ~~~:robotsurprised:

KevinO
04-29-2013, 02:55 PM
Does the PrimeSense camera have any libraries you are working with? What language? It looks pretty interesting; I had considered putting a Kinect on the robot, but it's a bit big, so this looks like a better bet. I might put a Neato LDS on it too (which would be cool, as I designed the mechanicals and optics on it).

I'm playing around with the OpenNI SDK and some open-source middleware you can download. I'm still very early in my development. I'm currently finishing up speech integration, using Kurt's code base as a guide.

In regard to IK, have you looked at Kurt's Phoenix code port? It's C/C++ and has all the IK options we have all come to know and love. :P

jwatte
04-29-2013, 04:43 PM
I'm developing in C/C++. I looked at OpenCV but it seemed too academic (in the sense of "only just barely working to pass the semester project presentation") to actually use for real.

KevinO
04-29-2013, 04:54 PM
Did you look at OpenNI? I didn't mention OpenCV.

jwatte
04-29-2013, 05:34 PM
Nope, I haven't yet looked at OpenNI. I was just reporting what I had looked at :-)

That being said, I looked at the reference, and it seems to only just give me raw device access. I'm already doing that with video4linux. My concern is more with high-level things, like constructing a scene representation from two correlated images, classifying image fields, shape recognition, etc.
Does OpenNI support plug-in "software sensors" that can provide this kind of functionality? I didn't see anything on the site.

KevinO
04-29-2013, 05:37 PM
Ah my bad then. :)

timmy_toner
04-29-2013, 07:08 PM
I like the 5-bar design! 4- and 5-bar mechanisms are crazy useful, and I'm surprised I haven't seen more of them in the robotics community.

OpenNI does support plug-in "middleware" libraries like NITE2, but most of them involve skeletal tracking or gesture recognition.

Th232
04-29-2013, 07:44 PM
Very nice design, especially the linkages. What's the total weight?

tician
04-29-2013, 08:12 PM
Awesome robot.

OpenNI is mostly for Kinect/Xtion/PrimeSense devices, but the hope was that other device manufacturers would produce devices and drivers to work with its 'standard' software interface (apparently, the folks at SoftKinetic did not want their TOF camera to play nice with OpenNI). The basic driver/low-level stuff provides a standard method to retrieve depth and registered color images. The NITE middleware processes the depth and color images to provide skeleton/hand tracking and some gesture recognition, and outputs that through the high-level OpenNI interfaces/functions. There is a completely open-source driver for the Kinect that does mostly the same as the low-level OpenNI drivers, but the rest is pretty much roll-your-own (the most basic approach would be to segment objects based on depth data and trim the registered color image to the same area/blob). There was also an OpenNI wrapper for the Kinect4Windows driver, which allows the nicer K4W drivers without the Windows Kinect SDK, but I have not kept up to date on its progress.
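
A rough standalone sketch of that most-basic approach (synthetic arrays standing in for a registered depth/color pair from OpenNI; the depth band is an assumed number):

import numpy as np

# Segment on depth, then trim the registered color image to the same blob.
depth_mm = np.full((480, 640), 2500, dtype=np.uint16)
depth_mm[150:330, 220:420] = 800                     # pretend object at ~0.8 m
color = np.zeros((480, 640, 3), dtype=np.uint8)      # registered RGB frame

NEAR_MM, FAR_MM = 500, 1200                          # assumed depth band of interest
mask = (depth_mm > NEAR_MM) & (depth_mm < FAR_MM)

rows, cols = np.where(mask)
if rows.size:
    r0, r1, c0, c1 = rows.min(), rows.max(), cols.min(), cols.max()
    crop = color[r0:r1 + 1, c0:c1 + 1]               # color patch for the blob
    print(f"blob bounding box: rows {r0}-{r1}, cols {c0}-{c1}, crop shape {crop.shape}")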

OpenCV is still a bit "not great" at stereo depth map generation, but passive stereo matching is pretty difficult even with really high-quality/high-resolution cameras. The 'full-package' PR2 uses an IR projector to help create 'texture' on plain/flat surfaces (as a form of structured lighting) to improve block matching for its two stereo camera pairs (near and far). There was a pretty cool paper from Willow Garage on their texture projector and how it greatly improved their stereo depth maps. The cheaper version just uses a Kinect or other PrimeSense device (lots of support in ROS, easy to use, readily available, and very inexpensive).
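
For reference, the OpenCV block-matching path looks roughly like this (random noise standing in for a calibrated, rectified left/right pair; the focal length and baseline are made-up numbers). On texture-less surfaces most pixels simply fail to match, which is exactly what the texture projector is meant to fix:

import cv2
import numpy as np

# Stereo block matching on a (fake) rectified grayscale pair.
left = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
right = np.random.randint(0, 255, (480, 640), dtype=np.uint8)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # BM output is fixed-point x16

# With focal length f (pixels) and baseline B (metres), depth is Z = f*B/d.
FOCAL_PX, BASELINE_M = 580.0, 0.06        # assumed calibration values
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
print("pixels with a valid disparity:", int(valid.sum()))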

I really should be working on my final class project instead of typing this, but I just cannot seem to stop myself. "Environmental Text Mixture Monte Carlo Localization" - yay... ಠ_ಠ

jwatte
04-29-2013, 09:01 PM
Does PrimeSense work outdoors now? I'm especially interested in identifying/localizing lawns, concrete paths, trees, picnic tables, and people...

tician
04-29-2013, 09:22 PM
Does PrimeSense work outdoors now? I'm especially interested in identifying/localizing lawns, concrete paths, trees, picnic tables, and people...
Since it uses a low-ish power IR projector, not really... It can work in the dark, and in the shade up to a point, but direct sunlight and other IR light sources tend to blind the depth sensor. The PrimeSense sensor is standard structured lighting: it measures the distortion of a projected pattern with a single camera to estimate depth. Without the pattern, it is nothing but a really expensive webcam. You could probably try using the B&W IR image and the color image for basic stereo depth map generation, but I'm not sure how well it would work (IIRC, you can get either the depth map or the B&W IR image, but not both; if the usual depth is kaput, why not try stereo with IR?).

CasperH
04-30-2013, 07:54 AM
Cool! The design made me think of Dark Eldar warships and spacesuits from Warhammer 40k, especially the color purple. :cool:

Years ago I found this skating robot with 4 legs that could also walk. It was my understanding from Xevel's post that you had already heard of this one? http://www-robot.mes.titech.ac.jp/robot/walking/rollerwalker/rollerwalker_e.html

bloftin
04-30-2013, 02:31 PM
I saw Grinder skate and walk at RoboGames this year, and it is one of the coolest things I saw there. Great stuff. Thank you for sharing.

CasperH
05-07-2013, 11:52 AM
I like the 5-bar design! 4- and 5-bar mechanisms are crazy useful, and I'm surprised I haven't seen more of them in the robotics community.

So do I; it is a nifty mechanical way of taking the load off the servos. I wonder if it can be used in a humanoid robot design to take some load off the knee joint, or to get a zero-energy balance state while standing upright. Psyclops, do you have a good theoretical/practical reference for designing these? I can do the math, but some insight into the design possibilities would be nice.

psyclops
05-25-2013, 01:17 PM
Yes, the linkage design means it takes no energy to stand still, and the robot does not collapse when it loses power. The total weight with two 5000 mAh 4S LiPos is about 11 kg, which is only slightly over my design goal of 10 kg. I based the design on my earlier robot Ziggy ( http://www.gotrobots.com/ziggy/ ) and added the third degree of freedom to allow me to move the leg outwards, which is what allows skating. I designed the mechanism by mating the linkages together and adjusting their lengths in CAD, which let me tweak the geometry while moving the linkages around to optimize the envelope and positions. The math came later... although I did put force equations into the CAD so that I could check that I wouldn't be overloading the motors.
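
Something like the following back-of-the-envelope check captures the idea (this is not the CAD force model: the moment arm and the number of legs in stance are assumptions, it only applies to a joint that is away from its locked toggle position, and only the ~11 kg mass above is real):

# Static torque on a load-bearing leg joint versus the servo rating.
MASS_KG = 11.0                     # total robot mass (from above)
G = 9.81
LEGS_IN_STANCE = 4                 # assumed worst case during a ripple gait
MOMENT_ARM_M = 0.12                # assumed horizontal distance, joint to foot

load_per_leg_n = MASS_KG * G / LEGS_IN_STANCE
joint_torque_nm = load_per_leg_n * MOMENT_ARM_M

MX64_STALL_NM = 6.0                # nominal MX-64 stall torque at 12 V
print(f"static torque per supporting joint: {joint_torque_nm:.2f} N*m "
      f"({100 * joint_torque_nm / MX64_STALL_NM:.0f}% of MX-64 stall)")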

I have subsequently solved the IK equations and I'm in the process of integrating it all together with a Visual Python simulator... I took about a month off after RoboGames (to rest!) but I'm back on the project now :)
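
For anyone following along, the textbook 3-DOF coxa/femur/tibia IK for a hexapod leg looks like the sketch below. This is not Grinder's actual kinematics (the linkage-driven legs need an extra mapping from joint angles to servo-horn angles) and the link lengths are placeholders, but it is the same law-of-cosines approach, with a forward-kinematics check at the end:

from math import acos, atan2, cos, degrees, hypot, pi, sin

L_COXA, L_FEMUR, L_TIBIA = 0.05, 0.08, 0.12   # assumed link lengths [m]

def leg_ik(x, y, z):
    """Foot target (x, y, z) in the leg frame, origin at the coxa axis, z negative below."""
    coxa = atan2(y, x)                         # rotate the leg plane toward the target
    u = hypot(x, y) - L_COXA                   # horizontal reach past the coxa link
    d = hypot(u, z)                            # femur joint to foot distance
    # Law of cosines on the femur/tibia triangle.
    knee = acos((L_FEMUR**2 + L_TIBIA**2 - d**2) / (2 * L_FEMUR * L_TIBIA))
    femur = atan2(z, u) + acos((L_FEMUR**2 + d**2 - L_TIBIA**2) / (2 * L_FEMUR * d))
    return coxa, femur, knee                   # radians

def leg_fk(coxa, femur, knee):
    """Forward kinematics used to sanity-check leg_ik()."""
    u = L_COXA + L_FEMUR * cos(femur) + L_TIBIA * cos(femur + knee - pi)
    z = L_FEMUR * sin(femur) + L_TIBIA * sin(femur + knee - pi)
    return u * cos(coxa), u * sin(coxa), z

angles = leg_ik(0.15, 0.05, -0.10)
print([round(degrees(a), 1) for a in angles])     # coxa, femur, knee in degrees
print([round(v, 4) for v in leg_fk(*angles)])     # should reproduce (0.15, 0.05, -0.10)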