View Full Version : Let's Discuss Kinematics, Shall We?
tom_chang79
08-20-2009, 03:52 AM
I want to open up a whole new thread to talk about kinematics and its principles applied to hexapods (CrustCrawler, Phoenix, *H3R, etc.). This thread is aimed more at the mathematics than at modeling it in software. Although, if you want to express your point of view with software, that's OK as well...
Here's my first question to throw out there; perhaps those of you who are gurus in kinematics can help me understand it better:
Given a successive rotation about all three of the axes in three-space, how do you truly determine which of the 6 is the correct order to use?
For instance:
B = matrix for a successive rotation
Ay,beta = rotation matrix of angle beta about the y-axis
Ax,gamma = rotation matrix of angle gamma about the x-axis
Az,alpha = rotation matrix of angle alpha about the z-axis
So
B = Ay,beta * Ax,gamma * Az,alpha
Euler proved that twelve independent triple rotations can be had for any combination of alpha, beta, and gamma about any of the three axes... So essentially, there's a total of 12 different "B" matrices depending on the order of rotation...
Each of the twelve combinations yields a different matrix. How do we know which order to use for, say, a hexapod? Perhaps I just don't understand when to use which order...
:confused:
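A quick numeric check makes the ambiguity concrete: composing the same three axis rotations in two different orders produces two genuinely different "B" matrices. A minimal sketch in Python (the angle values are arbitrary test inputs, not anything from the thread):

```python
import math

def rot_x(g):
    c, s = math.cos(g), math.sin(g)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(b):
    c, s = math.cos(b), math.sin(b)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

a, b, g = 0.3, 0.5, 0.7   # arbitrary test angles (radians)

# With column vectors, the rightmost matrix is applied first:
# B1 applies Z first, then X, then Y; B2 applies Y first, then X, then Z.
B1 = matmul(rot_y(b), matmul(rot_x(g), rot_z(a)))
B2 = matmul(rot_z(a), matmul(rot_x(g), rot_y(b)))

# The two composite matrices differ, so the order genuinely matters:
different = any(abs(B1[i][j] - B2[i][j]) > 1e-9
                for i in range(3) for j in range(3))
print(different)  # True
```

Each ordering is a valid rotation; which one is "correct" is purely a convention you pick for your robot, which is the point the later replies make.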
darkback2
08-20-2009, 08:47 AM
Don't worry, this isn't going to be a very helpful response.
Maybe I just don't get it, but I don't look at kinematics the same way. (OK... not maybe.)
A couple of things. Let's take a three-degree-of-freedom leg. First of all, none of the servos have continuous rotation; for most, this limits us to a 180-degree swing at each joint. Also, the shoulder, or coxa, is fixed to the body. I generally see the coxa as my X, the height of the foot (or angle of the leg in relation to the ground) as Y, and the length of the leg (the distance between the shoulder and the foot) as Z.
Given that, sure, Z has to change as X changes, and also as Y changes... so then I try to figure out a relationship between a change in either X or Y coupled with the limitation of Z...
I guess what I'm getting at is that, for me, it works better if you start out at the same end of the leg. If you know either X or Z and where you need to get to, the rest seems to fall into place.
Then again, maybe I'm just exposing my clear and total lack of understanding.
DB
lnxfergy
08-20-2009, 01:18 PM
(Warning: I am NOT an expert in Kinematics)
Am I correct to assume that you are doing body forward kinematics? That is, you are rotating the body about 3 axes and need to find the translation of the point at which the leg attaches to the body? Might just be that I'm being dense, but I just can't see how this would apply to the leg kinematics, since the order of application of the rotations is set by the way your servos are connected to the body...
If you are doing body kinematics, then I think the "correct" order depends on how you want the body to move. Any of the 6 is "valid" in that it will give you a result, but of course those results may not correspond very closely to what you intended. If we assume fixed axes, then I would think the order would be roll, pitch, then rotation about the vertical axis last, as this causes the least distortion from what I would think was the expected result.
EDIT: I'm also assuming here that our fixed axes go through the center of the robot's body. The roll axis is horizontal and stays parallel to the ground, pointing in the direction of "forward" motion. The pan axis is vertical, perpendicular to the ground and the roll axis. The pitch axis is therefore found by the right-hand rule, and would be horizontal and perpendicular to the roll axis.
Fergs
tom_chang79
08-20-2009, 03:06 PM
Fergs, you hit the nail right on the head. Yes, I'm referring to a body (global) to local (legs) translation. I was experimenting with my rotation kinematic functions yesterday, racking my brain trying to figure out why my single rotation doesn't match my successive rotation when the successive rotation is set up with two of the three angles at "0" (no rotation) and the third angle set...
In theory,
Br = B * Gr
is equivalent to
Br = Ay,beta * Gr
Where B is your successive rotation matrix, Br is your local coordinate, Gr is your global coordinate, and Ay,beta is your rotation matrix of angle beta about the y-axis, for alpha = 0, gamma = 0, and any beta.
Am I totally off in my thought process here?
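The reduction above is easy to sanity-check numerically: building the successive matrix with two angles at zero must give exactly the single-axis result. A minimal sketch (the angle and test point are arbitrary):

```python
import math

def rot_x(g):
    c, s = math.cos(g), math.sin(g)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(b):
    c, s = math.cos(b), math.sin(b)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

beta = 0.5
Gr = [1.0, 2.0, 3.0]   # arbitrary global point

# Successive rotation with alpha = gamma = 0 ...
B = matmul(rot_y(beta), matmul(rot_x(0.0), rot_z(0.0)))
Br_successive = matvec(B, Gr)

# ... must equal the single rotation about y alone.
Br_single = matvec(rot_y(beta), Gr)

match = all(abs(p - q) < 1e-9 for p, q in zip(Br_successive, Br_single))
print(match)  # True
```

If a real implementation fails this check, the bug is in the code (sign conventions, transposed matrices), not in the math, which is what Fergs suggests below.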
darkback:
What you describe are the leg movements. I found out that you really don't need inverse kinematics for that, just over-glorified simple trig to solve for the leg movement. However, you will need kinematics if you want to yaw, pitch, and roll your body, and also to account for the "offset" legs at the four corners of a hexapod!
:D
EDIT:

Regarding programming in general, all hail ser_out, printf, and any other print-to-terminal functions for debugging!!!
darkback2
08-20-2009, 03:17 PM
I'm currently employing a much easier solution to this problem. I have my leg movements all set, sort of independent of the body movements... I know it isn't exactly as easy as I'm suggesting, but think of it this way: each servo setting has an offset input that is either positive or negative. So I can lower/contract the front legs and extend the back legs, or the side legs, or twist the body, independently of the actual leg movements... I'll try to get it fully up and running and post a video in the next couple of days.
DB
tom_chang79
08-20-2009, 03:22 PM
Can't wait to see if Che...
:)
lnxfergy
08-20-2009, 03:42 PM
Fergs, you hit the nail right on the head. Yes, I'm referring to a body (global) to local (legs) translation. I was experimenting with my rotation kinematic functions yesterday and just racking my brains out trying to figure out why my single rotation doesn't match my successive rotation, when the successive rotation is set where two of the three angles are "0" (no rotation), and the third angle is set...
....
Am I totally off in my thought process here?
I'd suggest posting your code (and the output trace), so we might see what's going on; that sounds like a bug to me, not the underlying math...
Fergs
lnxfergy
08-20-2009, 03:55 PM
I'm currently employing a much easier solution to this problem. So I have my leg movements all set, sort of independent of the body movements...So I know that isn't exactly as easy as I'm suggesting, but think of it this way. Each servo setting has an offset input that is either positive or negative. So I can lower/contract the front legs and extend the back legs, or side legs, or twist of the body, independently of the actual leg movements...I'll try to get it fully up and running and post a video in the next couple of days.
DB
DB 
The real point of IK is to be exact (smooth), and to allow positioning the end effector anywhere inside the (near) infinite range (obviously, due to resolution issues, it's not an entirely continuous and infinite range; it's much more discrete than that). Obviously, if you only need to put an effector in a few positions, poses work nicely. When you want to go beyond that, you're describing a parameterization of the base pose, where your offset is a parameter. There are a few problems with that, though:
First, a linear rotation about a single axis does not create linear movement at the end effector (it swings on an arc). This is why Issy was SO violent when walking at RG. When he moved only the horizontal thrust servo (commonly called the coxa servo), the end points of his legs swung through an arc and had to shift the entire weight of his body as they did so (while fighting the legs on the opposite side). His new IK code will walk much more smoothly, which means less wear and tear on the servos and, most importantly, less heat.
Second, if you do end up moving multiple servos in a leg to try to offset this arc issue, you end up being halfway to IK.
The actual IK required for posing a 3DOF leg is quite simple. On a leg such as those on Issy or the Phoenix, a single function (atan2) can give you the horizontal thrust (coxa) position from the X,Y offset of the end effector (I'm using the X and Y axes to define the floor). The remaining 2 servos then lie in a plane (parallel to the Z axis, passing through the origin and the end effector's X,Y point), and you can quickly solve them using the law of cosines (see the 2DOF tutorial here: http://www.learnaboutrobots.com/inverseKinematics.htm).
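A sketch of that 3DOF leg solution: atan2 for the coxa, then the law of cosines for the femur/tibia pair in the vertical plane. The link lengths and the "knee-up" configuration are illustrative assumptions, not Issy's or the Phoenix's actual geometry:

```python
import math

COXA, FEMUR, TIBIA = 1.0, 2.0, 3.0   # hypothetical link lengths

def clamp(v):
    # guard acos against tiny floating-point overshoot
    return max(-1.0, min(1.0, v))

def leg_ik(x, y, z):
    """Angles (radians) placing the foot at (x, y, z) measured from the
    coxa pivot; X/Y span the floor, Z is vertical."""
    coxa = math.atan2(y, x)              # horizontal thrust servo
    horiz = math.hypot(x, y) - COXA      # reach beyond the coxa link
    reach = math.hypot(horiz, z)         # femur pivot to foot distance
    # Law of cosines on the femur/tibia/reach triangle:
    knee = math.acos(clamp((FEMUR**2 + TIBIA**2 - reach**2) /
                           (2 * FEMUR * TIBIA)))       # interior knee angle
    # Femur = angle to the foot plus the interior angle at the hip:
    femur = math.atan2(z, horiz) + math.acos(clamp(
        (FEMUR**2 + reach**2 - TIBIA**2) / (2 * FEMUR * reach)))
    return coxa, femur, knee

# Example: foot 4 units straight out, 2 units below the coxa pivot
angles = leg_ik(4.0, 0.0, -2.0)
```

Mapping the returned radians to servo pulse widths (and checking joint limits) is left out; that part is servo-specific.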
Fergs
Zenta
08-20-2009, 05:31 PM
Hi,
Euler proved that twelve independent triple rotation can be had for any combination of alpha, beta, and gamma with any of the three axis... So essentially, there's a total of 12 different "B" matrices depending on the order of rotation...
Each of the twelve combination yields a different matrix. When do we know which order to use for say, hexapod? Perhaps I just don't understand when to use which order...
:confused:
When it comes to the order of rotation, you have to decide what order you want to use yourself. Each rotation is affected by the ones that follow, so you have to prioritize the order you want them in. At the moment I'm working on a PowerPoint presentation about hobby robotics; my boss asked me if I could hold a little course/show about my hobby. I'm not going very deep into the code, but I did mention the rotation part.
This is the page I made about rotation kinematics:
(Sorry for the bad picture quality)
http://forums.trossenrobotics.com/gallery/files/1/5/3/5/rotation_kinematics.jpg
I did a fast translation from Norwegian; maybe if some of you are interested I could translate the whole presentation and share it. It's pretty basic, though.
This is the matrix I used in Xan's code; pay attention to the Y and Z axes (they've switched places).
And this is the front page ;)
http://forums.trossenrobotics.com/gallery/files/1/5/3/5/hobby_robotics.jpg
Zenta
When talking about kinematics, the Denavit-Hartenberg conventions should probably be mentioned. For industrial robots they are the standard for modelling kinematic chains: every link/joint is described by 4 parameters, and there are rules for how to come up with them systematically.
There's a learning curve at the beginning, though, and it might be overkill for a chain with few joints.
lnxfergy
08-20-2009, 07:07 PM
When talking about kinematics Denavit Hartenberg conventions should probably be mentioned. In industry robots these are the standard for modelling kinematic chains. Every link/joint is described by 4 parameters and there are some rules as how to come up with them systematically.
There´s a learning curve at the beginning though and it might be overkill for a chain with few joints.
Yep, that'd be the standard industry or academic approach. You typically don't see it crop up much in hobby robotics, though, mainly because there aren't that many people doing kinematics, and most of what exists is limited to <= 3DOF.
Fergs
lnxfergy
08-20-2009, 07:27 PM
I also thought I would link in another paper that discusses the linear algebra involved (although, not DH convention) http://elvis.rowan.edu/~kay/papers/kinematics.pdf
Fergs
DresnerRobotics
08-20-2009, 07:40 PM
Thread has been stickied, good info/discussion so far guys. Carry on!
CogswellCogs
08-20-2009, 08:05 PM
Earlier in the year, I found this pretty informative PPT regarding DH parameters and robotic joints.
http://www.mcgill.ca/files/cden/MECH572lecture5.ppt
tom_chang79
08-20-2009, 10:35 PM
Lots of great information here, thanks guys. Keep it coming!
Zenta, you have nothing to apologize for, your picture is very clear... :D
Regarding the order of rotation, how are you handling yours? Do you stick with one order, or do you switch the order of rotation depending on what you are trying to accomplish?
Another thing I wanted to ask about is determining the quadrant in 2-space for the Phoenix's legs. You have the planes:
Y-X
Z-Y
Z-X
In each of these planes, the Phoenix's legs each map to a different "quadrant."
The reason I'm asking is that the compiler for my Atom Pro 28 has a limited range for the FCOS and FSIN functions.
As such, you can still solve for all six legs; you just have to flip the "signs" of the solution.
For instance, looking from a bird's-eye view, with the front of the Phoenix pointing "up" (+Y) and X pointing to the right, the legs are numbered:

      Y
      ^
     Front
    5     0
    4     1   -> X
    3     2
     Back
Servo group 0 would have positive FCOS and FSIN, whereas servo group 2 would have a positive FCOS but a negative (-) FSIN, and servo group 3 would have both FCOS and FSIN negative.
Did you have to deal with these sign issues as well, since the FCOS and FSIN functions have a limited range? I suppose translating the angles through a table would probably take care of this and give the full range of 2*Pi.
Right now, my "yaw" function works, and it works perfectly; my "yaw" is defined as the rotation about the "Z" axis (please note that my reference axes are alphabetized a bit differently than yours).
So for my translation/transform functions, I have four: one independent function for each axis, and a fourth one for a successive rotation.
So say my functions are:
GTOLZ - takes in angle alpha and does Br = Az,a * Gr
GTOLY - takes in angle beta and does Br = Ay,b * Gr
GTOLX - takes in angle gamma and does Br = Ax,g * Gr
GTOLZXY - takes in angles alpha, beta, and gamma, and does the rotation in the order Z, then X, then Y: Br = Ay,b * Ax,g * Az,a * Gr
Now suppose:
alpha = 0.5 (radians)
beta = 0.0
gamma = 0.0
Shouldn't:
GTOLZXY = GTOLZ
and if
alpha = 0.0
beta = 0.5
gamma = 0.0
Shouldn't:
GTOLZXY = GTOLY
???
So far, my GTOLZXY = GTOLZ when beta and gamma are fixed at 0 radians (no rotation about the Y and X axes), EXCEPT my local Y coordinate and local X coordinate come out with an inverted sign:
GTOLZ -> GTOLZXY
L(X, Y, Z) -> L(-X, -Y, Z)
Perhaps I'm using the wrong set??

I just want to recommend the book Theory of Applied Robotics by Jazar (you can find it on Amazon).
This book is VERY complete when it comes to kinematics for robotics, including an introductory chapter on Denavit-Hartenberg notation...
HOWEVER, I must warn you that the author constantly switches his use of alpha, beta, gamma, phi, psi, and theta with reference to which axis those angles rotate about...
I suspect this is my problem: I modeled some of the equations in there without realizing that Jazar pulled a switcharoo with the angle designations... :p
:D
tom_chang79
08-21-2009, 03:05 AM
I just discovered a small but useful note that I must've missed.
Apparently, the angles are not FREE to be assigned to any axis; the angles designate the order of the movement. In Jazar's book, the angles are expressed as:
Phi
Theta
Psi
The order of rotation is always Phi, Theta, and then Psi, so the 12 combinations express rotations in that order about any given axes...
I'm finally starting to get it...
This kinda dawned on me when I looked at Zenta's picture and wondered why his successive rotation matrix looks different from mine, even though we are doing something similar... Looking up the twelve different combos in the appendix, I started to wonder why Phi, Theta, and Psi were always expressed in the same order for the twelve combos...
It is by definition that the first move is always Phi, then Theta, then Psi.
:p
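The count of 12 also falls out of a simple enumeration: an Euler angle sequence is any triple of axes where the same axis never appears twice in a row (so "xyz" and "zxz" are valid, "xxy" is not). A quick sketch:

```python
from itertools import product

axes = "xyz"
# All triples of rotation axes with no immediately repeated axis:
sequences = [a + b + c for a, b, c in product(axes, repeat=3)
             if a != b and b != c]
print(len(sequences))  # 12
```

Six of these use three distinct axes (Tait-Bryan sequences such as "zxy") and six repeat the first axis at the end (proper Euler sequences such as "zxz"), which is where the 6-versus-12 confusion earlier in the thread comes from.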
lnxfergy
08-21-2009, 11:31 AM
It is by definition that first move is always Phi, then Theta, and Psi
Hmm, is this a standard convention, or just the author's?
Fergs
tom_chang79
08-21-2009, 05:18 PM
I believe it's the standard convention, but don't quote me on that... I figured it out because the 12 independent equations all depend on this consistent order of rotation through these angles.
The angles are what's in a fixed order; rotating about different axes in that order is what makes the equations differ from each other...

I finally got my rotation equations to work. Here is a dump from my ser_out command:


phi:0.0000000000
theta:0.0000000000
psi:1.0000000000



GTOLX

XX:1.0000000000
YY:0.5115365386
ZZ:1.5168486833

SGTOLZXY

XX:0.9999997615
YY:0.5115365982
ZZ:1.5168488025


phi:0.0000000000
theta:1.0000000000
psi:0.0000000000



GTOLY

XX:1.5921409130
YY:1.0000000000
ZZ:0.1660931706

SGTOLZXY

XX:1.5921409130
YY:1.0000000000
ZZ:0.1660931706


phi:1.0000000000
theta:0.0000000000
psi:0.0000000000



GTOLZ

XX:1.3817731142
YY:0.3011687397
ZZ:1.2500000000

SGTOLZXY

XX:1.3817729949
YY:0.3011686503
ZZ:1.2500000000
XX - local X-coordinate
YY - local Y-coordinate
ZZ - local Z-coordinate
As you can see, by giving one angle a value and fixing the other two angles at zero, each single transformation should equal the successive transformation. It doesn't come out exactly equal, but I believe that's because of the precision of the FCOS and FSIN functions in the Atom Pro IDE.

darkback2
08-21-2009, 06:38 PM
OK... I don't know how to link to this, but there is a lecture on this subject available through iTunes U. It is from Stanford University's Introduction to Robotics course, Lecture 2, by Oussama Khatib.
The lecture gets way over my head really quickly. I've been watching it repeatedly over the past few weeks hoping something will sink in.
DB
tom_chang79
08-21-2009, 07:43 PM
Is this lecture on any of the video sites like YouTube? I'd really be interested in seeing it. I've been looking to take a course (not for college credit or anything) in kinematics to get formally educated in it...
sthmck
08-21-2009, 09:58 PM
Yes, actually, it is on YouTube. I watched the course a few months ago. Pretty interesting stuff.
lnxfergy
08-23-2009, 09:27 PM
One more resource I just thought of: Steve LaValle's Planning Algorithms book is freely available on his website: http://planning.cs.uiuc.edu/
Hobbyists typically stop at 3DOF for their robots, since that's the highest level at which a closed-form solution is achievable (as far as I know). Once you pass 3DOF, you're looking at advanced algorithms, typically search-based approaches that guess joint values and solve the forward kinematics. Chapter 3 of the book discusses kinematic chains and transformations. Later chapters focus on methods for planning the motion of manipulators: both how to solve those higher-DOF chains and how to solve them with additional constraints such as obstacle avoidance.
EDIT: one algorithm covered in there that I think is very approachable is Rapidly-exploring Random Trees (RRT), chapter 5.
Fergs
tom_chang79
08-25-2009, 04:22 AM
Yeah, I've always pondered how to implement an IK engine for my 6DOF Bioloid. I've always wondered how the "industrial" guys like Honda and other Japanese researchers doing humanoids program their bots...
I can't see ASIMO being just a bunch of gait sequences; it looks too natural for that... unless Honda truly did derive their walking by empirical methods (like what we do in the hobby humanoid world)...
Does anyone out there have a biped with an IK engine feeding its walking algorithm?
lnxfergy
08-25-2009, 01:34 PM
Yeah, I've always pondered about how to implement an IK engine for my 6DOF Bioloid. I've always wondered how the "industrial" guys like Honda and other Japanese researches who are doing humanoids program their bot...
I can't see the ASIMO being just a bunch of sequence of gaiting, it looks too natural to be like that.. Unless Honda truly did derive their walking by just empirical methods (like what we do in hobby humanoid world)...
Does anyone out there have a biped have IK engine that's feeding their walking algorithm?
I'd imagine that most bots are using a series of many gaits (which could be thought of as pre-solved IK solutions), very good interpolation, plus some dynamic response from foot sensors and/or gyros/accelerometers. (I believe that pretty well defines how Giger works.)
The issue with doing continuous IK at higher DOF is that you run out of processing power. I'd imagine most of us would run out of memory before we ran out of speed (sampling-based methods like RRT are very memory intensive, as is any intelligent search...). 8-bit micros need not even try. An ARM with a decent amount of memory might be able to take a crack at it...
Fergs
tom_chang79
08-25-2009, 03:01 PM
Yeah, I hear ya. The matrices not only get complicated, but the processing power needed to do matrix math in high dimensions would be an insane amount of horsepower. Maybe an RF tether to a biped is not so bad after all? I wonder, given a nice chunk of 6DOF kinematic engine, how my Core i7 would crunch it? So far it eats any application I throw at it...
Back to the subject of kinematics: I'm starting to wonder whether more than rotation kinematics is needed for hexapods. I think orientation and movement kinematics need to be incorporated too. So far, my assumption has been that the origin of all six legs is the same as the global (body) origin of the hexapod, with four of the legs having an angle offset...
tom_chang79
08-26-2009, 02:20 AM
I guess I should clarify my question:
So far, up to this point, I can make my hexapod walk forward and backward at any angle. If I fix the angle to 90 degrees, then the hexapod starts to crab walk...
I've only used the principle of rotation kinematics:
Br = Q^-1 * Gr
Where:
Br is your local coordinate [x y z]
Gr is your global coordinate [X Y Z]
Q^-1 is the inverse of Q, the rotation matrix
By adding a rotation offset (60 degrees) to the four "corner" legs, I can solve for Br. This Br is then fed into the "IK engine", which calculates the PWM for each of the three servos (coxa, femur, tibia).
As for the other two legs, I simply rotate about the Z-axis (the axis that goes THROUGH the body from the ground to the sky) by the variable Phi. If Phi is set to 90 degrees, then it "crab walks".
I guess it comes back to this question: since the "Phi" variable has to do with the YAW of the hexapod and is accomplished by rotation kinematics, can the same simple rotation kinematics (successive rotation) be used to solve for a stationary roll and pitch?
What I mean by "stationary roll and pitch" is that the tip of each leg does not move, but the body "rolls" and "pitches" according to the other two angles, theta and psi, which rotate about the Y-axis and X-axis respectively.
Are you guys (and gals) who are using kinematics-based code only incorporating rotation kinematics to make this possible? I'm excluding the pulse calculation, since that part is just simple trig...
Or am I missing something here? Do I need to incorporate orientation kinematics and/or motion kinematics in order to do all the fancy roll, pitch, and yaw combinations that I see in the PowerPod program, Zenta's hexapod, and Winchy Matt's (Matt Denton's) hexapod?
lnxfergy
08-26-2009, 10:52 AM
Can you define what exactly you mean by "orientation kinematics" and "motion kinematics"?
What I've found (but I've only been playing with body kinematics for a few weeks) is:
I store the leg end effector position (measured in x/y/z from the coxa servo connection to the body) as well as the X and Y distance from the center of the body to the coxa connection.
For each iteration, I calculate for each leg (this is pretty similar to how Xan does his Phoenix code):
1. the gait offset on the leg for X/Y/Z;
2. how the 3 body rotations affect the leg end effector; specifically, how much they change the x/y/z coordinates of the endpoint;
3. the 3DOF calculations on the leg, using the original end effector position + gait offset + the result from the body rotation.
After all legs are calculated, check for valid positions and write it out to the servos.
Note: my X/Y/Z convention is more like a rover bot than most walkers. X is the axis from the center of the body protruding through the front of the bot; Y is the axis from the center of the bot out the right side.
You can see a video of what I've got working so far. This is mostly crude because Issy doesn't have enough freedom in each of his joints (and his femur:tibia ratio is terribly low), but I'll be changing his leg configuration this weekend:
http://www.youtube.com/watch?v=ZacVmNutZxM
Note: his walking really isn't quite IK yet; it's hitting "out of range" frequently on the servos, so his legs aren't getting where they're supposed to yet, but it's a heck of a lot better than he was at RoboGames.
Fergs
tom_chang79
08-26-2009, 03:23 PM
I guess I'll explain it more when I get home. I haven't grasped those chapters yet, so I'd be embarrassed to even try explaining what it is :p
As for the kinematics, I think the technique I'm using is very similar to Xan's, nearly identical, except that I took some shortcuts which are causing me problems.
For instance, I have a resolution factor which I use to decrement the global coordinates. I then increment it back (so that the global coordinates return to their original value).
During each increment and decrement, I make the end effector move to that position. So let's say:
Start: Gx=0 Gy=0
End: Gx=0 Gy=5
The end effector doesn't try to go from 0 to 5 immediately, since that doesn't work for walking (I tried ;))
So I make the resolution, say, 0.1:
Go to Gx=0, Gy=5
First move: Gy=0
Second move: Gy=0.1
Third move:Gy=0.2
.
.
.
Finally Gy=5
Then for the legs that need to come "back" (the ones in the air):
Gy=5
Gy=4.9
Gy=4.8
.
.
.
And so on. I think I need to modify the code like Xan does: keep each resolution step proportional to the total distance from the original location to the final location... That way, I don't need to fiddle with how much "stop and go" to put between each incremental/decremental step...
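The proportional-step idea above can be sketched in a few lines: instead of a fixed 0.1 increment, divide the whole move into a fixed number of ticks, so every move finishes in the same number of servo updates regardless of distance (function and tick count are illustrative, not Xan's actual code):

```python
def interpolate(start, end, ticks):
    """Yield `ticks` evenly spaced positions from just past start up to
    and including end; step size is proportional to the total distance."""
    for i in range(1, ticks + 1):
        yield start + (end - start) * i / ticks

# A 0 -> 5 move and a 0 -> 1 move both take the same 10 updates;
# only the per-tick step size differs.
path = list(interpolate(0.0, 5.0, 10))
print(path[0], path[-1])  # 0.5 5.0
```

With this, the "return" stroke for the legs in the air is just `interpolate(5.0, 0.0, ticks)`, and no per-move tuning of the increment is needed.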
darkback2
08-27-2009, 01:50 AM
Ok...so here is what I have.
I'm using a graph with a range of 200 for both X and Y. I then subtract 100 from both X and Y, which gives me a range of -100 to 100 for each. Then I multiply both by -1, so I have a flipped range of 100 to -100 for both... and finally I subtract each of these four values from the others. I use that for the offset value for Squidword's legs, and that makes him lean in different directions.
I hope the video makes it all clearer... sorry for the servo whine.
http://www.youtube.com/watch?v=kCraUCcoeM
DB
tom_chang79
08-31-2009, 03:08 AM
lnxfergy, that is some fast quad you've got there... I had no idea the AX-12+ was capable of that sort of speed...
Darkback, your controller is pretty cool... What is it? Is it some tablet PC of some kind?
darkback2
08-31-2009, 11:10 AM
Hey Tom,
Squidword is controlled using a Wibrain (http://www.wibrain.com/). It is an ultra-portable PC and weighs 1.2 pounds with the battery. I suppose I could lighten it up a lot if I shared the battery with the robot.
As for the kinematics, I wonder... For me, there is a difference between the nominal number of degrees of freedom and the actual number. Squidword's legs use 3 servos each, and while that adds up to 3 DOF, the ankle and hip/lift servos just do the opposite of each other. So he really only has 2 actual degrees of freedom per leg?
DB
Adrenalynn
08-31-2009, 01:40 PM
I'm thinking that kinematics and what you appear to be doing are generally not classified in the same definition, DB. That could be part of the confusion.
tom_chang79
09-08-2009, 10:26 AM
OK, I'm struggling to understand how to make a hexapod roll-pitch-yaw while stationary and/or while it's moving.
I got the "roll" part down by modifying the walking code. Let me start by explaining what's known:
Br = Ag * Gr
Br = local coordinate
Ag = successive rotation matrix
Gr = global coordinate
In order to make the robot walk and/or roll, I seed Gr = [Gx, Gy, Gz], where if Gy is nonzero, the hexapod walks forward or backward (depending on the +/- sign); if Gy is fixed at 0 and Gx is nonzero, the hexapod crab walks left or right (depending on the +/- sign). If Gy and Gx are both nonzero, the hexapod walks at an angle (but the body still points in the same direction).
In order for it to rotate, I seed Gy with some value, but fix the local "X" coordinate by not adding any contribution from the transformation. For instance, if Br = [xx, yy, zz], then:
XX gets added to the horizontal length of the legs
YY gets added to the vertical length of the legs (the length on the Y-axis of the legs is always zero)
ZZ gets added to the height of the legs (always the height offset when the legs are at a 90-degree bend)
What I'm struggling to understand is: if the hexapod's body is at rest at [0,0,0] and not moving, and we want the body to roll-pitch-yaw, would this just be [0,0,0] with some angles phi, theta, and psi, where you rotate phi about the z-axis, theta about the y-axis, and psi about the x-axis?
Doing so would yield "0" all the way through, since:
Br = Ag * Gr = [0,0,0] if Gr = [0,0,0]
I don't understand: when I see the roll-pitch-yaw demos from programs such as PowerPod, Zenta's PEP, Xan's hexapod code, and Matt Denton's, is the Gr vector (global coordinate) being seeded with some value during each successive rotation?
For instance, do I do a:
Gr = [0,1,0] for some arbitrary first rotation
Gr = [1,0,0] for some arbitrary second rotation
Gr = [0,0,1] for some arbitrary third rotation

I guess the second big question is: how are you all modeling the body (global) and the legs (local) with respect to each other?
My mistake could be that the two "middle" legs of my hexapod have a 0-degree offset with respect to the global coordinate, so I'm assuming that for the two middle legs:
Br = Ag * Gr, where the angle offset is always 0, so when rotating about, say, angle phi, it's always just phi with no offset.

Do I need to account for the distance between the global origin "O" and the location of each of the six local (leg) origins "o"? I'm starting to suspect that might be my problem with all this...
:confused::confused::confused::confused::confused:
lnxfergy
09-08-2009, 01:43 PM
My take, and how IK is currently implemented on Issy:
If [0,0,0] is the center of the bot, then no rotation about any axis changes that point (the center is still exactly where it was, because we rotated about it!). On the other hand, we have points in space where the legs attach to the body and where the body touches the ground. Because these leg points are a distance (>0) away from the rotation point ([0,0,0]), a rotation matrix will affect their positions.
Thus, given a point where the leg meets the body (I'll call this the coxa point, [xC, yC, zC], C for coxa; for me Z is the vertical axis, and thus zC typically = 0), and a set of rotations about the three axes, you can compute the change in position in space of [xC,yC,zC], which I will call [xR,yR,zR] (R for rotated). Both of these are in body coordinates, relative to the body center. We then have the position of the end effector (the foot), which we'll call [xE,yE,zE], also in body coordinates. Further, we have a gait translation that we wish to apply, [xG,yG,zG]. Thus we find that the input to our 3DOF leg solving equation is:
[xE - (xR + xC) + xG, yE - (yR + yC) + yG, zE - zR + zG]
Which is the distance of the end effector to the coxa point where the leg joins the body. (Note: we don't need to take zC into account, since zC = 0.)
One thing I want to note here is that Xan appears to do this differently, and I'm not sure how much (if any) it changes the results. His phoenix code rotates the end effector and then subtracts out the coxa offsets.
Fergs
[EDIT: I realized I blundered a bit above, I had said that [xR...] was the rotation of [xC...], I meant to say it was the change that occurs when rotating [xC...], not the absolute position. I've fixed the mistake]
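A sketch of the computation above, under my reading of Fergs's notation. The rotation order (roll about X, pitch about Y, then yaw about Z) and the helper names are illustrative assumptions, not his actual code; [xR, yR, zR] is the *change* in the coxa point, exactly as his edit clarifies:

```python
import math

def rot_zyx(v, roll, pitch, yaw):
    """Apply roll (about x), then pitch (about y), then yaw (about z)."""
    x, y, z = v
    y, z = (y * math.cos(roll) - z * math.sin(roll),
            y * math.sin(roll) + z * math.cos(roll))
    x, z = (x * math.cos(pitch) + z * math.sin(pitch),
            -x * math.sin(pitch) + z * math.cos(pitch))
    x, y = (x * math.cos(yaw) - y * math.sin(yaw),
            x * math.sin(yaw) + y * math.cos(yaw))
    return (x, y, z)

def leg_input(E, C, G, roll, pitch, yaw):
    """Input to the 3DOF leg solver:
    [xE - (xR + xC) + xG, yE - (yR + yC) + yG, zE - zR + zG],
    with zC assumed 0 as in the post. E = effector, C = coxa point
    (both in body coordinates), G = gait translation."""
    xC, yC = C
    # change in the coxa point caused by the body rotations:
    rx, ry, rz = rot_zyx((xC, yC, 0.0), roll, pitch, yaw)
    xR, yR, zR = rx - xC, ry - yC, rz
    xE, yE, zE = E
    xG, yG, zG = G
    return (xE - (xR + xC) + xG, yE - (yR + yC) + yG, zE - zR + zG)
```

With all angles zero, [xR, yR, zR] is [0, 0, 0] and the input collapses to effector minus coxa point plus gait offset, which is the plain untilted-body case.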
tom_chang79
09-08-2009, 09:25 PM
Thanks Fergs, that was a very well written explanation of what is going on. I guess I was missing that link of understanding how the addition is supposed to happen...
I guess my biggest problem was that I've always assumed [0,0,0] was at the tips of the legs that don't have any offsets (the middle right and middle left)...
I guess I need to always account for the offset of the legs with respect to the center of the body...
lnxfergy
09-08-2009, 10:17 PM
Yep, all coordinates used during body IK calculations need to be in reference to the body itself. When you then do the 3DOF leg IK equations, you typically use coordinates referenced from the coxa servo's rotation point.
Note that I've corrected an error in my terminology above.
Fergs
tom_chang79
09-09-2009, 01:40 AM
Thanks for the clarification. I was a bit confused about why we add the position after rotation to the original position of the coxa.
One big question I have: when you say the coordinate of the end effector E, isn't this coordinate in reference to its respective coxa, which is at 0 degrees (locally, or with respect to the coxa) for ALL legs on a hexapod like the Phoenix, BUT the ORIGIN of the effector, the coxa, is offset 60 degrees from the origin of the body?
Another question along the same lines:
The equation:
[xE - (xR + xC) + xG, yE - (yR + yC) + yG, zE - zR + zG]
This coordinate is with respect to the body? Don't we need to calculate L = [Lx, Ly, Lz], where L is the coordinate with respect to its LOCAL origin (its respective coxa), in order to crank it through leg IK that works for all legs?
Also, does that equation above assume that:
xG = yG = zG = 0
if the body is not gaiting but just rolling, yawing, or pitching (or a combination)?
Here's an example: the coxa locations of each "group" of legs (clockwise, starting from the "1 o'clock" position) with respect to the origin of the body (the center of the Phoenix's body), looking at the mechanical drawing, are:
[xC, yC, zC]
Group 0 G0 = [1.6875, 3.234, 0]
Group 1 G1 = [2.4805, 0, 0]
Group 2 G2 = [1.6875, -3.234, 0]
Group 3 G3 = [-1.6875, -3.234, 0]
Group 4 G4 = [-2.4805, 0, 0]
Group 5 G5 = [-1.6875, 3.234, 0]
To calculate each of the leg coordinates, do I first do:
Br = Ag*G0
where Ag = the successive rotation matrix, and G0, G1, G2, and so on for each respective leg;
Br is the resultant local coordinate.
After I get Br = [xx,yy,zz], do I then do:
[xE - (xx + xC) + xG, yE - (yy + yC) + yG, zE - (zz + zC) + zG]
(note that I've replaced xR with xx, yR with yy, replaced zR with zz, and added the zC term)
for each leg?
I guess to sum up the question: what do I do with the legs at the four corners (the 1 o'clock, 11 o'clock, 5 o'clock, and 7 o'clock positions) that have an offset of, say, 60 degrees? Do I account for the coordinate of the end effector with respect to each coxa, assuming that the starting point is at 0 degrees with the coxa?
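One common way to handle the corner legs is sketched below: compute the difference vector in body coordinates exactly as in Fergs' equation, then rotate that vector by minus the coxa mounting angle to express it in the leg's local frame, so a single leg IK routine serves all six legs. The mounting angles and the yaw-only rotation here are illustrative assumptions, not the Phoenix code's actual values:

```python
import math

def rot_z(p, a):
    """Rotate (x, y, z) about the vertical axis by a radians."""
    x, y, z = p
    c, s = math.cos(a), math.sin(a)
    return (c * x - s * y, s * x + c * y, z)

# Hypothetical mounting angles, clockwise from the 1 o'clock leg:
MOUNT_ANGLES = [math.radians(a) for a in (60, 0, -60, -120, 180, 120)]

def local_leg_vector(coxa, mount_angle, effector, gait, body_yaw):
    """Body-frame difference from coxa to effector (plus gait), re-expressed
    in the leg's own frame by undoing the coxa mounting angle."""
    xC, yC, zC = coxa
    xN, yN, zN = rot_z(coxa, body_yaw)
    xR, yR, zR = xN - xC, yN - yC, zN - zC   # change, not absolute position
    xE, yE, zE = effector
    xG, yG, zG = gait
    diff = (xE - (xR + xC) + xG, yE - (yR + yC) + yG, zE - zR + zG)
    return rot_z(diff, -mount_angle)
```

For the middle legs the mounting angle is zero, so this reduces to the plain body-frame difference; for the corner legs the final rot_z is what absorbs the 60-degree offset.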
tom_chang79
09-14-2009, 11:11 PM
Ok, this concept has been marinating in my head for quite some time. The vector summing you showed me, Fergs, was conceptually displayed in one of my dreams a few days ago :eek:
Here's one thing I don't get.
If "change in position in space of [xC,yC,zC] which I will call [xR,yR,zR] (R for rotated)."
Does this mean that:
Suppose we have this general transformation equation again:
Br = Ag * Gr, where Gr is your global coordinate, Br is your local, and Ag is your transformation matrix...
Does this mean that:
Br = (xR+xC, yR+yC, zR+zC)
or did you actually mean this:
Br = (xR, yR, zR)
because I tried this:
Er = end effector coordinate from body
Gr = Gait coordinate from body
Br = coordinate of the coxa from body
I tried to do this:
Er - Br + Gr
but it didn't work... I don't understand why.
EDIT:
I guess my question is REALLY this:
if R is the change of C (coxa) position then:
Br = Ag * C
then,
R = C - Br
then, wouldn't:
xR + xC = xC - xBr + xC = 2*xC - xBr
???
tom_chang79
05-12-2010, 03:38 AM
My take, and how IK is currently implemented on Issy:
If [0,0,0] is the center of the bot, then no rotation about any axis changes that point (the center is still exactly where it was, because we rotated about it!). On the other hand, we have points in space where the legs attach to the body, and where the body touches the ground. Because these leg points are a distance (>0) away from the rotation point ([0,0,0]) a rotation matrix will affect the position of those points.
Thus, given a point where the leg meets the body (I'll call this the Coxa point, [xC, yC, zC], C for coxa; for me Z is the vertical axis, and thus zC typically = 0), and a set of rotations about the three axes, you can compute the change in position in space of [xC,yC,zC], which I will call [xR,yR,zR] (R for rotated). Both of these are in body coordinates, relative to the body center. We then have the position of the end effector (the foot), which we'll call [xE,yE,zE], also in body coordinates. Further, we have a gait translation that we wish to apply, [xG,yG,zG]. Thus we find that the inputs to our 3DOF leg solving equation are:
[xE - (xR + xC) + xG, yE - (yR + yC) + yG, zE - zR + zG]
which is the distance from the end effector to the coxa point where the leg joins the body. (Note: we don't need to take zC into account, since zC = 0.)
One thing I want to note here is that Xan appears to do this differently, and I'm not sure how much (if any) it changes the results. His phoenix code rotates the end effector and then subtracts out the coxa offsets.
Fergs
[EDIT: I realized I blundered a bit above, I had said that [xR...] was the rotation of [xC...], I meant to say it was the change that occurs when rotating [xC...], not the absolute position. I've fixed the mistake]
I am so glad that Andrew stickied this topic. I have little to no idea what the heck I was doing or was thinking and this thread really served as a "log" of my thoughts about a year ago...
I quoted you, Fergs, because this point finally dawned on me. It was thinking about the "change" of the end effector after a rotation that got me seeing that the same "change" is the offset that needs to be presented to each of the legs in order to accomplish the body roll/pitch/yaw.
Example 6 on page 39 of Reza's book really affirmed this notion. It was such a simple example that I often overlooked its implications for our beloved hexapods...
If the entire hexapod were in the air, with the legs at their "center" position (found by calibration), after some successive body rolls (about the center of the hexapod's body; for the Phoenix with Atom Pro 28 + mini ABB, near the pair of tantalum bypass caps), the tips of the legs would be at an offset (again, the legs have never moved; they're still at their "center" position).
If one were to do a vector difference of the original position and the calculated position after the transformation, you'd get the difference:
Gr = Q * Br
Gdifference = Gr - Br
Where Gr is the new global coordinate, Br is the initial position of the end effector (tips of each leg) and Q is the transformation matrix. Gr, Br, and Gdifference are all vectors (X,Y,Z).
You take the Gdifference, and you tell each of the legs (through the XYZ-to-PWM converter derived from the closed-form trig equations).
I need to start coding this... Ah father time, if only you gave me 25 hours in a day instead of 24 hours... I'd still most likely blow it on sleep... :p
SteamAutomaton
09-02-2010, 08:28 PM
I may have missed this, but where is a beginning tutorial on this subject? I am confused about how many legs are being used: 4, 6, 8?
I need to start coding this... Ah father time, if only you gave me 25 hours in a day instead of 24 hours... I'd still most likely blow it on sleep... :p
(silliness)
Who else here is working 25h/8d/53w a year?:p
(/silliness)
Yours,
SA:)
lnxfergy
09-03-2010, 03:25 PM
I may have missed this, but where is a beginning tutorial on this subject? I am confused about how many legs are being used: 4, 6, 8?
I don't think there is a beginning tutorial; you might find individual tutorials on things such as IK for a 2-link planar arm, but the general theory of kinematics is really not a beginner's topic. Generally speaking, we're using linear transforms to convert coordinates from one frame of reference to another (given the constraints of the legs). It's almost entirely just an application of geometry and linear algebra, and then some work to optimize the code version (no reason to do all the matrix calculations, or even store the whole matrix, when many values are always 0).
As for the # of legs: it's effectively irrelevant. If your legs are similar, a single IK solver can generate position values for any # of legs. The gait generator would be the part that is tuned for the # of legs, because it has to pick N endpoints for N legs.
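To make that concrete, here is a toy sketch (my own illustration, not Fergs' code): the gait phase table is the only place the leg count appears, and the same leg solver would then be called once per endpoint.

```python
def tripod_phases(n_legs):
    """Assign legs alternately to two groups; valid for any even leg count."""
    return [i % 2 for i in range(n_legs)]

def endpoints(n_legs, beat, step_len=1.0):
    """Pick one endpoint offset per leg: the group matching the current
    beat swings forward by step_len, the other group stays planted."""
    return [step_len if phase == beat % 2 else 0.0
            for phase in tripod_phases(n_legs)]
```

Calling endpoints(6, beat) or endpoints(8, beat) changes nothing downstream; the IK solver just receives a list of targets.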
Fergs
SteamAutomaton
09-03-2010, 07:54 PM
Generally speaking, we're using linear transforms to convert coordinates from one frame of reference to another (given the constraints of the legs). It's almost entirely just an application of geometry and linear algebra, and then some work to optimize the code version.
I guess that would mean that I would need to dust off those high-math brain cells.:o Or C&P the fine work shared by the experts here.:D
As for the # of legs: it's effectively irrelevant. If your legs are similar, a single IK solver can generate position values for any # of legs. The gait generator would be the part that is tuned for the # of legs, because it has to pick N endpoints for N legs.
Fergs
Thank you, that explains why there's no mention of the # of legs.
SA;)
tom_chang79
11-01-2010, 03:59 PM
I don't think there is a beginning tutorial; you might find individual tutorials on things such as IK for a 2-link planar arm, but the general theory of kinematics is really not a beginner's topic. Generally speaking, we're using linear transforms to convert coordinates from one frame of reference to another (given the constraints of the legs). It's almost entirely just an application of geometry and linear algebra, and then some work to optimize the code version (no reason to do all the matrix calculations, or even store the whole matrix, when many values are always 0).
As for the # of legs: it's effectively irrelevant. If your legs are similar, a single IK solver can generate position values for any # of legs. The gait generator would be the part that is tuned for the # of legs, because it has to pick N endpoints for N legs.
Fergs
+1. Fergs hit it right on the spot.
The leg-coordinate-to-pulse transformation was calculated using simple geometry/trig. You can solve it by holding each of the dimensions as a variable and, using various identities, solve them into closed form (within limits, of course).
The body IK transform is really done with matrices, but you can solve each element in closed form, then translate those closed-form expressions into code...
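For reference, a common closed-form solution for a 3DOF (coxa/femur/tibia) leg looks like the sketch below. Sign and zero-angle conventions vary from build to build, so treat the conventions here as assumptions to calibrate against your own servo zeros rather than the exact code being discussed:

```python
import math

def leg_ik(x, y, z, l_coxa, l_femur, l_tibia):
    """Closed-form 3DOF leg IK: foot target (x, y, z) in the leg's local
    frame (z up; a foot below the coxa has negative z) -> joint angles
    (coxa, femur, knee) in radians."""
    coxa = math.atan2(y, x)                 # swing the leg toward the target
    r = math.hypot(x, y) - l_coxa           # horizontal reach past the coxa
    d = math.hypot(r, z)                    # straight-line femur-to-foot distance
    if d > l_femur + l_tibia:
        raise ValueError("target out of reach")
    # law of cosines for the knee, then the femur elevation
    knee = math.acos((l_femur ** 2 + l_tibia ** 2 - d ** 2)
                     / (2 * l_femur * l_tibia))
    femur = math.atan2(z, r) + math.acos((l_femur ** 2 + d ** 2 - l_tibia ** 2)
                                         / (2 * l_femur * d))
    return coxa, femur, knee
```

A quick sanity check: with the foot straight out at full horizontal extension, the coxa and femur angles are 0 and the knee is fully open (pi in this convention).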
I am currently moving towards fixed point, using tables and predefined constants to speed up the trig calculations. I guess I'm trying to offload processing time and shift it towards memory...
My progress however, is moving quite slow... My robot sponsor (job) is taking up all my time... :p
SteamAutomaton
11-02-2010, 06:07 AM
+1. Fergs hit it right on the spot.
The leg-coordinate-to-pulse transformation was calculated using simple geometry/trig. You can solve it by holding each of the dimensions as a variable and, using various identities, solve them into closed form (within limits, of course).
The body IK transform is really done with matrices, but you can solve each element in closed form, then translate those closed-form expressions into code...
I am currently moving towards fixed point, using tables and predefined constants to speed up the trig calculations. I guess I'm trying to offload processing time and shift it towards memory...
Correct me if I am wrong, but do the matrix tables assume a flat surface? If so, how would you account for rough terrain?:confused:
My progress however, is moving quite slow... My robot sponsor (job) is taking up all my time... :p
My robot sponsor takes up my time also, along with my education rejuvenation (college) taking up both time and my brain's processing clock cycles.:rolleyes:
SA;)
tom_chang79
11-02-2010, 04:02 PM
Correct me if I am wrong, but do the matrix tables assume a flat surface? If so, how would you account for rough terrain?:confused:
Actually, the matrices used during the transform do not assume any surface...
The way "walking" robots use these is, conceptually, backwards from how it's explained in most kinematics books out there, which describe an assembly robot arm. The robot arm is assumed to be bolted to the ground, and the tip or end effector has to travel to some (x,y,z) globally given that it is attached at an (X,Y,Z) locally.
For walking robots, you can think of the end effector moving to "push" against the ground/floor, to move the body which it is bolted to...
Also, kinematics does not assume any form of terrain. The way I'm using it, it just tells me how something global gets affected by a local effect (or vice versa).
As for rough terrain, the assumption is that it would be unknown and random. You would have to add some sensors to detect the offsets (what is the "zero" point of the end effector, if "zero" is the leg at some predefined angle, say 90 degrees with respect to the "ground", which may be the face of a steep hill), determine with those sensors how you want to orient the body (roll, yaw, and pitch come into play), and then use those offsets and the orientation of the body to make it walk...
sarendt
02-27-2011, 11:21 PM
Would it be possible to create a "Kinematics" or "Math" topic for questions about them?
The question I want to pose is: "Does one need to learn Forward Kinematics (FK) intimately in order to move on to Inverse Kinematics (IK)?"
Thanks,
Scott
lnxfergy
02-28-2011, 09:57 AM
Would it be possible to create a "Kinematics" or "Math" topic for questions about them?
The question I want to pose is: "Does one need to learn Forward Kinematics (FK) intimately in order to move on to Inverse Kinematics (IK)?"
Thanks,
Scott
Well, "learning FK" for joint position calculations would basically equate to understanding how to apply transformations given known joint angles, so I would say yes. FK for position can be done using high school geometry and trigonometry. If you move to larger actuators, or start to worry about forward kinematics for joint velocities, you will have to introduce slightly more complex things, like Jacobians.
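As a taste of that "high school geometry" version, forward kinematics for a 2-link planar arm is just two cosines and two sines. Measuring theta2 relative to link 1 is a convention I've assumed here; textbooks vary:

```python
import math

def fk_2link(theta1, theta2, l1, l2):
    """Tip position (x, y) of a 2-link planar arm from its joint angles
    (radians); theta2 is the elbow angle relative to link 1."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

Straighten both joints and the tip lands at (l1 + l2, 0); bend the elbow 90 degrees and it lands at (l1, l2), which is an easy check to do by hand.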
Fergs
sarendt
03-01-2011, 02:12 AM
Ok, Thanks Fergs!
scott
gonzo
03-21-2011, 07:57 AM
Hi guys, it's an interesting topic that you're talking about.
I'm not an expert in kinematics, but I've studied it for a year and implemented it
in some of my robots, like a Phoenix hexapod and a Katana industrial arm simulation.
But I think it's time to talk about a whole new level; maybe DYNAMICS would be a good
point to start. I've been dreaming about whole-body force-interactive robots.
Imagine adding COMPLIANCE CONTROL to the Phoenix hexapod so it reacts when we push or pull
its legs. That would be wicked cool...:veryhappy:
tician
03-21-2011, 11:12 AM
Compliance control does not really have to involve dynamics* in an engineering/calculation sense, and is actually kind of simple to implement using Dynamixel servos. The Dynamixels report a load value estimated by sensing the current through the servo's motor. If the load is greater than a threshold, then decrease the applied torque and/or change the goal position. Of course, compliance control using hobby servos without any internal load or position feedback would require external force/load sensors on the leg segments and/or end effector to detect objects.
*<pedantry> Kinematics is the half of dynamics describing just the motion of an object (path of travel and time taken). What you are referring to as 'DYNAMICS' is kinetics, the other half of dynamics which describes the effects of mass (gravity and inertia) and external forces on an object. </pedantry>
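The load-threshold idea above can be sketched in a few lines. SimServo is a stand-in for real servo I/O (e.g. reading a Dynamixel's present-load register); the threshold and step size are made-up numbers:

```python
class SimServo:
    """Minimal stand-in for a load-sensing servo; real bus reads/writes
    are replaced with plain attributes for illustration."""
    def __init__(self, position=512, load=0):
        self.position = position   # goal position, raw counts
        self.load = load           # signed load estimate from motor current

LOAD_LIMIT = 300   # raw threshold above which we yield (assumption)
BACKOFF = 5        # position counts to give way per control cycle

def comply(servo):
    """One compliance cycle: if the sensed load exceeds the threshold,
    shift the goal position in the direction of the external force."""
    if abs(servo.load) > LOAD_LIMIT:
        servo.position += BACKOFF if servo.load > 0 else -BACKOFF
    return servo.position
```

Run this every control tick and a pushed leg "gives" instead of fighting; tune LOAD_LIMIT so normal walking loads don't trip it.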
tom_chang79
04-12-2011, 11:13 PM
Hi guys, it's an interesting topic that you're talking about.
I'm not an expert in kinematics, but I've studied it for a year and implemented it
in some of my robots, like a Phoenix hexapod and a Katana industrial arm simulation.
But I think it's time to talk about a whole new level; maybe DYNAMICS would be a good
point to start. I've been dreaming about whole-body force-interactive robots.
Imagine adding COMPLIANCE CONTROL to the Phoenix hexapod so it reacts when we push or pull
its legs. That would be wicked cool...:veryhappy:
Well, in theory, you can add sensors to find the load of a servo, but it will take a bit of work. You will have to wire a current-sensing resistor between the Bot Board/SSC-32/ARC-32 board and your servo's power line and sense the current by measuring the voltage across it... You then take the voltage across the resistor, pipe it into a differential amp, and connect the output of the differential amp to the A/D ports of the Bot Board/ARC-32...
This can get messy with 18+ servos. Robotis servos have a huge advantage in this department in that each servo has its own micro inside that serves as the serial interface, current/load sensing, temperature sensing, encoder, etc...
4mem8
04-16-2011, 04:43 AM
This is all very interesting stuff guys and I love reading it, but sad to say it's way above my head to understand. I'd love to implement this IK into my ED-209, aka the Scout 6DOF, but I guess I'm dreaming lol. But carry on guys, all good to read. :) Oh, btw, is FlowStone any good for bipeds? I might be able to understand a GUI icon-based interface better than code.
tom_chang79
04-17-2011, 05:31 PM
This is all very interesting stuff guys and I love reading it, but sad to say it's way above my head to understand. I'd love to implement this IK into my ED-209, aka the Scout 6DOF, but I guess I'm dreaming lol. But carry on guys, all good to read. :) Oh, btw, is FlowStone any good for bipeds? I might be able to understand a GUI icon-based interface better than code.
Welcome back MIKE!!! I actually have never heard of FlowStone. Zenta (Kare) on these forums and the Lynxmotion forums has a successful biped working. It's a very Scout-like reverse-knee design, which he incorporated kinematics into. Maybe he has an Excel PEP program or something that he can share with you...
I've abandoned all work on bipeds for now since I'm trying to concentrate on hexapods... Mike, I'm not sure if I've ever recommended this book to you, but you'll see that I've recommended it many times:
Amazon.com: Theory of Applied Robotics: Kinematics, Dynamics, and Control (2nd Edition) (9781441917492): Reza N. Jazar: Books
This is the only book I needed for coming up with body translations... But I would highly recommend browsing the pages of the book online before you buy; the same text has a different effect on different people...
:)
4mem8
04-18-2011, 01:29 AM
Hi Tom, yeah it's good to be back after being away so long. Thanks for the recommendation on that book, I'll have a look, ty. Sorry to hear that you're not into bipeds lol, but hexapods are cool too. Currently working on my T1 from Terminator lol. It will be remote controlled to start with, then converted to a PC mini-ITX based system.
Omg Tom, just looked at the book on Amazon lol, the equations are way beyond me, lol, that's why I never get to program what I build lol.
sarendt
04-18-2011, 03:00 AM
4mem8 - I also highly recommend the same book! Your concern about the equations is understandable, but after having studied the book for a few months, the equations aren't nearly as bad as they first look. The important part is to walk through the book slowly; as you move forward, each equation is built on some part of the last. If you jump into the book, even in the middle of the second chapter, the equations would look like a nightmare! Hope that helps!
Scott
4mem8
04-18-2011, 04:44 AM
Sarendt: yeah I tend to agree with you, it's very much like digital electronics. I build a lot of circuits; look at the whole package and it's daunting, but break it down into subsections and digital blocks and it becomes easy to construct. Check out these circuits that I built, a 32-channel IR remote halfway down the page, quite daunting but broken down, no problem: http://www.robosapienv24mem8.page.tl/4mem8Robot.htm It's just that I'm not good at math lol, so I'm not too sure if I can understand it that well, but I will go ahead and order it and see.
parallax
04-18-2011, 11:08 AM
Just another endorsement for that book~
After seeing it in this post I went to my University's library and checked it out. It's a great read so far!
4mem8
04-18-2011, 03:02 PM
Sounds good then parallax, looks like my brain will be scrambled lol after reading it, and I'll probably be ignoring my wife for a few months lol. Mmmmmm, now is that good or bad lol.
tom_chang79
04-18-2011, 03:37 PM
I just want to point this out  something that I was stuck with understanding kinematics and legged robots and the software behind it:
Implementing inverse kinematics DOES NOT make your robot walk... Kinematics can only tell you where the end effector (tip of the leg) needs to be, given that the body is at one (x,y,z) and going to the next (x,y,z), etc...
Here are the "layers" of code you will need to make it walk (at least, the layers my software is structured into):
Navigation - This is the layer where I tie in my sensor inputs to feed the gaiting layer below. For those of you using remotes/radio control, you can replace this layer with the controller interface software (such as for the PS2 controller, like in many of LM's kits).
Gaiting/Body Position - This layer is where the walking gaits and the body roll-pitch-yaw are written. Only a few constants are needed (such as how big each step is, how many paces, etc.). This layer also sequences the legs, since you have to have at least ONE leg up in order to "cycle" through a walk/rotation/run/etc... For example, this is a sample line of code that I use to call up a certain gait:
ripplewalk(1.0,0.0,2.0,5)
This makes the bot do a ripple walking gait with a step size of 1 inch on the x-axis, 0 inches on the y-axis, and 2.0 inches on the z-axis, and it will walk this way for 5 paces.
Inverse Kinematics - This layer takes the constants from the gaiting/body position layer and calculates the position of the end effector (leg tip) needed to accomplish a certain body and leg orientation and position. (This is where Reza's book comes in handy.)
Angle/Pulse Calculation - This layer takes the position of the end effector and translates it into angles for each servo, so that the end effector (leg tip) ends up at the position calculated above. Once you have the angle, it's a matter of simple proportion to calculate the correct PWM for each servo. (This layer uses simple trigonometry and trig identities, since the femur, tibia, and coxa lengths are all known.)
LOWEST - Servo control. For most of us, this is just a small piece of code that sends a string to the serial port to control our servos through the SSC-32. For ARC-32 users, this could just be PWM syntax... This layer also takes care of the different "signs" you have to put in, because you may have to add or subtract certain PWM offsets depending on which "side" the leg is attached to (the left side of the bot or the right side, which has a mirroring effect).
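The "simple proportion" in the Angle/Pulse layer, mirroring included, can be sketched like this. A 1500 us center and a 1000-2000 us range over 180 degrees are typical hobby-servo numbers, but they're assumptions you'd calibrate per servo:

```python
def angle_to_pwm(angle_deg, center_us=1500, us_per_deg=1000 / 180, mirrored=False):
    """Map a joint angle (degrees from the servo's center) to a pulse width
    in microseconds. mirrored flips the sign for legs on the opposite side
    of the body (the mirroring effect described above)."""
    if mirrored:
        angle_deg = -angle_deg
    return int(round(center_us + angle_deg * us_per_deg))
```

The same angle then produces, say, 2000 us on a right-side leg and 1000 us on its left-side mirror image.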
sarendt
04-19-2011, 01:47 AM
@4mem8 - I'm sure you will enjoy your time; maybe you should see if your wife wants to help? Mine asked if she could learn kinematics with me and help me :*) Also, your bots look really sweet! I am reading through your page now.
@tom_chang79 - that was a great review of the software and motion control! I always thought of IK as the glue between what you wanted and getting there. So it was up to me to decide where the robot was to go (either through remote control or some type of navigation algorithm), and then I fed that to the IK software for that particular robot. Details about speed, gait, and ground topography would also be fed into the IK software. This would then produce values that tell us how much to move the servos. That seems to be in sync with what you're saying; you just said it better.
Cheers,
Scott
4mem8
04-19-2011, 02:54 AM
Tom: thank you for that info. I still think it is out of my comfort zone and hard for me to grasp, but I enjoy building them lol.
Sarendt: lol, my wife help me? Er, don't think so, he he. She is about as interested in robotics as a fish on land lol. Would be nice if she was. :)
tom_chang79
04-19-2011, 02:52 PM
Mike,
If you want to dip your feet in programming, I STRONGLY recommend this book:
Amazon.com: The Waite Group's New C Primer Plus (9780672303197): Mitchell Waite, Stephen Prata: Books
It's about 5 bucks, and it's an OLD OLD ANSI C programming book. I've had this book since I was a little boy and this was the one reference book I've always used when programming in C.
My opinion, if you can write some C codes, you'll have no problems transitioning into Basic, Java, and etc...
This book is great. Just read it chapter by chapter; it explains many of the syntax elements and structures VERY well... Better than most of the comp sci teachers I had over the years...
4mem8
04-19-2011, 03:01 PM
Thank you Tom, I'll try to get one after Easter is over. :)
Edit lol: just purchased it from Amazon lol
jhertzberg
04-21-2011, 12:55 PM
Well, in theory, you can add sensors to find the load of a servo, but it will take a bit of work. You will have to wire a current-sensing resistor between the Bot Board/SSC-32/ARC-32 board and your servo's power line and sense the current by measuring the voltage across it... You then take the voltage across the resistor, pipe it into a differential amp, and connect the output of the differential amp to the A/D ports of the Bot Board/ARC-32...
This can get messy with 18+ servos. Robotis servos have a huge advantage in this department in that each servo has its own micro inside that serves as the serial interface, current/load sensing, temperature sensing, encoder, etc...
Maybe not,
What if you just monitor the 2 servos in the Coxa (shoulder)?
That would only be 8 total. It would also cut down on what you need to process.
Jeremy
gonzo
04-22-2011, 12:14 PM
@ tom_chang79
@jhertzberg
I'm pretty sure that is not necessary when we talk about compliance control. I've read some books as references and am starting to form a clear understanding of compliance.
I think compliance control is one of the main keys to making Matt Denton's hexapod walk so well on rough terrain.
Here is what I've been thinking about:
In order to make a compliant leg (that is, a leg that responds to uneven surfaces), we have to pick a model that best suits our compliant leg, and I think that model is a mass-spring-damper system. Implemented with a P.I.D. control method, we can make our leg follow a trajectory that keeps the contact force constant.
That's the theory I'm currently working on, but it will take some time for me to make it work in reality.
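A PD sketch of that constant-contact-force idea (my own illustration; the gains and units are arbitrary): treat the force error as the input and nudge the foot's vertical target, which is effectively what a virtual spring-damper amounts to.

```python
class ForceCompliance:
    """Nudge a foot's vertical target so the measured contact force tracks
    a setpoint; a discrete PD loop standing in for the spring-damper model."""
    def __init__(self, f_target, kp=0.1, kd=0.01, dt=0.02):
        self.f_target = f_target
        self.kp, self.kd, self.dt = kp, kd, dt
        self.prev_err = 0.0

    def update(self, f_measured, foot_z):
        err = self.f_target - f_measured
        derr = (err - self.prev_err) / self.dt
        self.prev_err = err
        # too little force -> lower the foot (more negative z); too much -> lift
        return foot_z - (self.kp * err + self.kd * derr)
```

The integral term is omitted here for brevity; a full P.I.D. version would add it to remove the steady-state force error.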
Btw, how's your hex now tom? Has he awakened yet? :happy:
Really excited to see it compete with mine in terrain adaptation! haha.. :veryhappy:
Baranyk
04-17-2012, 01:55 PM
Hey all,
I know I'm rather new to all this, but I do have some ideas I'd like to share to try. At the moment, my robot is... nonexistent, so I can't really comment on the mathematics and practical application. But I think I can comment on some of the code and maybe throw some ideas out on how we can optimize a bit.
Regarding CPU cycles and memory:
Aside from pure code optimization (which is grand, but my strengths in matrices are currently questionable at best), perhaps a more distributed architecture might be called for. For example, if you use two controller boards (2x ArbotiX, for example) and then utilize the existing serial communications line (XBee), could you not split off some of the equations for half of the legs? Granted, a master board would have to be designated for the equations involving global placement, but given a properly parameterized serial packet, the requested math ought to split if you're only working on a single leg.
Pseudo:
master: I want current positions from these legs (temporary storage, can delete once done > opens up RAM since variables aren't stored on the master, but on the slave)
slave: returns positions
master: I want to execute body movement this far forward, I solve for legs [ 1 : n/2 ]
slave: receives command to solve for legs [ (n/2)+1 : n ]
slave & master confirm completion, synchronize, execute. Master deletes memory kept on the slave. The completion confirmation and the synchronization don't even need to take place on the serial line; you can use free DIO (assuming you have free DIO...)
If this has been thought about, shoot me down and I will hang my head in newbie shame.
Zenta
04-18-2012, 01:56 AM
I believe the method you are referring to is similar to how this guy solved it:
http://www.youtube.com/watch?v=3Q0LgfZaug
http://www.youtube.com/watch?v=hQ6J5qHqblo
I'm quoting him from youtube:
Xwalker is a quadruped robot that I've designed and built by myself. All of the robot's parts (electronics, mechanics and software) except the servos were created from the ground up.
On board there are 5 microcontrollers (ATmega) connected by SPI, which are responsible for different tasks:
1. "Brain" - controls the other µCs, the SPI network, LCD, accelerometer and gyroscope (via I2C), and calculates a Kalman filter.
2. "Power" - measures battery voltage (two LiPo batteries: one for electronics and one for servos) and current consumption. Adjusts suitable voltages (for servos and electronics) and also measures temperature via a 1-wire network and DS18B20 sensors at 5 important locations in the robot.
3. "BT_RX_TX" - responsible for transmitting/receiving and coding/decoding data from the Bluetooth modules.
4. "SERWO1" - controls 6 servos and 2 foot sensors, and measures the voltage on the servos' potentiometers to obtain information about servo overload.
5. "SERVO2"  has the same function as SERVO1 but for the other two legs.
The Xwalker is controlled in real time by a PC via 2 Bluetooth connections - one to transmit and one to receive data. I wrote a special multi-threaded program for it. The program also displays all the information from the robot on a screen and enables the user to control the robot.
tom_chang79
04-23-2012, 04:08 PM
Hi Baranyk,
The point you brought up reminds me of a topic discussed during my undergrad days: parallel programming. Splitting off tasks is a neat idea, especially when you can push off the complicated things, or the things that require long execution time (where the "O" becomes polynomial or beyond, which is quite common with these trig-intensive IK calculations).
However, the challenge then becomes memory management and synchronization, since ultimately everything revolves around the time frame of the servos' transit from position A to B. Granted, there are ways to intervene using interrupts; however, synchronization, memory management, and delegation of tasks/resources are the keys to parallel programming.
On my bots, I do employ a somewhat crude method of distributed computing; I say crude because it's only between two boards, and it's more of a master-slave style of distributed computing than true parallelism.
I'm using the SSC-32, and even on my current octopod, where I'm using the ARC-32, I'm still using the SSC-32 servo controller, since I don't have to worry about the movement of servos; it's all done through a single serial pipe.
I may dabble with the ARC-32's ability to control servos in the near future, but for now, using an SSC-32 is such a convenient way to port code. Letting the SSC-32 control the servos means that if I want to replace the ARC-32 in the future with something even more powerful, I can do so without having to worry about PWM and such, since the SSC-32 takes care of the PWM interface to the servos and turns it into a single TTL serial stream.
The major overhaul I need to do with my code currently is converting my floating-point math to fixed point. It's been sitting on the back burner for a while now :tongue:. Even so, my current IK uses wasteful float/double functions, but the robot has yet to stutter.
However, I realize that the demo code only demonstrates the IK engine and the gaiting. Once you start closing this open-loop control with sensors (which will ultimately require interrupts) and fuzzy logic for decision making, I am 99% sure that I will start to see stutters, since the float/double trig functions are very wasteful of system resources. A trig table would alleviate this, at the cost of memory, but I believe it's well worth the cost, since it's a bit ridiculous to ask the processor to recalculate the same sine, cosine, arctan, etc. functions over and over...
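The trig-table trade-off might look like the sketch below: precompute one quadrant of sine in fixed point (Q1.14 is an arbitrary choice here) and use symmetry for the rest, so the control loop never calls a float sin().

```python
import math

SCALE = 1 << 14   # Q1.14 fixed point: 16384 represents 1.0
# one-quadrant table, 0..90 degrees inclusive, built once at startup
SINE_TABLE = [int(round(math.sin(math.radians(d)) * SCALE)) for d in range(91)]

def fixed_sin(deg):
    """Sine of an integer degree angle in Q1.14, via the table plus symmetry."""
    deg %= 360
    if deg <= 90:
        return SINE_TABLE[deg]
    if deg <= 180:
        return SINE_TABLE[180 - deg]
    if deg <= 270:
        return -SINE_TABLE[deg - 180]
    return -SINE_TABLE[360 - deg]
```

On a microcontroller the table would live in flash/ROM (91 16-bit entries here; finer angular resolution costs proportionally more memory), and cosine falls out of the same table via cos(x) = sin(x + 90).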
I think compliance is something I will eventually have to dabble with in the near future. Currently, because I'm using hobby servos and not full-on "actuators" like Robotis' AX-12+, sensors would need to be added in order for compliance to work. I know that many of the guys with Lynxmotion-based bots have used the touch/pressure sensors very successfully; however, I think advanced implementations of these features would ultimately require something like Robotis' actuators, since they have full sensor suites, such as current measurement (which you can use to indirectly estimate the amount of torque being exerted), temperature (are the servos heading towards implosion?), etc.
The great thing about using "dumb" hobby servos is that it forces the end user to implement his/her own mechanisms for compliance, forces the end user to actually understand it at a lower level than you would if it was packaged in a black box.
Progress is slow for me, however; I'm lucky if I get about two hours a night to work on my bots... :tongue: Having a huge gap of time in between doesn't help either. Last time I worked on updating my IK engine, I spent about 4 hours re-implementing something I had already implemented the year before... :tongue: