View Full Version : Genetic Programming and Evolutionary Robotics w/the Khepera II

08-08-2008, 04:23 PM
Hello All,

I've been lurking on the forums for a while, though I don't post much. There was a discussion about the Singularity a while back, and a few members expressed an interest in speaking more about AI, GP and NN.

So, I just want to update people on where my research is strong and where it is lacking.

For a number of months I have been reading as much as I can about the following robotic platform, the Khepera II.


This book, http://laral.istc.cnr.it/evorobot/simulator.html, is the leading guide for Evolutionary Robotics, along with numerous other PhD-thesis-type papers.

Dario Floreano is one of the leading researchers in the field, http://lis.epfl.ch/member.php?SCIPER=111729; some of his latest work is here: http://discovermagazine.com/2008/jan/robots-evolve-and-learn-how-to-lie and here: http://technology.newscientist.com/channel/tech/dn11248

I've got my hands on a Khepera II from the nice folks over at http://www.roadnarrowsrobotics.com/. They are one of the only US sellers of the Khepera.

I've worked with some simple proof-of-concept C code for using genetic programming to solve polynomials given X and Y values. Pretty cool, but not much practical value.
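For anyone curious what that kind of polynomial experiment looks like, here's a minimal sketch in C. It evolves the coefficients of a quadratic against sample (X, Y) points. Note this is a fixed-length genetic algorithm rather than full tree-based GP, and everything here — the target polynomial, the `run_demo` helper, the population size, the mutation step — is a made-up example, not anyone's published method.

```c
/* Hypothetical sketch: evolving the coefficients of a quadratic
 * a*x^2 + b*x + c to fit sample (x, y) points.  A fixed-length
 * genome of three doubles, elitist selection, Gaussian-ish noise. */
#include <stdlib.h>

#define POP     40
#define GENS    400
#define GENES   3          /* coefficients a, b, c */
#define SAMPLES 5

static const double xs[SAMPLES] = { -2, -1, 0, 1, 2 };
static double ys[SAMPLES];         /* filled in from the target poly */

/* Fitness: sum of squared errors over the sample points (lower is better). */
static double sq_error(const double *g)
{
    double err = 0;
    for (int i = 0; i < SAMPLES; i++) {
        double yhat = g[0]*xs[i]*xs[i] + g[1]*xs[i] + g[2];
        err += (yhat - ys[i]) * (yhat - ys[i]);
    }
    return err;
}

static double rnd(double lo, double hi)
{
    return lo + (hi - lo) * rand() / (double)RAND_MAX;
}

/* Evolve toward the example target y = 2x^2 - 3x + 1; copies the
 * winning genome into best[] and returns its squared error. */
double run_demo(double *best)
{
    srand(1);
    for (int i = 0; i < SAMPLES; i++)
        ys[i] = 2*xs[i]*xs[i] - 3*xs[i] + 1;

    double pop[POP][GENES];
    for (int i = 0; i < POP; i++)
        for (int j = 0; j < GENES; j++)
            pop[i][j] = rnd(-10, 10);

    int fit = 0;
    for (int g = 0; g < GENS; g++) {
        /* find the fittest individual (lowest error) */
        fit = 0;
        for (int i = 1; i < POP; i++)
            if (sq_error(pop[i]) < sq_error(pop[fit])) fit = i;
        /* next generation: mutated copies of the winner (elitism) */
        for (int i = 0; i < POP; i++) {
            if (i == fit) continue;
            for (int j = 0; j < GENES; j++)
                pop[i][j] = pop[fit][j] + rnd(-0.5, 0.5);
        }
    }
    for (int j = 0; j < GENES; j++) best[j] = pop[fit][j];
    return sq_error(best);
}
```

Because the winner always survives into the next generation, the best error is non-increasing, so even this crude loop homes in on the target coefficients.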

I've also been playing with the Webots simulator, http://www.cyberbotics.com/. This simulator lets you run C code for the Khepera and then transfer it to a live Khepera. Great for Evolutionary Robotics, where you need to evolve a controller over a number of hours or days; the simulator lets you go into 'fast-forward' mode to expedite things.

The problem I'm having is that nowhere, NOWHERE, has anyone put together sample C code to evolve a controller via genetic programming. I've read about 30-40 thesis papers with great claims, except every author skips out on providing the source code to generate the controllers.

So I'm posting here to see if anyone has any experience in these fields, and would like to share ideas and research with me.

I've looked into PerlGP, Groovy/Java GP, C GP and some others (Python, I think). I would love to hear about other people's Evolutionary Robotics experiments. Or GP.

- sk

08-08-2008, 05:18 PM
Karl Sims has done some great work in the related field of CGI. See here (http://www.karlsims.com/evolved-virtual-creatures.html), particularly, for some very biomorphic "robots" evolved entirely in simulation. I was at the original talk and his paper describes in some detail his method for building nervous systems from a variable genetic description. It's a variation of an L-system that generates a fairly complex wiring diagram from simple parts.

I've also played around with some ideas of my own, with varying degrees of success. What types of behaviors are you trying to model?

08-08-2008, 05:27 PM
Hey metaForm3d - Thanks for the response. I'll look into Karl Sims' work over the weekend.

I'm trying to model *simple* behaviors - wall following/avoidance, obstacle avoidance, light following, etc. While I understand how to code these in C, I still don't understand how to evolve them via genetic programming.

I'm really interested in predator/prey and swarm behavior - but I want to start simple. I'm also interested in perhaps developing a methodology for breaking complex tasks down into pre-solved (evolved) simple tasks, in order to solve more complex problems.

I'm also interested in developing a framework for GP / ER development in higher-level languages (PHP, Perl, Python).

Care to explain some of the research you've done?

08-08-2008, 06:19 PM
Have you looked at game "AI" at all? True, it's generally simplistic, but the roots of predator/prey and swarm behavior, fight/flight, food gathering, protection, ... can frequently be seen and demonstrated very easily in game AI - frequently with available source code.

08-08-2008, 06:50 PM
"Care to explain some of the research you've done?"

Happy to. Guess I'm not getting much work done today anyway...

I did some game theory, proving the Nash Equilibrium isn't an evolutionarily stable solution to Traveler's Dilemma. The result is interesting but the modeling was trivial.

I had a lot of success replicating Karl Sims' image evolver. These are basically images generated by evaluating an equation for each X,Y in the image plane. The genotype is a function tree: the leaves are variables (X or Y) or constants, the internal nodes combine two or more inputs using a simple function, and the final output is the color, RGB=f(X,Y). Sims had methods for mutating, mating, and blending these trees.

I have started on some projects for using genetic algorithms for robotics, but I haven't gotten very far yet. The idea was to evolve gaits, in this case on a simple triped. You wouldn't think that a 3DOF triped could have a gait, but playing around with manual control I found there were some quite successful ones that involved hopping. My program could generate progressively evolving gait patterns and drive the servos, but I ran out of gas trying to build the rig to automate the testing.

I hope to get back to that at some point.

Designing the genotype is really the black art to this kind of algorithm. It's frankly criminal that researchers aren't releasing their code, because that's the whole point. I'll tell you what I've learned, and perhaps you can apply it to your situation.

In the next post...

08-08-2008, 07:11 PM
I have learned some rules of thumb to apply when designing genomes.

First you want your genome to match the problem you're solving as closely as possible. Ideally a very simple genome will produce plausible or at least interesting results. If there are symmetries they should be exploited so that mutations can have symmetrical effects (though not always). You don't want to have your robot evolve a clever behavior for turning right but not be able to use it when turning left. This just requires thinking up front.

Second you need a genome that's very flexible. Mutations are random, so you need something very robust to random changes. Ideally every possible genome created by any string of mutations will at least do something, and most mutations should have results similar to the parent. I've found that by making all the elements the same (like Sims' RGB functions) with the same ranges of effect, catastrophic mutations can be reduced.

Sex is great -- that's why it's evolved independently half a dozen times or so on this planet alone. If there's a way to compare two similar genomes and pick one set of differences from one parent and a different set of differences from another parent, the fitness of the children can be improved much more rapidly than by random mutation alone. It's something worth thinking about when designing your genome.
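On a fixed-length genome, the simplest version of this is uniform crossover: each child gene comes from one parent or the other at random. A hypothetical sketch in C (the `crossover` name and float-array genome are just for illustration):

```c
/* Hypothetical sketch of uniform crossover on fixed-length float
 * genomes: each child gene is drawn from mom or dad at random,
 * recombining two sets of differences instead of relying on
 * mutation alone. */
#include <stdlib.h>

void crossover(const double *mom, const double *dad,
               double *child, int len)
{
    for (int i = 0; i < len; i++)
        child[i] = (rand() & 1) ? mom[i] : dad[i];
}
```

Smarter schemes diff the two genomes first and swap whole coherent blocks of differences, but even this gene-by-gene version often speeds things up noticeably.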

more later...

08-08-2008, 07:48 PM
If I get some time in the next week I'll try to extract the genetic code from my gait project and post it here. I had some important changes I wanted to make but I can describe those in the comments.

Here are the various types of genomes I have tried (or at least planned to try) for genetic algorithms.

1) Function Tree. A function tree is basically a set of function nodes set into a tree structure. They each take scalar or vector inputs and produce vector outputs. Mutation involves changing scalar values, changing node types, or adding or removing nodes. These are great if you want a single output vector to be computed from a set of inputs. Every tree produces some kind of output, and mutations have stronger or weaker effects depending on how far they are from the root.

2) State Machine. This consists of some set of states which correspond to discrete behaviors. The input is a single symbol which changes the state by looking up the new state in a table. The genome is basically the table, and mutations to any slot in the table produce valid state machines. Mutations tend to have less effect as the table grows.

3) Function Network. This is what I was using to generate gaits. There's some set of input states which are generated as sine waves with different periods and phases. Then a network of processing nodes combines sets of signals and feeds them to subsequent nodes. Finally, the result is the same number of output states, which can be used to drive degrees of freedom. Mutations all have about the same effect, except those that increase or decrease the number of states. For symmetric outputs I was duplicating the entire network but with variations specific to each symmetry.

1 and 3 are basically functions, where the inputs (in the case of 3, time) completely determine the outputs. (Most neural nets have this property as well.) 2 is capable of having a memory, such that the same input now can have a different effect than the same input a while ago based on what happened in between. The ideal robot behavior will combine the two, which is a problem I've been wanting to work on but haven't had the time.
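Type 2 is worth spelling out because it shows why every mutation yields a valid genome. A hypothetical sketch in C (the state and input names are invented for illustration):

```c
/* Hypothetical sketch of the state-machine genome: the genome IS
 * the transition table next[state][input].  Since any table of
 * valid state indices is a valid machine, a mutation that
 * overwrites one slot with a random state always produces a
 * working (if different) behavior. */
#define STATES 4   /* e.g. forward, turn-left, turn-right, reverse */
#define INPUTS 2   /* e.g. obstacle sensor: 0 = clear, 1 = blocked */

typedef struct { int next[STATES][INPUTS]; } Genome;

/* One tick: look up the new state from the current state and input. */
int step(const Genome *g, int state, int input)
{
    return g->next[state][input];
}
```

Because the current state persists between ticks, the same input can produce different actions depending on history — the memory property the function-style genomes lack.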

08-08-2008, 08:11 PM
A couple of other quick things I forgot to mention. Some of the inputs to the gait generator were command signals, indicating direction to go, speed, and amount to turn.

Clamping ranges is important. All the inputs are floats in the range of 0-1, and all the functions have outputs in the same range, so the final result is always 0-1. This would work well for microcontrollers, which could use 0-255 integer values.
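In code that's a one-liner, but it's worth showing because the rounding matters when you map onto 8-bit servo values. A small sketch (the `to_byte` helper is my own naming, not from any library):

```c
/* Hypothetical sketch: every signal is clamped to [0, 1] so the
 * output of any node is always a valid input to the next, and the
 * final result maps directly onto an 8-bit microcontroller value. */
double clamp01(double v)
{
    return v < 0.0 ? 0.0 : v > 1.0 ? 1.0 : v;
}

/* Map a unit-range signal onto 0-255, rounding to nearest. */
int to_byte(double v)
{
    return (int)(clamp01(v) * 255.0 + 0.5);
}
```

Clamping at every node, not just at the end, is the important part: it keeps a runaway intermediate value from saturating everything downstream after one bad mutation.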

08-08-2008, 10:01 PM
The problem I'm having is that nowhere, NOWHERE, has anyone put together sample C code to evolve a controller via genetic programming. I've read about 30-40 thesis papers with great claims, except every author skips out on providing the source code to generate the controllers.

So I'm posting here to see if anyone has any experience in these fields, and would like to share ideas and research with me.

Lots of software links here -

Also, the source code for the software used in Nolfi / Floreano is found here -

off the page you previously referenced -