
Thread: Learning Obstacle Avoidance By Example

  1. #1

    Learning Obstacle Avoidance By Example

    This video demonstrates learning by example in an artificial neural network that controls the motion of a mobile robot. The robot uses four sonar sensors and three IR sensors to measure the ranges to nearby objects. A wireless controller is first used to drive the robot past some test objects while the robot records its sensor readings and motor control signals. This data is then used to train a 2x7 artificial neural network (7 sensor inputs mapped to 2 motor outputs). Once the network is trained, it controls the robot without intervention from the operator.
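
    To make the data-collection step concrete, here is a minimal sketch of what logging one training sample might look like in C#. The TrainingSample class and LogSample method are illustrative names only (SensorData is the one name that also appears in the code later in this thread), and scaling readings and speeds to [0, 1] is an assumption to suit a sigmoid network, not a detail confirmed in the video or blog post.

    using System.Collections.Generic;

    // Hypothetical logging of (range readings, wheel speeds) pairs while the
    // robot is driven with the wireless controller.
    public class TrainingSample
    {
        public double[] Ranges;       // 4 sonar + 3 IR readings, scaled to [0, 1] (assumption)
        public double[] WheelSpeeds;  // left and right wheel speeds, scaled to [0, 1] (assumption)
    }

    List<TrainingSample> SensorData = new List<TrainingSample>();

    // Called once per control cycle during the remote-controlled training run.
    void LogSample(double[] ranges, double leftSpeed, double rightSpeed)
    {
        SensorData.Add(new TrainingSample
        {
            Ranges = (double[]) ranges.Clone(),
            WheelSpeeds = new double[] { leftSpeed, rightSpeed }
        });
    }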

    For more information, see http://www.pirobot.org/blog/0007/

    Last edited by Pi Robot; 11-23-2009 at 11:16 PM.

  2. #2

    Re: Learning Obstacle Avoidance By Example

    That's pretty cool. It seems like the hardest part about using a neural network is figuring out how to apply it to a problem.

    Are you using an open source neural network library, or is the software all custom?

  3. #3

    Re: Learning Obstacle Avoidance By Example

    Nifty! Do you expect to come up with a library of such trained examples that you can switch between depending on context (presupposing you're constructing something autonomous)? Or do you expect this particular behavior will always be engaged (you don't expect your robot to interact intimately)? Is this behavior expected to be part of a subsumption architecture?

    B/R
    --
    Gorbag
    We are pattern matching machines

  4. #4

    Re: Learning Obstacle Avoidance By Example

    I am using the most excellent open source AForge.NET package, which you can find at http://www.aforgenet.com/. It has piles of machine learning routines as well as a large vision library. To give you an idea of how easy it was to train the neural network in the obstacle avoidance video, here is all the code you need using AForge.NET:

    using AForge.Neuro;            // ActivationNetwork, SigmoidFunction
    using AForge.Neuro.Learning;   // DeltaRuleLearning

    // Single sigmoid layer: nInputs = 7 range readings, nOutputs = 2 wheel speeds.
    ActivationNetwork neuralNetwork =
        new ActivationNetwork(new SigmoidFunction(), nInputs, nOutputs);

    // Train with the delta rule (supervised learning for a single-layer network).
    DeltaRuleLearning teacher = new DeltaRuleLearning(neuralNetwork);
    teacher.LearningRate = learningRate;

    int i = 0;
    double error = double.MaxValue;
    while (i < nEpochs && error > 0)
    {
        // RunEpoch returns the summed squared error for the epoch; dividing by the
        // number of recorded samples gives the average error per sample.
        error = teacher.RunEpoch(Inputs, Outputs) / SensorData.Count;
        i++;
    }
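
    Before the loop above runs, the recorded samples have to be packed into the jagged arrays that RunEpoch expects. A sketch, assuming each recorded sample exposes its seven range readings and two wheel speeds (the Ranges and WheelSpeeds field names are illustrative, not from the post):

    // One row per sample: 7 range values in, 2 wheel speeds out.
    double[][] Inputs = new double[SensorData.Count][];
    double[][] Outputs = new double[SensorData.Count][];
    for (int s = 0; s < SensorData.Count; s++)
    {
        Inputs[s] = SensorData[s].Ranges;        // 4 sonar + 3 IR readings
        Outputs[s] = SensorData[s].WheelSpeeds;  // left and right wheel speeds
    }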

    The array Inputs holds the recorded range data (three IR and four sonar readings per sample) and the array Outputs holds the corresponding recorded wheel speed data (two values per sample). Once trained, you can drive the robot by feeding the current sensor readings into the network to get the wheel speeds:

    // One forward pass maps the 7 current range readings to 2 wheel speeds.
    double[] wheelSpeeds = neuralNetwork.Compute(currentRanges);
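
    A hedged sketch of the resulting autonomous loop; ReadRanges, SetWheelSpeeds, and the 50 ms period are placeholders rather than anything from AForge.NET or the original post:

    // Hypothetical autonomous control loop: sense, compute, act.
    while (true)  // loop until the robot is stopped
    {
        double[] ranges = ReadRanges();                   // 4 sonar + 3 IR, scaled as during training
        double[] speeds = neuralNetwork.Compute(ranges);  // 2 sigmoid outputs in (0, 1)
        SetWheelSpeeds(speeds[0], speeds[1]);             // rescale and send to the motor controller
        System.Threading.Thread.Sleep(50);                // placeholder control period
    }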

    --patrick

    Quote Originally Posted by Adam
    That's pretty cool. It seems like the hardest part about using a neural network is figuring out how to apply it to a problem.

    Are you using an open source neural network library, or is the software all custom?

  5. #5

    Re: Learning Obstacle Avoidance By Example

    Yes, that would be a cool idea. Since each behavior mode (in this case "wander but avoid obstacles") is defined by a simple matrix (in this case a 2x7 matrix plus two bias values), you could easily build up a library of behavior matrices and assign each a priority as in a subsumption architecture. At the moment, I am simply experimenting with the learning algorithms since it wasn't clear to me that they would even work with real-world data--simulations are always so much cleaner. I just finished putting together a simple Bioloid arm and video camera for the robot and I'd like to try training the arm to execute various tasks such as picking up an object, but it will take me awhile to get some results...
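
    As a rough illustration of what such a behavior library and priority scheme could look like (the Behavior class, WantsControl predicate, and arbiter below are illustrative names, not anything from the post or AForge.NET; assumes the usual System, System.Collections.Generic, and AForge.Neuro namespaces):

    // Hypothetical behavior library: each entry pairs a trained network with a
    // priority; the arbiter runs the highest-priority behavior that applies.
    class Behavior
    {
        public string Name;
        public int Priority;                       // higher wins, as in subsumption
        public ActivationNetwork Network;          // e.g. the trained 2x7 avoidance network
        public Func<double[], bool> WantsControl;  // does this behavior apply to the current readings?
    }

    double[] Arbitrate(List<Behavior> behaviors, double[] ranges)
    {
        Behavior active = null;
        foreach (Behavior b in behaviors)
        {
            if (b.WantsControl(ranges) && (active == null || b.Priority > active.Priority))
            {
                active = b;
            }
        }
        // Stop the wheels if no behavior claims control.
        return active != null ? active.Network.Compute(ranges)
                              : new double[] { 0.0, 0.0 };
    }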

    Quote Originally Posted by Gorbag
    Nifty! Do you expect to come up with a library of such trained examples that you can switch between depending on context (presupposing you're constructing something autonomous)? Or do you expect this particular behavior will always be engaged (you don't expect your robot to interact intimately)? Is this behavior expected to be part of a subsumption architecture?

    B/R
