
Thread: Gepetto

  1. #1
    Join Date
    Dec 2007
    Location
    Whidbey Island, WA
    Posts
    1,694
    Images
    456
    Rep Power
    87

    Honorable Mention: Gepetto

    Gepetto
    Specifications:

    Style: Phidget Servo / RC based robot
    Motors: 4x DC gearhead HG37F260-031
    ESC: Vantec RDFR21
    Servo controller: Phidget 4-servo controller
    Software: MAX/MSP, Jitter, Cyclops
    Wheels: Colson 4" x 2"
    Height: 29 in (34 in armed**)
    Length: 22 in (33 in armed**)
    Width: 29 in
    Frame materials: oak, poplar, steel, paint

    Introduction:

    Gepetto (named after the old man who makes Pinocchio, the puppet who wants to be a real boy) was conceived in response to two things. Most importantly, looking around, I have noticed that a lot of things people make these days seem to be made to look modern, but not to last. For example, look at cars today. So much of what goes onto and into them is plastic that feels like it will break off the first time you touch it. (I know this isn't actually true; plastics have come a long way and are much more durable than they were in the past.) I wanted to make something that looked and felt as though it wasn't going anywhere. I also wanted a sort of timeless quality: here you have this thing that seems like it could have been made forty years ago, but inside it are some technologies that are pretty modern.

    The second reason is that I make robots as an art form. In going around to different galleries near where I live, I was always being told that my work was too radical for people to accept as art. People would ask, “What would someone do with that thing?”
    So I decided to see if I could make something sellable.

    Mounting a gun on its back was an afterthought.




    This really hurts.

    Concept:

    The idea behind this robot was to create a stable mobile platform upon which to base an artificially intelligent robot. This has always been a problem for me. I know that we aren't exactly there yet, but it seems to me that a lot of the robots I have come across focus solely on mobility. That is, they have programs that allow them to climb stairs, or go over big rocks and boxes, or avoid them altogether. At the same time, while some of these robots can do things like find their way from point A to point B and avoid obstacles along the way, I have not seen much in the way of a robot that would "decide" to go from point A to point B. Nor am I sure that I know what that might look like...but nonetheless, that is my overall goal.

    The Platform:

    In the past I worked primarily with two robotic platforms: the LEGO Mindstorms system, and the ER-1 by Evolution Robotics (www.evolution.com). The ER-1 was discontinued several years ago, and I decided then that I would not subject my work to the limitations of any one company. I began looking for other ways to build successful platforms, and decided on using the Phidget Servo controller in conjunction with MAX/MSP.

    Continuing the ER-1's umbilicus concept, in which the robot carries its mother (the computer) on its back, I have built three Phidget-servo-controlled robots which carry computers around with them.


    In addition to being able to carry around a computer, I wanted this robot to be fairly stable. The frame design allows individual wheels to move independently with a wheel travel of almost 3 inches. Given that the wheel diameter is only five inches, that is quite a bit of movement.

    Gepetto’s motors, while much stronger than I thought they would be, are still undersized given the weight they have to carry. Gepetto won’t get stuck on carpet edges or the like, but I don’t see him climbing any mountains...or even doing well in grass.

    The video screen on Gepetto's head does actually work, and using MAX/MSP I can display different pictures or video on it. I plan on using it for video conferencing while doing telepresence work in the future. I also wanted to make the robot capable of displaying its "mood".

    I plan on making a much more capable model this summer using many of the same concepts.


    The Software:

    To be honest, I haven't done nearly as much work on the software end of things as I should have at this point. I guess that is, in part, because the programming is not as much fun for me as the building.

    In any case, I have worked up some Max patches for doing things like changing the probability of an event based on virtual rewards or punishments. Really it is just a random number generator spitting numbers, say from 0 to 9, through a gate. The gate begins open up to 8 (9 gets rejected). A number making it through the gate triggers the behavior. So to start, the gate is almost fully open, and most numbers get through. If I then "punish" the robot, the gate is halfway closed and will cut off any number over 5. Further punishments work the gate down to 2. There will still be some chance that the robot will perform the behavior, but it becomes less and less likely with each punishment. Rewards work the opposite way: rewards keep the gate mostly open. These gates are the way in which behaviors are triggered, so the robot can simply "choose" not to do something you ask it to do.
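    A minimal sketch of this gate mechanism (Python for illustration only; the actual implementation is a MAX patch, and the step of 3 per punishment is my reading of the 8 → 5 → 2 progression described above):

```python
import random

class Gate:
    """Probability gate on a 0-9 scale: a random draw that is <= the
    current threshold passes through and triggers the behavior."""

    def __init__(self, threshold=8, floor=2, ceiling=8):
        self.threshold = threshold   # starts almost fully open (only 9 rejected)
        self.floor = floor           # punishments never close the gate past this
        self.ceiling = ceiling       # rewards never open it past this

    def punish(self):
        self.threshold = max(self.floor, self.threshold - 3)

    def reward(self):
        self.threshold = min(self.ceiling, self.threshold + 3)

    def triggered(self):
        return random.randint(0, 9) <= self.threshold
```

    Two punishments walk the threshold 8 → 5 → 2, after which the behavior still fires roughly 30% of the time, matching the idea that the robot can "choose" not to obey without the option ever disappearing entirely.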

    Similarly the next iteration of this robot will have “see far” devices mounted on all 4 sides. In this way it will be able to respond to new things in its environment, or choose not to respond. Again, I will be able to encourage it either way with rewards and punishments.

    "Mood", displayed on the screen on its head, will be based on a culmination of all of its gates: a sort of average, coupled with how recently it has been punished. The more recent the punishment, the more negative the mood. Mood will have the effect of making behaviors more or less likely to be performed, and also more or less erratic.
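    One way that mood calculation could be sketched (assumed details: a 60-second decay window on punishment recency, and mood clamped to [-1, 1]; the post only specifies "average of the gates, coupled with recency of punishment"):

```python
import time

def mood(gate_thresholds, last_punish_time, now=None, max_threshold=8, decay=60.0):
    """Mood = average openness of all gates, pulled down by how recently
    the robot was punished. The decay window is an assumption."""
    if now is None:
        now = time.time()
    openness = sum(gate_thresholds) / (len(gate_thresholds) * max_threshold)  # 0..1
    recency = max(0.0, 1.0 - (now - last_punish_time) / decay)  # 1.0 = just punished
    return max(-1.0, min(1.0, (2 * openness - 1) - recency))
```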

    I have also worked a bit with the Cyclops patch for MAX/MSP. This is a smaller program that triggers events based on visual data. Cyclops is not quite as advanced as I would like it to be, but either with work, or with other people getting involved, I'm hoping to be able to create behaviors based on visual information. As it currently stands, I can trigger events with either a specific color or a movement in different zones on the screen.

    Problems:

    Given that this is really just a platform, I have run into very few major problems.

    One problem was making room around the motors so the shocks would fit. This is something I should have foreseen during the CAD drawings, but I did the drawings without having actually purchased the motors yet, and I didn't realize they would stick out as far as they do.

    With the first Gepetto I also made the mistake of reconsidering the gap between the two frame rails. As a result the battery didn't fit the way it was supposed to, and I had to add the dropped shelf.

    Finally, the original wheels that I chose were not designed for a robot this heavy. I switched them out for the Colson wheels. The problem with using Colson wheels is that I had to find a creative way to mount them on the 6mm output shafts of the motors. I used 1-3/8" oak dowels and drilled out the ends using a 7/8" bit. I then used epoxy to glue aluminum hubs into the wooden shafts. I drilled two holes on opposite sides of the wheels and dropped in wood screws that go into the shaft.

    The software has also been a problem. The phidgetservo object for MAX/MSP does not work in MAX version 4.5. I could upgrade to 4.6, but for that I would have to buy an upgrade for Jitter (the video software). Given that I do all of this on a schoolteacher's salary, Gepetto will have to wait.

    The Future:

    Included with this post should be drawings of the next version of this robot. It will be similar in concept, and capable of running the same software, but the computer used will be my Sony VAIO VGN-UX180P, a micro PC. This unfortunately means no Cyclops (the video software that Gepetto is capable of running). Instead I am going to use the I-Cube, which is a box that translates sensor data into MIDI events. Through these sensors Gepetto II will be able to create a model of her environment and respond to it.

    Front View


    Side View


    Top View


    Links:

    Software:

    www.cycling74.com

    http://www.trossenrobotics.com/phidgets.aspx

    http://infusionsystems.com/catalog/index.php

    Hardware:

    http://infusionsystems.com/catalog/index.php

    www.robotmarketplace.com

    www.trossenrobotics.com

  2. #2
    Join Date
    Feb 2008
    Location
    OMNIPRSENS
    Posts
    166
    Rep Power
    33

    Re: Contest Entry

    The suspension and movement are very impressive on that gravel drive.

    Keep it up!
    MYKREATION:
    http://forums.trossenrobotics.com/sh...=6217#post6217

    Somnium est ubi nos ipsi primas agimus partes.

  3. #3
    Join Date
    Sep 2006
    Location
    Chicago Illinois
    Posts
    682
    Images
    174
    Rep Power
    75

    Re: Gepetto

    Thanks for the post. This is an impressive project. I love the simplicity of the trigger servo. I have to give you kudos too for sacrificing your body for the entertainment of the kids. I can't believe you were taking shots that close. I've gone paintballing and that is close range!

  4. #4
    Join Date
    Dec 2007
    Location
    Whidbey Island, WA
    Posts
    1,694
    Images
    456
    Rep Power
    87

    Gepetto "Learns" to respond to a wii remote.

    This patch makes use of aka.wiiremote by Masayuki Akamatsu, and the phidgetservo object.

    For the sake of this discussion I will use words which anthropomorphize Gepetto. I may at times refer to Gepetto "wanting to" or "choosing to" do something. "Wanting to", for example, refers to an increase in the potential that an event will occur. Similarly, "choosing to" refers to a random number generator producing a number which triggers the event. While I understand using these words may cause some confusion, I personally believe they best express the manner in which the simulation works.

    Purpose

    While this patch is far from representing true artificial intelligence, the lessons learned from its creation can be applied in a myriad of situations. This patch simulates an animal learning to respond to commands. The robot has the option of responding to stimuli in a number of ways. Through the use of punishments and rewards, the robot can be trained to respond properly to a command it is given. The same robot can later be retrained to respond differently to the very same set of commands.

    How it Works


    The patch can be broken up into three parts. First, data is captured from the Wii controller. This data is incredibly dirty and needs to be cleaned up so that commands can be given in a predictable manner.

    The commands can be summarized as follows. Holding the remote level on both the x and y axes results in a command of 0 (no command). Pointing the remote downward results in a command of 1. Tilting the remote up results in a command of 2. Twisting the remote clockwise while it is pointing away from you results in a command of 3, and twisting the remote counterclockwise results in a command of 4. Returning the remote to the level position returns it to a 0 command. The 0 command position is not connected to any output, and simply registers the time in between commands.
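    The command mapping might be sketched like this (hypothetical thresholds on already-cleaned pitch/roll values; the real patch first cleans the raw aka.wiiremote data):

```python
def classify_command(pitch, roll, dead_zone=0.3):
    """Map remote tilt to the five commands described above.
    pitch/roll are cleaned, normalized tilt values; the dead zone
    keeps a roughly level remote at command 0 (no command)."""
    if pitch < -dead_zone:
        return 1  # pointing downward
    if pitch > dead_zone:
        return 2  # tilted up
    if roll > dead_zone:
        return 3  # twisted clockwise (remote pointing away from you)
    if roll < -dead_zone:
        return 4  # twisted counterclockwise
    return 0      # level on both axes: no command
```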

    The second part is the most difficult to express. In it, the robot is given the commands and a series of processes is run; these processes result in the robot choosing a response.

    For the purposes of this project the robot is given 4 response banks. Each bank is associated with 5 possible responses: move forward for 1 second, move backward for 1 second, turn left for 1 second, turn right for 1 second, or do nothing.

    When a command is received, the software routes it to one of the 4 response banks. There the command triggers a random number generator to pass a number to a bank of 4 ranges: 0-24, 26-49, 51-74, 76-98. If the number falls into one of these ranges, the robot responds to the command by moving forward, moving backward, turning right, or turning left, respectively. If the number is not in any of the ranges, it is simply dumped.

    The third part of the software formulates commands and outputs them in a meaningful way to the phidgetservo object.

    The robot will then perform the chosen response.

    Within 3 seconds of the robot performing its response, the operator can do one of three things. Doing nothing results in no change in behavior. Pressing the trigger on the Wii remote (punishment) decreases the range of numbers available for triggering the chosen response. Pressing the A button (reward) does two things: it increases the range of numbers available to trigger the chosen response, and simultaneously decreases the range of available numbers for triggering any other response. The software limits any possible response to a 4% floor when reducing its range, and similarly caps the chance of producing the desired response. This means that even as the robot becomes more and more trained, it never becomes completely responsive.
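    Here is one way the response-bank bookkeeping could look (Python sketch; the step size of 2 per reward/punishment and the upper cap are assumptions, while the initial ranges and the 4% floor come from the description above):

```python
import random

# order matches the ranges above: forward, backward, right, left
RESPONSES = ["forward", "backward", "right", "left"]

class ResponseBank:
    """Four trainable ranges laid out over 0-99 with one dumped number
    between them; a draw outside every range means 'do nothing'."""

    def __init__(self):
        self.widths = [25, 24, 24, 23]   # reproduces 0-24, 26-49, 51-74, 76-98

    def choose(self):
        n = random.randint(0, 99)
        start = 0
        for resp, w in zip(RESPONSES, self.widths):
            if start <= n < start + w:
                return resp
            start += w + 1               # skip the dumped number between ranges
        return None                      # do nothing

    def reward(self, resp, step=2):
        i = RESPONSES.index(resp)
        self.widths[i] = min(96, self.widths[i] + step)         # capped: never certain
        for j in range(len(self.widths)):
            if j != i:
                self.widths[j] = max(4, self.widths[j] - step)  # 4% floor

    def punish(self, resp, step=2):
        i = RESPONSES.index(resp)
        self.widths[i] = max(4, self.widths[i] - step)
```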

    All 5 responses remain possible, for several reasons. I could have written the patch so that as one potential increased the others decreased without limit. I chose not to because once the robot learned a set of responses, it would become immutable. The flexibility built into this system means that you can retrain the robot to respond differently to a given input.

    I also took my dog into consideration while assembling this patch. My dog doesn’t respond to given commands very often. While I know there are people in the world who have dogs that respond instantly to commands, if mine sat without repeated commands I would be incredibly surprised. Most of the time he simply looks at me and smiles.

    Future Implementation

    The underlying principles of this program can be applied in a myriad of places. In another discussion I will show how Gepetto's camera and infrared sensors work, and how the data from the camera is analyzed. The learning aspect of this software will be applied to the camera and infrared output analysis. In this way Gepetto can respond to visual stimuli with a greater degree of realism. Gepetto can also be given "moods". These moods will change the starting potentials of responses.

    Software Problems Encountered

    Aka.wiiremote:

    This patch is relatively stable once it is running, but it was incredibly hard to get to that state. The computer and remote would either not connect, partially connect, or connect properly. When the remote and computer failed to connect, I had to restart the software before they would pair; at times I had to restart the computer. I also often had to pull the batteries out of the remote before it would work. I'm not sure whether this problem stems from Bluetooth interference. Changing the environment and electronics in the area seemed to have little effect on successful pairing.

    Partial connections didn’t occur as often as no connection, but were a lot more annoying. The data stream from the Wii would either be random or drop to the lowest possible value. This would trigger the robot to begin dancing about as each jump in values triggered a new and often different response. This made it impossible to even begin training the robot because of its erratic movements.

    Stack Errors:

    Because this program requires feedback loops, I kept getting stack errors. This is one of the reasons that disciplining the robot does not increase the potentials of the other options; instead, increasing one option simultaneously decreases all others. The alternative created really bad feedback loops which crashed the program.

    Discipline:

    It was really difficult to ensure that incoming discipline went to the right place. MAX/MSP reads the page from right to left and top to bottom, which means that an event at the bottom of a page happens after an event at the top. Events passed from the bottom of the page back up to the top would never occur, because the object at the bottom does not send its message until after it is needed. I had to add pauses in several different places.

    This also caused the program to crash.

    Unpredictability:

    The nature of this concept is to add unpredictability to a robot's software. This causes the robot to behave erratically, and I did some damage to both the robot and my house in the process.

    I originally programmed the patch to reverse behaviors that were performed incorrectly, which led to a whole host of problems. Gepetto would make a mistake, and I would correct him. He would then do the opposite of the command I had intended for him to perform, not the opposite of the one he actually did perform. Similarly, once that problem was fixed, erroneous inputs (holding the remote improperly and sending commands by accident) meant Gepetto would receive a new command before being issued the correction. This also led to Gepetto reversing the wrong command.

    Gepetto II will include infrared sensors. I'm hoping to use these to make Gepetto choose not to follow interpreted commands that would lead to contact with a wall.

    Here is the video that shows the training of Gepetto using the Wii remote.


    I look forward to any help.
    Last edited by darkback2; 02-28-2008 at 01:22 AM.

  5. Re: Gepetto

    That's really cool. I'm a big fan of paintball. You might want to look into getting a remote cord for the CO2 tank. They run pretty cheap on eBay (about $20), and would give more stability to your arm and gun. But overall it looks really good.
    $DO || ! $DO ; trytry: command not found (aka "Do or do not. There is no try.")
    Blog of my Internet Controlled Robot here
    http://forums.trossenrobotics.com/blog.php?b=43

  6. #6
    Join Date
    Dec 2007
    Location
    Whidbey Island, WA
    Posts
    1,694
    Images
    456
    Rep Power
    87

    Re: Gepetto

    Gepetto Vision System.

    Purpose:

    The main purpose of this vision system is to add to the overall functionality of Gepetto, and to test out various systems for Gepetto II (which I have gotten permission from the wife to begin working on tomorrow, provided I scavenge all of the electronics and a lot of the other parts from the current Gepetto). I am also working to create a viable vision system that does not rely on specific colors or close matches. Gepetto identifies potential obstacles by comparing different parts of what he sees.

    I also wanted to make Gepetto more responsive to his environment, as part of the overall artificial intelligence concept.

    That is not to say that there is any “learning” going on during this demonstration.

    How It Works:

    This patch uses the Cyclops object to interpret video signals coming from one of the two cameras. For this robot I am using an external iSight camera mounted in a downward position on Gepetto's head. I have made several assumptions to help make this plan work.

    1) Gepetto starts far enough away from any wall that his entire field of vision is taken up by a drivable surface.
    2) The bottom center of Gepetto's field of vision is not an obstacle.
    3) The floor is a relatively uniform color.
    4) The walls and other obstacles are not the same color as the floor. (An invalid assumption in my house.)

    The video feed is divided into 144 squares. Each square outputs an average color, in RGB format, for all of its pixels. This information is then passed on to the processing section of the software.

    Video is processed in 2 ways. First, predetermined squares look for changes in brightness and output values when changes are detected. This is good for alarm systems and the like, where the camera can remain in a stable place. I also found this to work well for defining obstacles, provided the difference threshold was set high enough that the robot's own movement did not trigger the sensors.

    Second, chosen squares are compared to each other in order to find differences between them. A threshold is set for the difference in any of the three color channels. For example, if Gepetto is looking at a blue floor and comes across a slightly different shade of blue, he would recognize that shade as a separate object, provided the difference exceeded the threshold. He would likewise recognize a red section of floor, only there the difference would be in the red channel. Exceeding the threshold indicates that an obstacle is present.
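    A sketch of this comparison step (Python for illustration; the 12 x 12 grid and per-channel threshold follow the description above, but the threshold value itself and the plain-list `frame` representation are made up):

```python
def cell_averages(frame, rows=12, cols=12):
    """Average (R, G, B) per cell of a rows x cols grid
    (12 x 12 = 144 squares, as above). frame is a 2-D list
    of (r, g, b) pixel tuples."""
    h, w = len(frame), len(frame[0])
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            px = [frame[y][x]
                  for y in range(r * h // rows, (r + 1) * h // rows)
                  for x in range(c * w // cols, (c + 1) * w // cols)]
            row.append(tuple(sum(ch) / len(px) for ch in zip(*px)))
        grid.append(row)
    return grid

def is_obstacle(cell, floor_ref, threshold=40):
    """A cell counts as an obstacle if ANY channel differs from the
    floor reference (e.g. the bottom-centre cell) by more than the
    threshold -- so a different shade of blue and a patch of red are
    both caught, each on a different channel."""
    return any(abs(a - b) > threshold for a, b in zip(cell, floor_ref))
```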

    The software then passes that information on to the movement section of the software which turns on and off the motors in response to stimulation. I believe that this coupled with an IR system would make the robot incredibly capable in a myriad of situations.

    Problems:

    First of all, my wife wants to kill me, because Gepetto is relatively heavy and made mostly of steel. He did a number on our dishwasher while I was trying to get this software to work properly. (Put pillows in front of the cabinets.)

    I had planned on adding a failsafe using the Wii remote, but left my remote at school, so I had to run around the room chasing him. (Add the failsafe.)

    The current version of Gepetto is simply too wide for a video camera with this narrow a field of view to be very useful. Obstacles will often fall outside his field of view. (The new version, Gepetto II, will use a much narrower frame and a much higher camera mount.)

    The Cyclops software uses a lot of memory. This resulted in changes made to the software not saving, and in slow response times that allowed various commands to sneak through to the phidgetservo object.

    If the robot is too close to the wall, it cannot see the wall as an obstacle, because its entire field of view is occupied by the wall. This allows the robot to repeatedly slam itself into the wall...or refrigerator...or dishwasher. (Gepetto II will use video and infrared in conjunction.)

    The Future:

    Gepetto II will definitely combine all of the above ideas into one robot. I'm planning on setting up modes: in certain modes he will be able to patrol an area, and in other modes he will be remote controlled and capable of basic telepresence. I have access to a house under construction. I'm planning on taking Gepetto II over there once he is completed, to further test all of these different systems in conjunction with each other and alongside IR.

    I have a set of USB IR sensors, and also a claw with IR sensors in its gripper. I plan on mounting this and the gun interchangeably.

    I would also like to implement the concept of moods. Moods will work similarly to modes: different moods will set various parameters such as speed of learning, aggressiveness, attention span (when training with the remote, if you give the same command too many times in a row, or too few commands in a given amount of time, the robot will switch to "wander mode"), and responsiveness.


    Enjoy,

    DB

  7. #7
    Join Date
    Sep 2006
    Location
    Chicago Illinois
    Posts
    682
    Images
    174
    Rep Power
    75

    Re: Gepetto

    This is all very impressive work. Sorry to hear about the dishwasher though! The first autonomous bot I built was a converted RC car chassis with an ITX mounted on it. Here it is when it was a tethered bot. So I finally get this thing built after a month and turn it on for the first real trial and it instantly takes off across the room at full speed and I go sprinting after it, but there's no chance since it's too fast. I got to watch my $1,500 bot slam into the wall at full speed. LOL.

    >>>>I would also like to implement the concept of moods. Moods will work similarly to modes, different moods will set various parameters such as speed of learning, aggressive behavior, attention span (when training with the remote, if you give the same command too many times in a row, or not enough commands in a given amount of time, the robot will switch to “wander mode”), and responsiveness.

    Speaking of behavior code, Kdwyer and I were chatting about that recently. Here's a quick repost of my comments:

    ---------------------------
    "An experiment I would LOVE to see done is a behavior program based on Freud's Id, Ego, and Super Ego, or something similar, where the main idea is that different "needs" and "desires" compete for the attention of the "conscious" decision-making side of the brain. There are two main objectives for the robot: 1) to stay alive, 2) to be as happy as possible. The bot dies when it runs out of juice or when it's "depressed" for too long.

    The Id influences by being happy when there is plenty of "food" (battery charge) or knocking down "happy points" when he's hungry too long. The Super Ego could be mimicked by creating something psychological the robot prefers and wants to keep in order. Making the robot obsessive compulsive about his/her environment for instance is an easy way to implement some competing behavior to instinctive urges like needing to eat and sleep. An experimenter could make the robot prefer red walls and the walls would occasionally change color (backlit by lights) and to change it back the robot has to bump the wall 20 times or something. (burning energy)

    This whole concept can be expanded by adding social behavior with many bots and the need for social interaction (trading friendly IR pulses). There could be 5 bots and only two food supplies (outlets), creating competition and the need for bots to learn when to be aggressive, etc. There would be many fun ways to add complexity to the competing decisions a robot has to make.

    I think this would be a fascinating way to learn about building decision making algorithms and learning about actual animal behavior as well. I have a half written white paper on it somewhere. I should look for it..."
    ---------------------------

    I wonder if the MAX/MSP software would be a good platform for something like this? Not my idea exactly, per se, but creating those layered structures. I don't know much about that platform.

  8. #8
    Join Date
    Dec 2007
    Location
    Whidbey Island, WA
    Posts
    1,694
    Images
    456
    Rep Power
    87

    Re: Gepetto

    MAX is basically C++ with pictures. I've been using it for about 15 years now...actually, it will be 15 years next September. (MAX/MSP was originally an art program. There is a free version called PD (Pure Data), which isn't quite as developed.) I'm dyslexic, so the idea of looking at line after line of code is about as inviting as, oh, I don't know...something really bad. One thing that I love about MAX is that the learning curve is very shallow, but the program is incredibly capable.

    I'll start working on your suggestion for Gepetto II. I think I'll start out with something simple, though, such as competing tasks. A guy I took a seminar with at Stanford assembled a patch that did something similar. It had different entities that voted on randomly created music. Some of the entities liked random sounds, while others liked some sort of structure. Another entity created the music, using random number generators and Markovian analysis of Schoenberg and some of Stravinsky's later work.

    I'm thinking along the lines of a patch that uses the camera to find and collect colored paper scraps off of the floor, competing with a patch that seeks the darkest part of the room. I will be able to implement IR, though, so bumping into walls and appliances should be a lot less likely.

    I'm thinking the amount of discipline, as well as the time interval between paper finds, will increase the intensity of the dark-seeking patch, while the paper-seeking patch will respond to periods of inactivity and to the reward of finding paper (a scrap of paper found results in the robot continuing to seek paper).
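    That competition could be prototyped with something as simple as two intensity values (all step sizes here are made up; only the reward/discipline/inactivity relationships come from the paragraph above):

```python
class Drive:
    """One competing patch, reduced to a single intensity in [0, 1]."""
    def __init__(self, intensity=0.5):
        self.intensity = intensity

paper = Drive()  # seeks colored paper scraps
dark = Drive()   # seeks the darkest part of the room

def on_paper_found():
    paper.intensity = min(1.0, paper.intensity + 0.2)  # reward reinforces seeking
    dark.intensity = max(0.0, dark.intensity - 0.1)

def on_discipline():
    dark.intensity = min(1.0, dark.intensity + 0.2)

def on_inactivity(seconds_since_find, rate=0.002):
    dark.intensity = min(1.0, dark.intensity + seconds_since_find * rate)

def active_behavior():
    return "seek_paper" if paper.intensity >= dark.intensity else "seek_dark"
```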

    Oh...BTW, I know I said I wouldn't start working on Gepetto II until this summer, but here is a picture of the computer case. I made it while the wife was getting her hair done.

    I'll keep everyone posted as to my progress.

    DB





  9. #9
    Join Date
    Dec 2007
    Location
    Whidbey Island, WA
    Posts
    1,694
    Images
    456
    Rep Power
    87

    Re: Gepetto

    Hey guys, thanks for the recognition...Awesome.

  10. #10
    Join Date
    Sep 2006
    Location
    Chicago Illinois
    Posts
    682
    Images
    174
    Rep Power
    75

    Re: Gepetto

    Thanks for the awesome project and extensive posting. I think a lot of people are going to find this thread and learn a lot from it.

    >>>Oh...BTW, I know I said I wouldn't start working on Gepetto II until this summer, but here is a picture of the computer case. I made it while the wife was getting her hair done.

    This reminds me of a great quote from the show "The West Wing"
    "Don't marry a genius, they never want to sleep."
