
View Full Version : [Question(s)] Johnny 5 for AI project



Clime
03-07-2009, 03:54 PM
Hello,

I have already posted this to some other robotics forum (lynxmotion's). Hopefully, there is no problem in doing that...

I am very excited about the Johnny 5 model and I would like to pick him for my university AI project. Nevertheless, before the final verdict I would highly appreciate the advice of some skillful folks here, because I am a total beginner in this field. I am a pure programmer and I have never assembled a robot in my life, so please be nice to me when it comes to hardware terms :-).

The project goal:
A robot that is able to carry out text commands in a natural language.
Examples of such commands would be:
- Find a red cube and pick it up.
- Scan your surroundings and obtain 2D map of it.
- Find a white object and follow its movement (e.g. the rabbit ;))

The solution outline:
Software part:
Natural-language commands are parsed with a program in Prolog that extracts known subgoals (e.g. a "pick" subgoal with appropriate parameters specifying what to pick) and executes a C++ function that takes care of this subgoal. The function communicates with the API of RoboRealm to use its image-processing features and control robot actions. The robot's task is to listen to these "atomic" action commands and perform them (no autonomous logic).

Hardware part:
The most promising (and coolest :-)) robot I have found for the project so far is Johnny 5, with the following configuration:
- Johnny 5 Combo Kit
- SparkFun Bluetooth® Modem - BlueSMiRF Gold (to enable wireless control)
- One or two pretty good cameras as sensors (to give Johnny vision)
- Power accessories:
> NiMH Quick Connect (mates to battery) (BATC-01)
> Battery Connector with Switch (WH-01)
> 6.0 Volt Ni-MH 2800mAh Battery Pack (BAT-05) (Servos)
> 12 Volt Ni-MH 1600mAh Battery Pack (BAT-01) (Sabertooth and Motors)
> 6.0 - 12vdc Universal Smart Charger (USC-02)

My questions (to which answers will be highly appreciated):
- My idea is that the bluetooth modem and cameras will be connected to the SSC-32 board, to control the servos without wires and to transmit the video stream from the cameras to a host computer over the bluetooth modem. Is this possible, or should I get wireless cameras to stream the signal instead? The amount of data transmitted might be a problem.

- Is only one cam enough for my project or would two cameras do a better job? (I have heard two might be better for object recognition)

- What cameras would you recommend? They do not have to be really cheap - $50 is all right, and if they have to be wireless I could accept even somewhat more expensive ones (or one). I am also concerned about easy integration (assembly) with Johnny 5.

- Are cameras sufficient sensors for me or do I need some additional sensors?

- Does the Johnny 5 Combo Kit provide all the stuff I actually need to build him and get him running? What tools (screwdrivers etc.) are recommended? I suppose these tools are not part of the kit.

- Are the power accessories enough for my future Johnny 5, or do I need some additional power supply for the cameras and the bluetooth modem?

- Do you have any other ideas on my planned approach? :)

As I said I will really appreciate any answers and comments.

I am also thinking about using SRV-1 for my project. Do you think it is a better robot for me and my plan?

Thanks
Clime

elios
03-07-2009, 05:34 PM
this is the question that adrenalynn asks - how experienced are you? would it be wise to start small and then work big, setting accomplishable goals that work towards the bigger picture?

idk about you, but this is quite a mammoth task. ~~ what bits do you already have? and can you sum up what you are trying to do in one paragraph? because that is then your end objective.

will post some more later ~~ lunch time (stomach/brain gurgles)

lnxfergy
03-07-2009, 06:38 PM
Bluetooth will not allow you to carry a video signal back, you're gonna want a wifi camera.

Find a red cube and pick it up -- what is your time frame -- the inverse kinematics to get the robot to pick something up is quite tough, finding that object using vision alone is also quite tough.

2D map == probably not using vision, you'll likely want either IR or sonar rangefinders for map building, visual map building is pretty terrible -- and really tough.

-Fergs

asbrandsson
03-07-2009, 09:26 PM
Hello,

Here are a couple of ideas.

The first is to use one camera and add a laser rangefinder to determine distance from the robot. There were students who won a treasure hunt with an ER-1 using this method.

The second is to look at an Xbox game called You're in the Movies. What it does is snap a shot of the background, and then when you inject something new into the scene, only the new object comes up. This way you can isolate objects for identification. Unfortunately you need a lot of processing power for this type of operation - something like an i7 or an Intel Core 2 Quad. I also think that without attaching the camera directly to the Johnny 5 you will not get the real-time processing you need to make the robot run smoothly.

Asbrandsson

Adrenalynn
03-07-2009, 10:38 PM
Actually, I respectfully disagree with you that background subtraction requires substantial processing power. It's actually one of the simplest transforms that can be made to video...

RoboRealm has a reasonably quick subtraction that can subtract the background five or six times a second on a 533 MHz C3, easily, along with a few other transforms.
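For readers wondering why background subtraction is so cheap: it is essentially one absolute difference and one threshold per pixel. A toy sketch in plain Python (illustrative only, not RoboRealm's actual implementation):

```python
def subtract_background(frame, background, threshold=30):
    """Return a binary foreground mask: 1 where a pixel differs from the
    stored background by more than `threshold`, else 0.

    frame, background: 2-D lists of grayscale values (0-255).
    """
    return [[1 if abs(f - b) > threshold else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

# Toy example: a flat background and a frame with one bright 2x2 blob.
background = [[0] * 4 for _ in range(4)]
frame = [row[:] for row in background]
for r in (1, 2):
    for c in (1, 2):
        frame[r][c] = 200  # the injected object
mask = subtract_background(frame, background)
print(sum(map(sum, mask)))  # 4 foreground pixels
```

In a real pipeline the same per-pixel work is done on full frames, which is why even a slow CPU can manage several subtractions per second.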

Clime
03-08-2009, 01:40 PM
Woah, again I am overwhelmed and almost back at the beginning, it seems.


this is the question that adrenalynn asks - how experienced are you? would it be wise to start small and then work big, setting accomplishable goals that work towards the bigger picture?

Well, I am quite experienced in programming - mainly C/C++, but after a bit of warming up I could do some stuff in Java/Prolog/Haskell/asm. I can program a chess algorithm (minimax with alpha-beta pruning), and I know some bits about neural networks (never implemented one, though).

I do not know anything about image processing or robotics at all, but I am really eager to learn. As I can see, it won't be that easy. In my project I want to reach high goals, but of course I will try some simple things at first and proceed from the bottom up (for instance, I'll write a program that commands the robot to go forward for a while, etc.).


idk about you, but this is quite a mammoth task. ~~ what bits do you already have? and can you sum up what you are trying to do in one paragraph? because that is then your end objective.

I don't have anything but the plan :-). But right, I probably underestimated the difficulty of the problem.

I have thought through my goals one more time, and here is what the robot should be able to do for sure:
- perform written movement instructions such as: go forward for five seconds, then turn left and stop; raise your left arm; ...
- recognize colors and objects of a few basic geometric shapes (e.g. a cube or sphere), find these objects on the ground in his surroundings, and come up to them. For instance, he should be capable of performing the written command: "Find a red cube and come up to it."
- following from the previous point, he must be able to avoid obstacles and remember which places he has already visited and which are left (this entails building a map).

I can handle simple parsing of natural language. The things that bother me most are: hardware, receiving data from sensors, and image processing.

These are tasks I would like to teach him later (probably not as part of the project itself):
- grasping objects and moving them
- tracking moving objects
- listening to a human voice instead of written commands (of course I will use some existing software for this)
- playing chess with me, washing up, hoovering, seducing girls for me, etc.

Clime
03-08-2009, 01:59 PM
Bluetooth will not allow you to carry a video signal back, you're gonna want a wifi camera.

Find a red cube and pick it up -- what is your time frame -- the inverse kinematics to get the robot to pick something up is quite tough, finding that object using vision alone is also quite tough.

2D map == probably not using vision, you'll likely want either IR or sonar rangefinders for map building, visual map building is pretty terrible -- and really tough.

-Fergs

So a wi-fi camera then. The issue is that I would now rather use OpenCV instead of RoboRealm, and I could not find any wi-fi camera that is said to be compatible with OpenCV :-/. OpenCV is better for me because it is a library that can be incorporated easily into my C++ procedures. For RoboRealm I would need a Windows station on which it can run and communicate through its API via sockets, and I want my project to run on Linux. RoboRealm's controlling interface to the SSC-32 is therefore lost :-/.

I don't know what "time frame" and "inverse kinematics" mean, so I'll probably leave out grasping objects.

A sonar rangefinder together with vision does not sound bad at all, if it could make things easier. Perhaps I could also use a laser rangefinder, per asbrandsson's suggestion, but I have no idea how difficult it is to measure distances with a laser.


I also think that without attaching the camera directly to the Johnny 5 you will not get the real-time processing you need to make the robot run smoothly.

I want to have a camera attached to Johnny 5 for sure.

Thanks for tips so far, guys!

lnxfergy
03-08-2009, 02:13 PM
When I said time frame, I meant: what is your time frame for this project? Does it have to be done by the end of the semester? The end of the year? Is it a PhD thesis that has 3 years to finish?

-Fergs

Adrenalynn
03-08-2009, 02:19 PM
It's actually pretty easy to measure distance with a laser and a camera. It's simple trig. The camera is fixed below the laser, with the laser directly above it. The camera sees the laser dot some number of pixels above or below the center-line due to parallax. That describes a triangle.

If you need it explained with more algorithm detail - I can do so after brunch. ;)
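The triangulation described above reduces to a single tangent once the camera is calibrated. A minimal sketch, where `radians_per_pixel` is an assumed calibration constant and the example numbers are made up:

```python
import math

def laser_range(baseline_m, pixel_offset, radians_per_pixel):
    """Estimate distance to the laser dot via parallax triangulation.

    baseline_m: vertical distance between camera center and laser (meters).
    pixel_offset: how many pixels the dot appears from the image center-line.
    radians_per_pixel: the camera's angular resolution (from calibration).
    """
    angle = pixel_offset * radians_per_pixel  # angle of the triangle at the camera
    return baseline_m / math.tan(angle)

# Example: 5 cm baseline, dot 40 px off-center, ~0.0017 rad/px camera.
print(round(laser_range(0.05, 40, 0.0017), 3))  # roughly 0.73 m
```

Note the useful property: the farther the object, the smaller the pixel offset, so resolution degrades with distance - which is why this works best at close range.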

The Trend IP-110W comes with an API/library - but it's also for windows.

Linux has always had pretty wimpy video support. It's not necessarily the right tool for this job. Your best bet would probably be to find the IP wifi camera that you like best and start talking with the manufacturer. Set aside a year to eighteen months and plan on writing your own drivers and codec implementation for it. (And no, I'm not anti-Linux. I'm actually a huge supporter. But I also like screwdrivers, and I don't use them to drive nails, generally. I like hammers, and I don't generally use them to turn bolts either. ;))

Adrenalynn
03-08-2009, 02:25 PM
That said, you could set the IP camera up to push JPEG stills as fast as it can to the Linux machine. Cambozola is worth looking at. Then you could use OpenCV to analyze the JPEG stills. Sloooooow. Actually, all of OpenCV is crazy slow to start with. So this just kinda adds another layer of sloooow. :)

Clime
03-08-2009, 02:45 PM
When I said time frame, I meant what is your time frame for this project?

Aah, sorry. It should be finished by the end of the semester, at least the major part.

Adrenalynn
03-08-2009, 02:56 PM
I suspect you will find that your focus needs to narrow a little bit. I suggest you start looking at EVERY pre-produced advantage you can find: a premanufactured robot kit, RoboRealm handling vision and control, prebuilt hardware.

Focus 100% of your attention on the NN/AI. Writing and training the NN is going to require 135% of the time you have. Focus on the piece you can't simply buy.

Clime
03-08-2009, 03:03 PM
Well, perhaps I could do it on Windows, but :-(. Is there any other good video/image-processing C++ library like OpenCV that is faster?

If you could give me more details on this laser measurement, that would be nice, Adrenalynn.

You wrote somewhere that using just one sensor is lame. I guess I should get a sonar too, especially if it is easier to build a 2D map with it.

A premanufactured robot kit... could Johnny 5 or the SRV-1 be all right? Or do you mean a Bioloid kit, etc.? I really like Johnny 5 (and the SRV-1 too).

NN == neural network? Well, will I need them? I thought neural-network algorithms were built into RoboRealm's or OpenCV's image-processing tools and that I would just have to program some classic non-neural-network algos.

lnxfergy
03-08-2009, 04:08 PM
I think the issue here is defining what it is you want to do...

You mentioned using an NLP parser to give the robot commands, but what are these commands going to achieve? Initially you mentioned manipulation of objects, although apparently the actual manipulation has now been tossed. It would seem that you initially proposed to build a real-world implementation of a SHRDLU-like system, using the Johnny 5 robot as a platform. Such a project is obviously quite intensive, hence why SHRDLU-like systems are almost entirely done in simulation...

I think you need to define exactly what it is that you intend to have your robot do. Don't forget to make your goals measurable. For instance, saying "I'm gonna build a robot like Johnny-5" is not nearly as measurable as "Build a Johnny-5 robot which can pick up and move colored boxes in a 2ft x 2ft arena, based on user input obtained via speech recognition."

-Fergs

Clime
03-08-2009, 04:26 PM
Yep, I know, I didn't specify it well. But the following is enough, isn't it? Could you please point out what should be worked out in more detail in the following specification?

The robot should be able to:
- perform written movement instructions such as: go forward for five seconds, then turn left and stop; raise your left arm; ...
- recognize colors and objects of a few basic geometric shapes (e.g. a cube or sphere), find these objects on the ground in his surroundings, and come up to them. For instance, he should be capable of performing the written command: "Find a red cube and come up to it."
- following from the previous point, he must be able to avoid obstacles and remember which places he has already visited and which are left (this entails building a map).

lnxfergy
03-08-2009, 06:34 PM
Yep, I know, I didn't specify it well. But the following is enough, isn't it? Could you please point out what should be worked out in more detail in the following specification?

The robot should be able to:
- perform written movement instructions such as: go forward for five seconds, then turn left and stop; raise your left arm; ...
- recognize colors and objects of a few basic geometric shapes (e.g. a cube or sphere), find these objects on the ground in his surroundings, and come up to them. For instance, he should be capable of performing the written command: "Find a red cube and come up to it."
- following from the previous point, he must be able to avoid obstacles and remember which places he has already visited and which are left (this entails building a map).

Given that you seem to be very interested in vision, you might also want to take a look at Tekkotsu. It's a package out of CMU, specifically intended for Cognitive Robotics (Touretzky is very heavily into cognitive vision). I use it in a robotics class I teach. Granted, the support for different hardware is pretty minimal, but it does run under Ubuntu 8.04 using a Logitech Pro 9000, and I believe it has support for an SSC-32: www.tekkotsu.org

It does support most of the "finding an object in the field of view" work, and it also has a visual world-map builder, although I'm not sure of its current status. Likely you would have to spend time getting your particular hardware in line with it, but a lot of the REALLY hard stuff is taken care of.

-Fergs

Clime
03-08-2009, 08:33 PM
Thank you a lot! Tekkotsu really has SSC-32 support, and it looks like a great piece of software.

The only serious problem is still the camera. I would prefer an IP camera that could be nicely attached to Johnny 5 but, more importantly, from which I could get a video stream into Tekkotsu to process. I wish I knew more about these things.

Anyway, thanks for your kind help, guys. I know I am asking too many noob questions, but I am really trying on my own too, and hopefully I'll soon get "into" it.

lnxfergy
03-08-2009, 09:10 PM
Thank you a lot! Tekkotsu really has SSC-32 support, and it looks like a great piece of software.

The only serious problem is still the camera. I would prefer an IP camera that could be nicely attached to Johnny 5 but, more importantly, from which I could get a video stream into Tekkotsu to process. I wish I knew more about these things.

Anyway, thanks for your kind help, guys. I know I am asking too many noob questions, but I am really trying on my own too, and hopefully I'll soon get "into" it.

I just want to warn you: Tekkotsu is still under development, especially right now, as most of the new stuff is only available through the CVS server. But the cognitive-vision part is quite good.

Mind if I ask what university you are at?

-Fergs

Clime
03-08-2009, 09:26 PM
Not at all :-). It is Masaryk University in the Czech Republic - Faculty of Computer Science.

Thanks for the warning, but it still looks very good. The streaming thing will very likely be possible too. I will do some research on this software in the next few days.

DresnerRobotics
03-09-2009, 12:34 AM
Out of curiosity, what rules out onboard processing? The J5 platform has considerable payload capability and plenty of room... it would allow a much wider range of cameras to be interfaced.

SteamBoat
03-09-2009, 01:37 AM
Help me out on this one if you would.

The conversion of the natural-language part of this into something that the robot understands and does is a totally different part from what is necessary to get the robot to do work. By that I mean, somewhere along the line you are going to have to translate into the robot the commands you want interpreted. If you could get the thing working with a totally structured language, and then put a natural-language interpreter in front of that, don't you get to the same place as if you had just taught the humans to use the structured language from the start?

Take the example command: "Find a red cube and come up to it."

Breaks down to something like:
Assess the area in range of the sensors and detect objects.
If an object is found, then assess its shape and color.
If the shape is not cube or the color is not red, then assess the next object.
If no objects in range meet the criteria, then relocate.
Etc., until a found object meets the criteria, or the entire range is explored, or ????
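The search loop above could be sketched roughly as follows; `detect_objects` and `relocate` are hypothetical stand-ins for the robot's real sensing and motion layers, not any actual API:

```python
def find_object(detect_objects, relocate, target_shape="cube",
                target_color="red", max_moves=10):
    """Search for an object matching shape and color, relocating as needed.

    detect_objects: callable returning a list of (shape, color) in sensor range.
    relocate: callable that moves the robot to a new vantage point.
    Returns the matching (shape, color), or None if the search budget runs out.
    """
    for _ in range(max_moves):
        for shape, color in detect_objects():
            if shape == target_shape and color == target_color:
                return shape, color
        relocate()  # nothing matched here, try a new spot
    return None  # entire range explored, nothing matched

# Toy world: the red cube only becomes visible after one relocation.
views = [[("sphere", "blue")], [("cube", "red")]]
state = {"i": 0}
detect = lambda: views[min(state["i"], len(views) - 1)]
move = lambda: state.update(i=state["i"] + 1)
result = find_object(detect, move)
print(result)  # -> ('cube', 'red')
```

As the post goes on to say, all of this happens after the natural-language front end has already decided that "find a red cube" maps onto this routine.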

But all that happens AFTER the translation from the statement in natural language into what the robot can understand.

Where I'm coming from on this is that I don't get where the priority of natural language fits into this. Yes, I can understand the desire to be able to use natural language, but where does it fit into the big (read: practical) scheme of things? Why won't three controls work - one to set shape, one to set color, and one to initiate the search?

Clime,

This isn't about you, your project, or how you've stated it. It's about my understanding of AI and human interaction with robotic and computer systems. The ability to translate what a person wants into action by ANY outside entity isn't easy. I know: not only am I married, a parent, and a pet owner, I also work in software development, and I know it's not easy to communicate, in either direction, with other team members or customers.

BTW: I for one don't clearly understand what you mean by "...come up to it." I think I do but I'm not sure. Would your robot know for sure? If it doesn't then what does it do?

To me, AI is the ability for a system to not know what, "...come up to it.", means, ask you for a translation, then the next time ask you or the next user if they mean the same thing and then eventually be able to determine with what certainty in the future it should be able to assume, in context, what the user means by "...come up to it." and act, or not act, accordingly.

But that's just me.

MArk B.

Clime
03-09-2009, 03:52 AM
Interesting. Well, you are certainly right that determining what another entity really says or wants is an extremely difficult problem, because the meaning of a sentence depends on many factors. Nevertheless, I don't want my robot to really understand; I am going to make it as easy as possible for now. Again, I have no experience in NLP :-/. I have to acquire all this knowledge...

My simple idea at the moment is that it will split a sentence into words and try to match these words against actions the robot can do. If there is a match and the action needs a parameter, it will try to find known objects (arm, cube, sphere, ...) or "action specifiers" (forward, left, fast, slow, ...) among the words that follow. There might also be a color specification attached to some objects.

If the parsing rules don't succeed, then no action will be performed, even if the command "should be understandable" - that is, it mentions only known robot actions, objects, and action specifiers, and is a syntactically correct command in natural language.

If they do succeed, then the robot will perform the action according to its fixed definition of that action, without considering anything else.

That is my first lame idea which can be upgraded after I read some articles about NLP and study some algorithms.
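The word-matching idea described above can be sketched in a few lines. The vocabulary sets and the returned (action, parameters) format here are purely illustrative, not a proposed final design:

```python
KNOWN_ACTIONS = {"find", "go", "turn", "raise", "stop"}
KNOWN_OBJECTS = {"cube", "sphere", "arm"}
KNOWN_COLORS = {"red", "white", "blue"}
KNOWN_SPECIFIERS = {"forward", "left", "right", "fast", "slow"}

def parse_command(sentence):
    """Match words against known actions, objects, colors and specifiers.

    Returns a list of (action, params) pairs, or [] if nothing parses,
    mirroring the 'no action if the rules fail' policy above.
    """
    words = sentence.lower().replace(",", " ").replace(".", " ").split()
    commands = []
    current = None
    for w in words:
        if w in KNOWN_ACTIONS:
            current = (w, [])          # start a new command
            commands.append(current)
        elif current and (w in KNOWN_OBJECTS or w in KNOWN_COLORS
                          or w in KNOWN_SPECIFIERS):
            current[1].append(w)       # parameter for the current action
    return commands

print(parse_command("Find a red cube and come up to it."))
# -> [('find', ['red', 'cube'])]
```

Unknown words ("and", "come", "up", "it") are simply skipped, which is exactly the brittleness the later NLP discussion in this thread is about.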

Clime
03-09-2009, 04:03 AM
Out of curiosity, what rules out onboard processing? The J5 platform has some considerable payload capabilities, and plenty of room... would allow a much wider range of cameras to be interfaced.

Well, that would require onboard image processing without any help from specialized software such as RoboRealm or Tekkotsu. It could be possible with some image-processing firmware such as on the SRV-1 cam, but I think it would not be enough for me atm. Please tell me if I am missing something.

If I didn't need image processing, it would be no problem to have no host (and cool, too).

lnxfergy
03-09-2009, 07:24 AM
The conversion of the natural-language part of this into something that the robot understands and does is a totally different part from what is necessary to get the robot to do work. By that I mean, somewhere along the line you are going to have to translate into the robot the commands you want interpreted. If you could get the thing working with a totally structured language, and then put a natural-language interpreter in front of that, don't you get to the same place as if you had just taught the humans to use the structured language from the start?

One of the goals any roboticist should have is to make robots easier to use. Teaching humans can be hard; think about how many times you've seen someone fussing with a VCR because it doesn't work like the other dozen models they've used in their life -- if a VCR understood English, problem solved. One of the big issues with the whole "digital information age" is that we are creating systems that cannot be used by everyone. Look at simple things such as the Roomba: it seems simple enough, yet their phone lines are jammed with people who cannot understand how to use its interface to set up when it should run. If the Roomba could do NLP, you would simply say "clean this room every other day, around 2pm" -- and you'd be done!


To me, AI is the ability for a system to not know what, "...come up to it.", means, ask you for a translation, then the next time ask you or the next user if they mean the same thing and then eventually be able to determine with what certainty in the future it should be able to assume, in context, what the user means by "...come up to it." and act, or not act, accordingly.

If you have a system that asks for a translation every time, you'll probably end up building a system with millions of translations of specific phrases, which of course forces you into a search-based solution, which will be extremely, painfully slow. A system with a general-purpose parser, plus a world model to reference it against, can be more efficient; of course, such systems are still often limited to "blocks world" setups and very closed domains (example: SHRDLU by Winograd). Of course, any system will eventually come upon phrases it cannot resolve, and it will have to ask questions -- but isn't that part of nearly any conversation? Your boss asks for 3 TPS reports; you'll likely ask for a deadline, and what the reports should be on... if you don't, and instead just do whatever, you probably wouldn't be considered very "intelligent".

One of my research areas is NLP, so forgive me if I am a big supporter thereof. However, NLP has historically been considered an AI-complete problem; that is, if you fully solve the "NLP problem" you have solved the "AI problem". This can be seen easily in AI's emphasis on (infatuation with?) the Turing test, which basically equates intelligence to conversation. We use language SO MUCH that before we accept advanced machines doing all sorts of things, we will likely require language from them.

-Fergs

lnxfergy
03-09-2009, 07:27 AM
Well, that would require onboard image processing without any help from specialized software such as RoboRealm or Tekkotsu. It could be possible with some image-processing firmware such as on the SRV-1 cam, but I think it would not be enough for me atm. Please tell me if I am missing something.

If I didn't need image processing, it would be no problem to have no host (and cool, too).

You could mount a Pico-ITX (1 GHz, 1 GB RAM) onboard the J5 (see Tyberius' previous J5 project; also note that the Chiara, CMU's robot for Tekkotsu, uses a Pico-ITX). Then you just connect a webcam to the J5.

If you use Tekkotsu, you can also view the camera over wifi/ethernet on your computer via the Controller GUI.

-Fergs

Clime
03-09-2009, 11:04 AM
You could mount a Pico-ITX (1 GHz, 1 GB RAM) onboard the J5 (see Tyberius' previous J5 project; also note that the Chiara, CMU's robot for Tekkotsu, uses a Pico-ITX). Then you just connect a webcam to the J5.

If you use Tekkotsu, you can also view the camera over wifi/ethernet on your computer via the Controller GUI.

-Fergs

I'd love to have Johnny 5 with such equipment, but I could never put it together. The Chiara is going to be sold in autumn this year, and it is quite expensive anyway. I have found, for instance, the Qwerkbot, which is very nice and allows onboard processing. I'll keep looking for others.

Adrenalynn
03-09-2009, 01:05 PM
Putting a Pico-ITX on a J5 is "trivial" (forgive me, Tyberius...). It's a stand-alone computer. It takes nothing more than a battery, a wide-input power supply, a Pico board, and some standoffs to mount it.

http://www.logicsupply.com/products/px10000g
http://www.logicsupply.com/products/picopsu_60wi
http://www.smarthome.com/1288SWN/Nylon-Standoffs-for-Circuit-Boards-ELK-SWNS1/p.aspx

We're done. Now you can plug a keyboard and monitor into it and run any operating system you want on it (that runs on a PC), or you can plug a WiFi stick into it, put it on the WiFi network, and then VNC or PCAnywhere or SSH or X into it (or whatever). It's a full-on 1 GHz computer. If you want more processing power, there's a dual-core 1.6 GHz tiny little Intel board too.

Neural-network/AI NLP is a bit of a different animal; I've implemented it for various projects. You train the neural network on what you say vs. what you mean. From there it continues to learn, having the ability to ask questions and remember the answers.

Have a look at A.L.I.C.E. (http://www.alicebot.org)

SteamBoat
03-09-2009, 08:10 PM
That is my first lame idea which can be upgraded after I read some articles about NLP and study some algorithms.

Please don't think what you are doing, or how you are going about it, is lame in any way, shape, or form. I sure didn't pick things up as quickly as you have.

Keep working on it. You've got some great people helping you. (Excluding my harassment, of course. ;))

MArk B.

Adrenalynn
03-09-2009, 09:08 PM
I think your insights have been valuable myself, MArk. They're a valid approach to the project.

Clime
03-11-2009, 10:32 PM
Hello,

I have chosen the following platform. Any comments? Also, I'd be grateful for answers to the TODOs below.

robot: Johnny 5 Combo Kit
motherboard: VIA EPIA PX10000 Pico-ITX motherboard
hdd: Seagate FreeAgent Go 250GB 2.5" external HDD, USB 2.0, 5400 RPM, 8MB
memory: KINGSTON 1024MB/533MHz DDR2 CL4
webcam: Logitech QuickCam E3500, 0.3 MPix, mic
usb wifi: ASUS WL-167g
mic: Logitech Dialog-230 OEM microphone
speakers: LOGITECH V-10
power_1: 6.0 Volt Ni-MH 2800mAh Battery Pack
power_2: 12 Volt Ni-MH 1600mAh Battery Pack
power_3: NiMH Rechargeable Battery 7.2v 4500mAh
source MB (bat): PicoPSU 60W Wide Input Range DC-DC Power Converter
source MB (el): HP 65W Smart AC Adapter 634
laser (range): Laser Module - Red with TTL Control
battery connector: Wiring Harness - Battery Connector
battery connector: Wiring Harness - Battery Connector

TODO: what are nylon standoffs for?
TODO: power for the laser? can I connect it to the MB?
TODO: any frames/plates to attach the MB, HDD, and speakers to?
TODO: is a NiMH 7.2v 4500mAh battery right for the VIA PX10000?

OS: gentoo linux
Robo software: Tekkotsu

Adrenalynn
03-11-2009, 10:50 PM
TODO: what are nylon standoffs for?
TODO: power for the laser? can I connect it to the MB?
TODO: any frames/plates to attach the MB, HDD, and speakers to?
TODO: is a NiMH 7.2v 4500mAh battery right for the VIA PX10000?


1. Mounting the motherboard.
2. No. It needs to be driven from a TTL line.
3. That's something you'll need to work out. You can look at what both Tyberius and 4Mem8 did with theirs.
4. Define "right"? A few minutes? You want to get as close to 12v as possible, with as much potential amperage as possible. 9.6v @ 4500mAh would be the bare minimum I'd suggest. You need to define your desired runtimes and then do the math instead of trying to take shortcuts.

You also don't mention what power_1, power_2, and power_3 are for.
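The runtime math mentioned above is just capacity divided by average draw, with a safety margin. A sketch, where the 0.8 usable fraction is an assumed rule of thumb (covering conversion losses and not fully draining NiMH packs), not a measured figure:

```python
def runtime_hours(capacity_mah, avg_draw_ma, usable_fraction=0.8):
    """Rough runtime estimate: usable capacity divided by average draw.

    capacity_mah: battery pack capacity in milliamp-hours.
    avg_draw_ma: average current draw of the load in milliamps.
    usable_fraction: assumed derating for losses and safe depth of discharge.
    """
    return capacity_mah * usable_fraction / avg_draw_ma

# Example: a 4500 mAh pack feeding a board that averages 1500 mA.
print(runtime_hours(4500, 1500))  # -> 2.4 hours
```

Working backwards also answers "what battery do I need": pick your target runtime and average draw, then solve for capacity.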

DresnerRobotics
03-11-2009, 11:33 PM
Putting a Pico-ITX on a J5 is "trivial" (forgive me, Tyberius...). It's a stand-alone computer. It takes nothing more than a battery, a wide-input power supply, a Pico board, and some standoffs to mount it.


A large part of my argument for PC-based robotics in my articles on the J5 robot was based on how incredibly easy it is to implement, so no offense taken whatsoever. I was just the quickest to slap the two together and proclaim ingenuity! :p

Also, Clime: invest in a solid-state hard drive with a decent R/W cycle count. You'll thank me later when you're driving it around and your HDD doesn't skip...

Clime
03-11-2009, 11:51 PM
Thanks, both. I'll have a look at HDDs once more then.

2) I suppose I can connect the laser to the SSC-32 and read the laser output from there.
3) Hehe, fun :-).
4) OK, I'll try to get stronger batteries.

power_1: servos
power_2: sabertooth and motors
power_3: MB

Do you see anything missing that I should get?

anandoc
03-14-2009, 08:29 PM
TODO: is a NiMH 7.2v 4500mAh battery right for the VIA PX10000?


Try to get as close to 12V or above. The losses under 12V are quite significant (I made a thread in the "power" forum), and you would get less run-time for the same capacity.

Clime
04-05-2009, 05:50 PM
Thank you for your advice regarding batteries. Very useful, and I am about to pick these:

motherboard battery: NiMH Battery Pack 12V 5000mAh
motors battery: NiMH Battery Pack 12V 5000mAh
servos battery: NiMH Battery Pack 6V 4500mAh

But together they weigh 2.3 kg... is it possible to overburden Johnny 5? Should I carefully consider the weight of the parts? I looked at the motor specifications, but I just do not know how to do the required calculations. Could you help?

Adrenalynn
04-05-2009, 08:26 PM
Yeah, you most likely need to keep the entire thing under 2.7 kg, although by replacing the motors with motors more than 10x as beefy, I've been able to get it up to about 3.8 kg. There's a substantial risk of destroying the frame at that weight, so you need to beef it up, which, of course, adds weight.

DresnerRobotics
04-05-2009, 08:43 PM
My J5 weighs 4.5kg...

But I use some mystical discontinued 7.2v motor that they originally shipped with.

Adrenalynn
04-05-2009, 08:46 PM
Specs call for 5lbs. I'd be hard pressed to recommend anyone exceed them by that much. When it snaps in half, I don't really wanna be responsible.

Clime
04-11-2009, 02:24 PM
5 lbs with or without torso?

Adrenalynn
04-11-2009, 02:33 PM
Total. That's the base's rating.

Clime
04-11-2009, 03:24 PM
What is base's weight then? If it is, let's say, 3 lbs and Johnny 5 combo kit weighs 6.25 lbs then there is 5 lbs - 3.25 lbs = 1.75 lbs to spare (for batteries, control chips, motherboard, ...). That is 0.793 kg. That is enough for a stuffed bear, not for three NiMh batteries and all the other stuff. Please, tell me I am wrong.
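A quick sanity check of that arithmetic (the 3.25 lbs torso weight is my guess, not a published figure):

```python
# Payload budget check: rated payload minus guessed torso weight.
PAYLOAD_LIMIT_LB = 5.0    # base's rated payload per the docs
TORSO_WEIGHT_LB = 3.25    # guessed: 6.25 lb combo kit minus ~3 lb base
LB_TO_KG = 0.45359237     # exact conversion factor

remaining_lb = PAYLOAD_LIMIT_LB - TORSO_WEIGHT_LB
remaining_kg = remaining_lb * LB_TO_KG
print(remaining_lb, round(remaining_kg, 3))  # 1.75 0.794
```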

Adrenalynn
04-11-2009, 03:42 PM
The base's weight is inconsequential. Its payload is 5lbs per the documentation. Honestly, it's pretty wimpily built, imho. The offsets for the standoffs make little sense, the motors are unsupported across their length, and the polycarb is awfully thin. That said, I'm carrying 7lbs on mine and it hasn't broken yet. The motor mounts are terribly flexy though. It's kind of a shame - the track design itself is fabulous.

elios
04-11-2009, 04:29 PM
4mem8 has a way of strengthening the poly on the J5's base. I'll point him in this direction.

Adrenalynn
04-11-2009, 04:30 PM
Sure. Just brace it.

Or better yet, build a base that is better.

Clime
04-11-2009, 04:35 PM
Payload is carrying capacity, right? I wanted to say that if the base is already loaded with 3.25 lbs (my guess at the weight of Johnny 5's torso) in the Johnny 5 combo kit, then there is only 1.75 lbs left for the other stuff (e.g. batteries, motherboard), right? I suppose this payload applies to stuff placed on top of the chassis as well as stuff placed in the deck. Just make this clear to me, please.

Adrenalynn
04-11-2009, 04:39 PM
Yes, that is correct to the specs - assuming that's the correct torso weight. I don't have a J5, just the TriTrack itself.

Clime
04-11-2009, 05:02 PM
Wow, now I can forget about my configuration with batteries that together weigh 1.44 kg and all the computing stuff that I have been considering for a month (me noob). Or brace/rebuild the base somehow and perhaps get stronger motors...

lnxfergy
04-11-2009, 05:18 PM
Have you looked at Tyberius' Johnny 5? He has a pico-itx and quite a bit of battery (he's also done significant upgrades to the torso that would have added weight...). If I recall, he may have surpassed the 5lb limit without incident, but I'm not sure about that....

-Fergs

Clime
04-12-2009, 10:44 AM
Yes, I have looked at it. He said his Johnny 5 weighed 4.5 kg with some 7.2V motors. Mine would be around 4.8 kg with 12V motors. I think higher voltage should mean higher motor power and, consequently, the capability to move with more weight on it. Therefore, I assume that those motors (Gear Head Motor - 12vdc 50:1 120rpm, with a stall torque of 8.87 kg-cm, included in the combo kit) could handle it well. It would also help to know the stall torque of the motors Tyberius has got, for comparison.
What do you think? I suppose this 5 lbs limit is due to the Lexan construction, not the power of the motors.
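One rough way to check whether the stall torque is even in the right ballpark (the robot mass, wheel radius, and incline below are my own guesses for illustration, not J5 specs):

```python
import math

def holding_torque_per_motor_kg_cm(mass_kg, wheel_radius_cm, incline_deg, n_motors=2):
    """Torque each drive motor needs just to hold the robot stationary on
    an incline (ignores rolling resistance, friction, and acceleration)."""
    force_kgf = mass_kg * math.sin(math.radians(incline_deg))
    return force_kgf * wheel_radius_cm / n_motors

needed = holding_torque_per_motor_kg_cm(mass_kg=4.8, wheel_radius_cm=3.0, incline_deg=20)
stall = 8.87  # kg-cm, the quoted spec of the stock 12 V 50:1 motor
# Continuous operation wants a healthy margin below stall (motors held at
# stall current overheat quickly), so compare `needed` against a fraction
# of `stall`, not against stall itself.
```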

Regarding the construction of the base, I will probably replace the lexan plates with something more solid.

lnxfergy
04-12-2009, 12:18 PM
Lexan is pretty strong... My guess would be that at >5lbs you would find the robot can't go up a large incline, and that the motors may heat up. If you are staying on a level surface, not driving around continuously, and also not concerned with top speed, you can probably get away with it.

-Fergs

Adrenalynn
04-12-2009, 12:44 PM
Those motors are absolute junk. And don't believe for a moment that one twelve volt motor is the same as any other twelve volt motor.

I'll put my Banebots motors at 6v up against those twelve volt motors in both speed and power. And I'll crush them like they're not even there. There's a lot more to a motor and gear box than the voltage it runs on.

Clime
04-13-2009, 12:11 PM
Yep. I have also got an interesting answer from Robot Dude:

"The amount of permissible payload depends mostly on what surface the robot is to be operated on. For indoor operation on tile floors you will be able to carry a lot. But if you want to operate the bot on shag carpet the motors can bog down significantly with too much additional payload."

This is probably not the best place to ask, but could I get some other (better) motors included in the Johnny 5 kit?

Adrenalynn
04-13-2009, 01:41 PM
Included in the kit? Naw. You can buy better motors and void your warranty though. You'll need to cut and drill *very* carefully, as I know from experience. They must be placed EXACTLY if you have any hope of getting it working.

Clime
04-14-2009, 05:28 AM
Well, I was thinking about those:
Gear Head Motor - 12vdc 50:1 152rpm (6mm shaft)
* Weight = 7.15 oz
* Reduction = 50:1
* Stall Torque = 231.5 oz-in
* Length (motor and gear) = 2.33" (5.92 cm)
* Length (shaft only) = 0.89" (2.26 cm)
* Diameter (motor and gear) = 1.45" (3.68 cm)
* Diameter (shaft) = 6mm
* Outside Diameter = 37mm
* Current (at 12v no load) = 145mA
* Current (at 12v locked shaft) = 3.8A

All their dimensions are the same except the motor-and-gear length, so no cutting/drilling is needed, and they are beefier. But I don't really want to buy two motors at 21 USD each (42 USD for both) and then just throw them away.
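For what it's worth, converting the new motor's stall torque into the same units as the stock motor's spec (the conversion factor is a standard one; the comparison is just a back-of-envelope check):

```python
# 1 oz-in is approximately 0.0720078 kg-cm (standard unit conversion).
OZ_IN_TO_KG_CM = 0.0720078

new_motor_kg_cm = 231.5 * OZ_IN_TO_KG_CM     # ~16.7 kg-cm
stock_motor_kg_cm = 8.87                     # from the combo-kit spec
ratio = new_motor_kg_cm / stock_motor_kg_cm  # ~1.9x the stock stall torque
```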

Anyway, what do you think about li-ion batteries? Are they dangerous to use? Battery packs from li-ions are much lighter and smaller.

lnxfergy
04-14-2009, 08:15 AM
Anyway, what do you think about li-ion batteries? Are they dangerous to use? Battery packs from li-ions are much lighter and smaller.

LiPo is definitely more common in the hobbyist arena (most RC racing, and humanoids, are using LiPo these days), so you'll probably find that locating a charger and battery is gonna be cheaper using LiPo.

-Fergs

Adrenalynn
04-14-2009, 12:45 PM
Not to mention the max current draw will be gihugically higher.

4mem8
04-17-2009, 07:49 AM
Hi all, my J5 with all the mods to servos and more is at 8 lbs and trundles around quite nicely, but I have since dismantled part of it to do a couple of minor mods to the rotating base.