
[Question(s)] Writing an educational book on robotics



darkback2
07-11-2008, 09:29 PM
I'm thinking I'm enough of a blowhard to write an educational book on robotics.

Anything you think I need to put in it?

DB

Alex
07-11-2008, 09:34 PM
introductions to everything robotics:

motors
servos
microcontrollers
servo controllers
linear actuators
all the most common & simple to understand sensors

just to name a few:)

Adrenalynn
07-11-2008, 09:41 PM
An exploration of navigation concepts would be key, I think.

Good idea, go for it DB!

darkback2
07-11-2008, 10:13 PM
OK...as I write I'll post it here.

openmindedjjj
07-12-2008, 12:02 AM
I think in the intro of your book you should explain how the robot works, like how the brain of the robot sends and receives signals... plus I would totally buy your book if you explained how artificial intelligence worked. I've always wanted to know how that was possible.

Adrenalynn
07-12-2008, 12:52 AM
An exploration of AI/NN/Genetic Algorithms isn't a beginning-level sort of topic, unfortunately. Doing it any real justice requires a tremendous amount of math... For simple AI, you should look into game programming books.

4mem8
07-12-2008, 01:41 AM
Maybe I should join you, darkback2, on the mechanical side of robotics, as this is my forte. I have lots of hi-res pics of robotic construction from different projects and different materials.

sam
07-12-2008, 01:38 PM
Introduction to making a robot brain with components.

asbrandsson
07-12-2008, 03:16 PM
Hello,

I think that the hardest part about writing a book on robotics is covering a section of the field that is interesting and has not already been covered by a million other books.

How to make robot soccer players, or how to make humanoid robots, with topics like using a gyro or accelerometer or controlling your robot with a PDA, would sell better than writing another book about stepper motors, especially in a day and age when so much information about servos and whatnot is available on the internet.

Asbrandsson

darkback2
07-12-2008, 11:47 PM
4mem8...I have an outline of sorts...I'm on the road right now, and typing with my big fat fingers on this tiny keyboard is killing me. I'll post my ideas tomorrow.

Asbrandsson...I totally understand. I probably won't come at it from the typical angle, because I can't. I just don't have the right background...I'll post my outline. I know I haven't read that many books on robotics...but I think/hope this will be a little bit different.

DB

Adrenalynn
07-13-2008, 02:08 AM
Why are there literally thousands of books on C programming, and thousands more on C++, and thousands more on C#?

There are nuggets of wisdom, a different slant, a unique take in every one of them.

Robotics doesn't have nearly enough books written.

One more is one more step in the right direction.

LinuxGuy
07-13-2008, 02:40 AM
I'm thinking I'm enough of a blowhard to write an educational book on robotics.

Anything you think I need to put in it?

DB
One thing I have noticed on various forums is that most people don't realize they need some sort of microcontroller if they want a robot to run around on its own. So I would have a section on the basic structure of an autonomous robot and a tethered robot. Don't forget to add lots of pictures! :veryhappy:

8-Dale

Adrenalynn
07-13-2008, 02:55 AM
Unless it has an onboard PC, then it doesn't necessarily need a microcontroller...

LinuxGuy
07-13-2008, 05:51 AM
Unless it has an onboard PC, then it doesn't necessarily need a microcontroller...
I'm talking about stuff for an autonomous robot, which does not have to have access to a PC or use a PC-type motherboard. A robot does not need an onboard PC to work. This is kind of where I start to feel a bit out of place, with all the emphasis on PC-based robots. There is a whole world out there that doesn't use PC-based robotics and doesn't need to. Unfortunately, this seems to be easily lost here.

8-Dale

Adrenalynn
07-13-2008, 09:47 AM
Since when can an autonomous robot not use a PC onboard!?!? Who made that rule?

Sure, one can build their widdle bitty line following photovore and watch it dance a'round and round. Woot! Go little 2kbit processor!

But you can put a 1Ghz PC with 1GB of RAM and 1TB of onboard storage on a 'bot that's about the size of a pack of cigarettes. I have embedded microcontrollers that are 16x the size of a PicoITX...

Maybe the reason it seems so "easily lost" here is that folks here have figured-out that they can solve gihugic problems with an adequate application of processing power?

Adrenalynn
07-13-2008, 10:18 AM
As an afterthought - it might be worth noting here that every DARPA Grand and Urban Challenge autonomous vehicle that I've seen, read about, or heard of employs PCs for its processing power. Why is that? Because it's a difficult problem. Which is what PCs were created to solve.

They also have microcontrollers. And I think this is the immediate future of autonomous robotics: fast little microcontrollers interfacing to the real world with all their spiffy analog inputs, preprocessing some of the data, then packaging it up and sending it off to the real processing power to do real work.
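
To make that division of labor concrete, here's a minimal sketch of the kind of preprocessed sensor report a microcontroller might ship up to the PC over a serial link. The field layout, the units, and the uart_send hook are all invented for illustration; real projects define their own framing.

#include <stdint.h>

/* Hypothetical hook into the board's serial driver. */
void uart_send(const uint8_t *buf, int len);

static uint8_t xor_checksum(const uint8_t *p, int n)
{
    uint8_t c = 0;
    while (n--) c ^= *p++;
    return c;
}

/* Pack one pre-filtered sensor report and ship it to the PC:
   [0xA5][range lo][range hi][gyro lo][gyro hi][checksum]. */
void send_report(uint16_t range_mm, int16_t gyro_cds)
{
    uint8_t buf[6];
    buf[0] = 0xA5;                       /* framing byte           */
    buf[1] = range_mm & 0xFF;            /* little-endian payload  */
    buf[2] = range_mm >> 8;
    buf[3] = (uint16_t)gyro_cds & 0xFF;
    buf[4] = (uint16_t)gyro_cds >> 8;
    buf[5] = xor_checksum(buf + 1, 4);   /* checksum over payload  */
    uart_send(buf, sizeof buf);
}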

Have you seen the Urban Challenge? It was created because the original "Grand Challenge" got to be too "easy"... That's not because of the growth of microcontrollers, which is realistically negligible. It's due to the growth of cheap massively powerful PCs that are continuing to shrink in both size and power requirements.

http://www.darpa.mil/GRANDCHALLENGE/

Then it turned out that even autonomous urban driving with real traffic and urban obstacles was "too easy"...

"As the day wore on, it became apparent to all that this race was going to have finishers, and that it was going to be a close one. At 1:43 pm, Stanford’s entry, “Junior” crossed the finish line first with a run-time of just over four hours. A minute later, Tartan Racing’s “Boss” crossed the finish line. It was a scene that would be repeated over and over as six robots eventually crossed the finish line, an astounding feat for the teams and proving to the world that autonomous urban driving could become a reality. " - DARPA.mil

LinuxGuy
07-13-2008, 10:37 AM
Since when can an autonomous robot not use a PC onboard!?!? Who made that rule?
I never said an autonomous robot couldn't have an onboard PC. I said it didn't have to have an onboard PC. BIG difference.


Sure, one can build their widdle bitty line following photovore and watch it dance a'round and round. Woot! Go little 2kbit processor!
I suppose this is what you think of W.A.L.T.E.R. too then, since he does not have an on board PC.


But you can put a 1Ghz PC with 1GB of RAM and 1TB of onboard storage on a 'bot that's about the size of a pack of cigarettes. I have embedded microcontrollers that are 16x the size of a PicoITX...
I know you can do this, but it is not required in order to have a fully autonomous robot. Just because you can doesn't mean you always should.


Maybe the reason it seems so "easily lost" here is that folks here have figured-out that they can solve gihugic problems with an adequate application of processing power?
You've missed my whole point entirely.

Adrenalynn
07-13-2008, 11:11 AM
I suppose this is what you think of W.A.L.T.E.R. too then, since he does not have an on board PC.

I thought you were running the Hammer on Walter? What OPERATING SYSTEM does it run? What OPERATING SYSTEM LIBRARIES are you calling?

Are all interrupts real-time? Does it have an external data or address bus?

The Hammer is not a microcontroller. It's a PC. Guess what OS runs on an ATTINY or an ATMEGA? On a PICAXE or a KicChip? On a C-Stamp or on a BASIC Stamp?

Microcontrollers don't boot off of thumbdrives. They don't "boot-up" at all. You put power on them and they run code. Realtime, interrupt-driven code [ISR's], [generally] without external data or address busses.
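
For the flavor of that, here's a minimal bare-metal sketch, assuming an ATmega-class AVR and avr-gcc (the register names are from that family; any real project would pick them to match its part):

#include <stdint.h>
#include <avr/io.h>
#include <avr/interrupt.h>

volatile uint8_t ticks;      /* shared with the ISR, hence volatile */

ISR(TIMER0_OVF_vect)         /* runs in real time on each Timer0 overflow */
{
    ticks++;
}

int main(void)               /* no OS, no boot device: power on, run */
{
    TCCR0B = (1 << CS02);    /* start Timer0 with a clk/256 prescaler */
    TIMSK0 = (1 << TOIE0);   /* enable the Timer0 overflow interrupt  */
    sei();                   /* global interrupt enable               */

    for (;;) {
        /* main loop: read sensors, drive outputs, etc. */
    }
}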



You've missed my whole point entirely.

I thought your point was that one could build a truly autonomous robot of any complexity utilizing only a single chip? Where'd I miss it?

More to the point, where did MIT, Berkeley, CalTech, Stanford, Virginia Tech, SUNY, GM, University of Utah, NASA, JPL, et-al miss it?

ScuD
07-13-2008, 01:16 PM
If I can 'butt in' on this discussion: while I agree that using onboard PCs or SBCs has great advantages, I feel that there is at this point a problem of 'over-complicating' robotics, as there has always been and always will be in any engineering problem.

There must be simpler solutions to balancing a humanoid robot than thousands of lines of code, maths, etc.

The interaction between sensors and actuators is getting too complex to be "worth the trouble".

Of course that's all great and stuff, but it's not that easy to achieve; it needs people thinking out of the box.

E.g. the BEAM bots by Mark Tilden: very simple electronics, yet they achieve their goal of walking quite nicely; those things even 'adapt' their strides to the terrain.

The thing is, it's massively difficult to make something simple... but that doesn't mean it can't be done.

Adrenalynn
07-13-2008, 01:54 PM
It's massively difficult but technically possible to drive a 16p nail into an 8" thick piece of ironwood with the handle of a Jeweler's screwdriver. It's technically possible but massively difficult to rip a sheet of plywood with a plastic paperclip. Although both are very simple tools, I think we'd all be in agreement that that is rather stupid.


While the arrogant but ignorant illuminati and his caveman workforce are spending years trying to drive said spike into said hardwood with said "tool" and sawing away with their sharpened paperclips, the intelligent person has built an entire city with his nailgun and his tablesaw.

The Right Tool For The Right Job.

ScuD
07-13-2008, 02:33 PM
Following that train of thought, one could also use a sledgehammer to put a tack through a piece of paper.
A wrecking ball can be used to crush an empty coke can.

As you say yourself, the right tool for the right job; I'm just arguing that I'm not so sure anyone is using the right tool.

darkback2
07-13-2008, 02:41 PM
There is no reason why the book can't have a chapter on both. I see this book as being more theoretical in the first half, and project based for the second half.

To be honest, I was really impressed by the line followers from the Mexican team at RoboGames.

I do feel that in order for robotics to really take off, you will need real processing power. Take for example a vision system, or object recognition. The ER-1, for example...it takes processing power to do that.

DB

Adrenalynn
07-13-2008, 02:47 PM
Following that train of thought, one could also use a sledgehammer to put a tack through a piece of paper.
A wrecking ball can be used to crush an empty coke can.

As you say yourself, the right tool for the right job; I'm just arguing that I'm not so sure anyone is using the right tool.

Ok, so let's reduce it to practice.

Would you rather have a vacuum that gets stuck, every time it's used, between the sofa and the wall in the exact same place - or would you rather have a vehicle that can successfully travel an undefined course at 45mph obeying all motor vehicle codes and interacting with both human and non-human drivers in a densely populated urban area?

Why do *you* think the brightest minds in the world apply PC's to the problem instead of microcontrollers?

I think it's because they're trying to advance the state of the art rather than cripple it.

metaform3d
07-13-2008, 02:56 PM
Sometimes I open Excel just to add a column of numbers. Sure, it's overkill for that purpose but the point is it's convenient. And if I find I want to do more I can; I can get mean and std dev, I can do a running average, I can plot them. Just because I find I don't need to do that, it's useful to know that once I have my numbers in Excel I have those options.

Same thing with PC-based robotics. The development environment comes with everything standard so it's there if you need it, and if you want to do more your robot brain has some headroom to grow into. That situation is far more flexible I think than having to think about cramming everything into a "right-sized" microcontroller.

Not that there's anything wrong with that alternative -- I just prefer the PC approach.

Adrenalynn
07-13-2008, 03:16 PM
Good point, Meta.

I thought of one more analogy I wanted to explore...


I'm just arguing that I'm not so sure anyone is using the right tool.

If I want to travel from California to Belgium to have this debate in person, obviously the airplane isn't the "right tool". Digitizing my subatomic structure and flinging my quanta faster than light through subspace is probably a better tool. So since the airplane isn't the "right tool", I'm going to doggie-paddle across the ocean.

That'll teach those physicists!

darkback2
07-13-2008, 03:32 PM
Does the Lego NXT count as a microcontroller or a computer?

What about the bobot thingy?
DB

Adrenalynn
07-13-2008, 03:46 PM
Does it have external data and/or address busses? Does it run a full-fledged operating system? Do you need to call OS libraries to get work done?

darkback2
07-13-2008, 04:12 PM
It's funny...I have one...but as an end user I am really far removed from how the NXT works. The bobot thingy falls more under the microcontroller category. You actually have to tell it which pins to turn on.

DB

ScuD
07-13-2008, 04:13 PM
So a washing machine with several programs should have a quad-core CPU?

I completely understand what you're saying, and I agree that it is a lot more convenient to have a large amount of processing power, including overhead. But it's not a universal solution.

On research platforms it is indeed good to have excessive processing power, since you're trying to develop new techniques, etc.
But then why would anyone who writes code take on the hassle of optimizing? Just take a bigger CPU!

In any case, I don't think this is a debate with any right answer, so I'll stop wrecking this thread (but I have to say I enjoy it :veryhappy:)

Adrenalynn
07-13-2008, 04:19 PM
Some of us optimize code on a bigger CPU. Some of us have been pushing the limits of processing power for thirty years, and continue to do so regardless of the CPU.

For me, it's more about whether or not the problem can be solved. As my whining Roomba adequately demonstrates when held up against my small research platform: for an inconsequential cost difference, a microcontroller can't adequately solve the problem of autonomous navigation in an undefined environment, with anything approaching the ease of doing it on a PC.

darkback2
07-13-2008, 04:28 PM
Scud...you aren't ruining the thread. I find the discussion helpful...I'm writing a freaking book. I need to know about all of the landmines that I'll need to avoid.

That said...could more processing power make a washer better?

How about one that senses what kind of clothing you put in it and releases just the right amount of soap, or bleach? Or hell...goes online and orders more soap when it runs out? Or even better...orders new clothes when the ones I have are out of style?

Sorry...typing with my thumbs from the highway.

DB

ScuD
07-13-2008, 04:49 PM
Here's one for the book: The more parts, the faster it breaks!

Seriously though, I've no input on a book, I'm afraid. My brain works too erratically (is that spelled right?) to give you any information on the knowledge I've acquired over the years, or at least a comprehensible list of words resulting, mostly, in sentences.

Having said that, though, if you have any questions regarding electronics, simple dumb microcontrollers (no pun intended; 8-bit experience only), or something about basic mechanics, feel free to ask.

darkback2
07-13-2008, 05:14 PM
Funny that you say that; the second chapter is on what it would take to make a robot that could follow the three laws of robotics.

One of the charts would be about the complexity required, compared with the potential cost of ownership.

DB

ScuD
07-13-2008, 05:26 PM
All I can say is, our mechs will violate those laws in every sense of the word...

darkback2
07-13-2008, 05:28 PM
no...not really.

Except for the last one.

DB

ScuD
07-13-2008, 05:36 PM
A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

1. BBs will bounce off in any direction. Checked and covered by Tyb's work on the arena.
2. A bad packet/interference/et al. results in firing when it was not ordered. Could occur during testing, while not in the arena, thus breaking rule 1.
3. If I run over my mech with my car, it won't dodge. But you stated that one already :)

darkback2
07-13-2008, 05:52 PM
ok...you win.

DB

sam
07-13-2008, 06:08 PM
I thought of one more analogy I wanted to explore...

If I want to travel from California to Belgium to have this debate in person, obviously the airplane isn't the "right tool". Digitizing my subatomic structure and flinging my quanta faster than light through subspace is probably a better tool. So since the airplane isn't the "right tool", I'm going to doggie-paddle across the ocean.

That'll teach those physicists!

It's an easily debatable point; not the doggie-paddling across the ocean, but more the approach-to-a-problem thing.

I have to say that when size is not an issue, I would prefer to use an onboard PC, but a lot of things are possible at a hobby level with a couple of microcontrollers. (This is not a mean statement.) When does a robot need a PC to get past a problem? Most rovers that navigate through a room can get by with a microcontroller (if they aren't mapping the place at the same time). Hobby humanoids use microcontrollers with preprogrammed software. It's usually the high-end projects (unmanned cars, ASIMO) that need a lot of processing power, because of all the sensors they use (expensive to get the best resolution, thus needing more processing power).

Anyway, I can see that both are valid choices for hobby projects. I don't think I will ever spend close to a million to make an autonomous car anyway; don't have the money ;)

Adrenalynn
07-14-2008, 12:30 AM
Why? SBC PCs are the same size as fast microcontrollers. The PicoITX is the same size as the Hammer RDP, give or take a few nanometers, and at least 15x more capable. And the price difference is inconsequential in quantity.


Most rovers that navigate through a room can get by with a microcontroller

Yeah, the ones that get stuck between the sofa and the wall repeatedly... I love microcontrollers. I own just about every microcontroller you can name from the last 25 years. I just see them in the purpose for which they work well: preprocessing and interfacing to sensors. Everything else is toys. Bold statement, but I'm going to make it right here. Everything else is toys.

Even assuming that every rover made won't get stuck in a corner with nothing but an 8-bit, 16-20MHz processor with a few KB of RAM onboard - that's great. I can word-process just fine using WordStar on my Z80 luggable Kaypro I. I can run spreadsheets fine in SuperCalc. I vote we toss all modern >8-bit computers in the ocean. Better yet, let's just go back to drawing on cave walls with charcoal. Better yet: that whole language thing is kinda inconvenient. Let's just unwind 20,000 years of advancement because we simply don't believe in PROGRESS...

Advancement in computers was stagnant for the first several years of the '80s. The first microcomputers were more of the same. We got on a roll when we took a page from the mainframes and went to the >8-bit word. We *really* started to roll when we figured out off-chip memory and storage.

Yes, there are microcontrollers with a couple hundred K of program memory and >8-bit words. And you get to start all over from scratch every time, because you don't inherit all the work that went into the OS and tools.

This whole discussion is what's wrong with robotics today, imho... We're blindly having exactly the same argument that kept microcomputers from crawling out of the primordial mainframe ooze. I watched it the first time and was too young and inexperienced to complain. I certainly get to throw a fit this time, watching us blindly make the same mistakes and the same excuses for them.

My washing machine has over 400MHz of total processing power, btw... Stock. And it cleans better and more gently than anything I've ever seen... Correlation? I leave that to the interested reader...

darkback2
07-14-2008, 01:24 AM
Here are the chapter concepts. I'll try to flesh out the what would it take section over the next few days. I'm going to the library tomorrow to get some examples from the robot books. I haven't read them in a while.

Where are all the robots?
Sci-fi robots vs reality

What would it take?
Isaac Asimov’s three laws...what would it take to build a robot capable of following the three laws?

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey orders given to it by human beings, except where such orders would conflict with the first law.
A robot must protect its own existence as long as such protection does not conflict with the first or second law.

What makes a robot a robot?
- Sensor/Input
- Processing
- Actualization

Actuators
Different types of actuators

Sensors
Different types of sensors, how they work

Processors

Frame materials

Types of robots (each of these will be accompanied by a project that can be done in the home)

- Robots in the Home
Rovers
Work Cells / Arms
Walkers
Humanoids

Culmination
- Given what you have learned...are you still interested?

Adrenalynn
07-14-2008, 01:29 AM
>> Isaac Asimov’s three laws...what would it take to build a robot capable of following the three laws?

For all the humans to be extinct. That's the only way it could ever happen. The moment that everyone agrees universally not to build robots to harm other humans is the exact moment I'll become Supreme Benevolent Dictatress For Life with my army of Killer Robots and Jack Russell Terminators.

darkback2
07-14-2008, 01:41 AM
Sorry...not exactly the angle I'm going for. I was thinking more along the lines of...what would a robot need in order to follow the rules? For example, a robot would need to be able to differentiate a person from a dog...tree...door...garbage can. What would it take to make a robot that could universally identify a person?

A deep understanding of physics. My dad operated on a guy who ran himself over with his own car. The man was driving down the driveway when he hit his garbage can. He got out and pulled the garbage can from under the front bumper. The car then ran him over.

A robot would not only have to be able to recognize that the man was a human, but would also have to understand that in pulling the garbage can out from under the car, the man would allow the car to roll forward and run him over...and then, in addition to all of that, be able to stop the car.

Of course, the easiest course of action would be to drive the car for the man and not hit the garbage can.

Please though, everyone...understand...I'm not in love with the three laws of robotics...I'm just using them as an example of a robotics problem...and a brainstorming activity.

DB

Adrenalynn
07-14-2008, 02:22 AM
I understand where you're going. Just observing human nature. And if we're going to have service robots a deep understanding of human nature is a requirement.

I'm also cynical in exploring the odds of our creating a creature better than ourselves. I haven't seen any examples where outlawing stupidity and a lack of foresight has ever been locally, let alone universally, successful.

We lack any great depth of insight or foresight in general - given that we haven't begun to master it in ourselves, how is it we expect to synthesize it?

The pervasive nature of the average person's stupidity never ceases to amaze me. How is it that someone who survived to adulthood couldn't have the foresight to think "if I unstick this thing that is holding back my 3000 lbs of steel and iron whilst standing behind it, bad things are likely to happen"?!? *I* couldn't be a trauma doctor or EMT. I'd have to laugh and watch them die, then light a candle in homage to Charles Darwin.

Understand, I don't question your story at _all_. It's not even surprising - my opinion of humanity as a group is low enough that the story doesn't affect me at any deep level. It's just one more example.

It's only a matter of time, imho; Chuck will have the last laugh...

To bring it back to the point, though, your example is only one teeny little sample. Can *you* envision the logical ruleset that answers *every* such technical and/or moral constraint? I know I can't. I'm egotistical, but not THAT egotistical... The logical and moral constraints that we face are NP Complete. (For the non-math-peeps: there's no known way to solve such problems exactly in any reasonable time. There are no "right" answers, only "less wrong". The more time you spend, the [hopefully] less wrong you get.)

If I have a securitybot protecting me and my property and someone points a firearm at me, I suspect I'd like to see the "rules" get thrown out. Any good bodyguard wouldn't hesitate to "solve" that NP Complete problem. Harming people is wrong. He is morally, professionally, and legally bound to protect me. Harming the person he's facing in my defense is wrong under the "rules". But it's "less wrong" than permitting the perp to kill me. The Bot's inaction would cause me harm. His action would cause someone else harm. Most likely you could iterate until long after the sun burns out and that problem will be no closer to a perfect solution. The "laws" are full of logical fallacy.

metaform3d
07-14-2008, 02:27 AM
That said...could more processing power make a washer better?

Great question -- but the wrong one. The real question should be: could more processing power make the washing machine designer come up with a better design? I think the answer there is an unqualified yes.

The iLush for example is little more than a glorified vending machine. Most of what it does could be accomplished with a fairly low-end microcontroller, and yet I chose to drive it with a $2K laptop. The advantage for me is that I can explore a lot of software options very quickly. At RoboGames I blew out a motor controller and had to integrate a totally different one. Due to the modular software design and the fact that the same computer that compiled the code is running the code, it took about 15 minutes to write a new actuator class. I also made some other on-the-fly changes that would have been impossible with the arm's length approach of running on a microcontroller.

If I wanted to go into production there would be some serious cost/power/weight tradeoffs to consider, but that's not an issue doing development. Don't make decisions from a mass-production mindset so early that it hamstrings your creativity. Seems to me that in robotics right now development talent is the bottleneck, not the hardware. No reason to put artificial limits on it.

darkback2
07-14-2008, 02:31 AM
If I have a securitybot protecting me and my property and someone points a firearm at me, I suspect I'd like to see the "rules" get thrown out. Any good bodyguard wouldn't hesitate to "solve" that NP Complete problem. Harming people is wrong. He is morally, professionally, and legally bound to protect me. Harming the person he's facing in my defense is wrong under the "rules". But it's "less wrong" than permitting the perp to kill me. The Bot's inaction would cause me harm. His action would cause someone else harm. Most likely you could iterate until long after the sun burns out and that problem will be no closer to a perfect solution. The "laws" are full of logical fallacy.

Ok...what if the robot could stop the person who is trying to shoot you without harming them? For example...grab the gun out of their hand, or put itself between the gun and you...laws not violated...person protected. (Of course it would be easier to just shoot the bastard.)

Again though, even in your example, the robot would still need to recognize that you are a human being, that the person aiming the gun at you is a human being...that the gun is a weapon...that the gun can kill you...that the gun is actually pointed at you...

Another way...what if all guns were fitted with a robotic chip? I point the gun at a person, the gun won't fire. I point it at a rabbit...the gun does fire. (Poor rabbit.) Then the robot wouldn't have to do anything.

The same goes for the guy with the car...what if the car was a robot? Then it wouldn't have hit the garbage cans to begin with. Or maybe it would have...if it was controlled by a microcontroller.

just kidding.

DB

Adrenalynn
07-14-2008, 02:40 AM
What if the person removed the microchip from their firearm? The robot assumed he couldn't shoot me with it, and I just got shot with it. Boom. Logical fallacy. Game over.

It's an NP Complete problem. There is a matrix of "yes, but" on both sides approaching infinity. For every counter I give you can counter, then I'll counter, then you'll counter, I'll counter, [...]

During the '07 DARPA Urban Challenge there was the first known robotic fender bender. One vehicle got confused at an intersection, and (I think it was Stanford) the autonomous vehicle behind it got ticked off and decided to stop waiting. It went to pass the first vehicle at the same moment that the first vehicle made up its mind and stepped on the gas. Wham! That was the most human mistake I've ever seen. One 'bot couldn't make up its mind at an intersection (Tourist-Bot anyone?), and the SUV driver behind it got testy and impatient. How many thousand times a day does that happen? Why did it happen? Because we create life in our own image. The only image we can possibly BEGIN to comprehend.

Adrenalynn
07-14-2008, 02:47 AM
The "laws" are fiction. The sooner we geeks accept that fiction is fiction and that Asimov just used them as a vehicle to underscore human nature, the less time we'll waste thinking about 'em.

Again - I understand that you're using that as a sample problem - I just think it's ill-advised to continue to propagate fiction and ruin future roboticists. How about solving a practical problem like the two-robots-at-an-intersection problem described above. It's also NP Complete, and a *very* real problem we'll all be facing, requiring no flexible morality issues.

The obvious answer is: Don't pass other vehicles. Alas, the vehicle in front might have been stalled or experienced a mechanical failure. In that instance passing it is a requirement - when "safe to do so". But then that sets up the robotic-fender-bender. NP Complete. So iterate that problem and explore the potential for solution...

metaform3d
07-14-2008, 02:49 AM
Please though, everyone...understand...I'm not in love with the three laws of robotics...I'm just using them as an example of a robotics problem...and a brainstorming activity.

I think the best thing to learn from T3LoR is that they represent very old-fashioned ideas about Artificial Intelligence. Asimov, like everyone else at the time, thought that robot brains would be "logical" in the sense of slavishly following logical rules. His positronic brains were the technological MacGuffins that enabled something like real-time hypothetico-deductive reasoning. His stories were all essentially closed-door mysteries, where all the information is known to everyone involved and they just have to reason through the consequences of applying T3LoR to the given situation.

We now understand things a little better. A.I.s won't be any more rational than we are because they can't be. Real-time responses require dealing with incomplete information. Heuristic programs and best-guessers are always going to outperform deep thinkers; good-enough is the rule.

Sorry, more to say but I have to get ready for a trip.

darkback2
07-14-2008, 03:07 AM
Part of why I'm bringing up the three laws of robotics is precisely the point all of you have made: to point out how difficult even the most basic parts of attempting to follow them would be, while also giving a bit of my own background. My first experience with the concept of robotics was reading Robot Dreams. That was what led to me getting my Omnibot. (I still have it, though it's not functional.) I honestly believed that I would buy a robot that I would train, and which would then go and work for me so that I wouldn't have to.

I was maybe in middle school, though probably younger when I read that book.

The funny thing is that I loved the stories because they were really mysteries. Mysteries about robots! Now look at me. I'm arguing with people over the internet at one in the morning about a chapter in a book that hasn't even been written...a book about robots!

DB

Adrenalynn
07-14-2008, 03:20 AM
There's a saying about arguing with people on the IntarWeb. But I've probably offended enough people for one day.

Oh - hey, look at that! It's after midnight! A whole new 22:40 to offend people! :D

darkback2
07-14-2008, 03:45 AM
No offense taken, and I love the fact that people are talking this much about this. The laws of robotics...hmmm. I guess it's good that my sister's Roomba doesn't try to kill my niece in her sleep...but the robot doesn't do that on purpose...

Oh...and I'll get to the "better" problems in the chapter about sensors, especially the section on robot vision.

Like in the first year of the DARPA Grand Challenge, when a car got stuck because of a shadow.

DB

Adrenalynn
07-14-2008, 04:30 AM
The Grand Challenge/Urban Challenge has probably a thousand years of engineering problems every year. It's a great place to look for hard problems...

LinuxGuy
07-14-2008, 12:39 PM
The Hammer is not a microcontroller. It's a PC. Guess what OS runs on an ATTINY or an ATMEGA? On a PICAXE or a KicChip? On a C-Stamp or on a BASIC Stamp?
This is just silly, and it surprises me coming from you. The Hammer is NOT a PC. Just because a microcontroller runs an OS, that does not mean it is a PC. The amount of processing power the Hammer has, along with its other resources, is very different from what a PC has. When was the last time you saw a PC with an analog input or SPI? The Hammer is still a microcontroller, not a PC.


Microcontrollers don't boot off of thumbdrives. They don't "boot-up" at all. You put power on them and they run code. Realtime, interrupt-driven code [ISR's], [generally] without external data or address busses.
Every microcontroller boots up. They all have to initialize registers and I/O ports. The Hammer can also be programmed bare-metal, without an OS. Try doing that with a PC and see how far it gets you. Oh, it can be done, but I wouldn't want to deal with it. That's what OS and kernel developers do. In any case, all microcontrollers have to boot up to be ready to do what they have been programmed to do.


I thought your point was that one could build a truly autonomous robot of any complexity utilizing only a single chip? Where'd I miss it?
This was never my point in this thread. You've apparently either badly misunderstood something or made some assumptions as to what I or somebody else meant.


More to the point, where did MIT, Berkeley, CalTech, Stanford, Virginia Tech, SUNY, GM, University of Utah, NASA, JPL, et-al miss it?
Just because these "big boys" have lots of money and supposedly the best minds available to them, it does not automatically follow that they can and/or will create the best software or other things for a given purpose. Very smart people can miss the boat sometimes too, albeit hopefully far less often than others.

8-Dale

Adrenalynn
07-14-2008, 12:54 PM
So it doesn't have external address and data busses?

What is *your* definition of a microcontroller? If a PC is really slow, is it a microcontroller?

Alas, the Hammer is not a microcontroller. It's an SBC - a Single Board Computer - which is just a PC with an A/D.

And, incidentally, I've yet to see a PC that doesn't have a SPI bus lurking somewhere on the board - it's just not exposed to the casual user.

Initializing registers != booting an operating system. And it's no more difficult to boot a Wintel PC from the bare metal than it is the Hammer. In fact, it's easier - the process has been well known for 20+ years longer. I was writing my own OS' for PCs in high school...


Just because these "big boys" have lots of money and supposedly the best minds available to them, it does not automatically follow that they can and/or will create the best software or other things for a given purpose. Very smart people can miss the boat sometimes too, albeit hopefully far less often than others.

If you would, please give us an example of accomplishing the same task they've accomplished without utilizing the methodologies they've employed.

LinuxGuy
07-14-2008, 04:52 PM
So it doesn't have external address and data busses?
The CPU the Hammer uses, the Samsung S3C2410A, does indeed have an external data/address bus. However, it is not accessible on the Hammer module due to pin count issues - it has only 40 pins (standard DIP 40).


What is *your* definition of a microcontroller? If a PC is really slow it's a microcontroller?
My definition really shouldn't matter. However, for the sake of discussion:

Microcontroller: can stand alone and perform all functions required of it. It may or may not be capable of running an RTOS or an OS such as Linux or Windows CE. It does not require support chips to perform its functions, and it has on-chip RAM and Flash (for program storage). Software for it is generally cross-built on a much faster host system with more resources. Building software natively on a microcontroller is either impossible or not recommended, due to resource restrictions (RAM and Flash).

Now, tell me when Intel last put an A/D, DAC, and/or GPIOs on one of their PC processors. When you talk about a PC, you are talking about much more than a single processor, and that processor requires much more support circuitry than an embedded microcontroller would ever require. You just can't equate a PC to an embedded microcontroller or SBC. Now, if you want to compare any Intel PC-type processor to a microcontroller, we can continue this, but otherwise I think it's pointless.


Alas, the Hammer is not a microcontroller. It's an SBC - a Single Board Computer - which is just a PC with an A/D.
Hammer is not an SBC. It is a microcomputer module, and it requires external components to interface to the outside world. This may or may not include level shifting to convert its 3.3V I/O to and from 5V external devices. An SBC does not require additional circuitry to interface to the outside world and is completely self-contained - it does not require external video, memory, etc. In many cases, RAM and Flash can not be expanded.

Your definition of an SBC is also flawed. SBCs are basically microcontrollers or microcontroller modules with support circuitry to interface to the outside world. This may or may not include level shifting for interfacing, connectors, etc. SBCs generally don't require as much external support circuitry as a PC would. SBCs may also include additional data (RAM) and/or program (Flash) memory. If an SBC contains an x86-type processor, it has everything required on a single board.


And, incidentally, I've yet to see a PC that doesn't have a SPI bus lurking somewhere on the board - it's just not exposed to the casual user.
This is exactly what separates a PC from a microcontroller - things such as SPI and I2C are often NOT made accessible to the outside, for many reasons. Just because they are there, it does not make the PC comparable to a microcontroller as far as the functions that are available. The microcontroller, or microcontroller module, has ALL of its functions (those that are designed to be used) exposed for use in whatever way the user chooses. Not all microcontroller modules have all the functions of their processor exposed, though, due to design considerations or constraints.


Initializing registers != booting an operating system. And it's no more difficult to boot a Wintel PC from the bare metal than it is the Hammer. In fact, it's easier - the process has been well known for 20+ years longer. I was writing my own OS' for PCs in high school...
I never said booting up had to have anything to do with booting an operating system. Bare metal means you aren't using any operating system. Have you programmed an Intel CPU this way, with no OS at all? Personally, I wouldn't want to! However, you are still talking about an OS, not programming the processor with no aid from an OS. Program a robot using an x86-based board with NO OPERATING SYSTEM and tell me how far that gets you. Try programming a robot or other embedded system using an x86 processor all by itself - it does not have everything required on the processor. It is not a System on Chip.

Oh, and I never once indicated that any robot of any complexity could be created from a single module.

8-Dale

sam
07-14-2008, 05:21 PM
LinuxGuy's definition makes a lot of sense to me :rolleyes:

All your talk is somewhat making me dizzy.

There are interesting points here on the 3 laws of robotics.

Adrenalynn
07-14-2008, 05:38 PM
I'm going to let this go as it rampages off into silly-land...

[great big post deleted]

You do know that your Intel chipset has A/D exposed to everyone, right? More channels of A/D than the hammer, most likely. And you can get to them without taking the cover off.

Just thought I'd check...

DresnerRobotics
07-14-2008, 05:50 PM
Of course, the easiest course of action would be to drive the car for the man and not hit the garbage can.

Why stop there? Why not just pacify all humans in stasis chambers so we can't fight and bicker among ourselves? :D

LinuxGuy
07-14-2008, 07:07 PM
I'm going to let this go as it rampages off into silly-land...

[great big post deleted]
Uh huh, OK. Silly land, eh? :P So since when is defining something "rampaging off into silly-land"? You asked for my definition and I gave it.


You do know that your Intel chipset has A/D exposed to everyone, right? More channels of A/D than the hammer, most likely. And you can get to them without taking the cover off.
The entire point is that an Intel CPU requires additional support chips, whereas a microcontroller does NOT require such chips, though it certainly may include them on an SBC for a given application.

Why not just address the additional questions I've asked and the definition(s) I've provided? An Intel x86-class CPU simply can not be used by itself, but a microcontroller can be used by itself, with no additional support chips required.

BTW, I can point to several ARM7 processors that have external memory and data buses as well as advanced memory controllers.

8-Dale

Adrenalynn
07-14-2008, 07:28 PM
How many support chips are on your hammer?

darkback2
07-14-2008, 08:29 PM
Sorry...I'm thinking a new thread is in order.

DB

Adrenalynn
07-14-2008, 09:39 PM
nod. Agreed.

LinuxGuy
07-14-2008, 10:02 PM
How many support chips are on your hammer?
There are no additional processor support chips on the Hammer. Its processor is an SoC.

8-Dale

Adrenalynn
07-14-2008, 10:05 PM
We'll split this thread out once I have perms and take it up there. DB has asked us to stop polluting his thread.

scorch
07-15-2008, 01:04 PM
Hey,
Can I buy your book when it comes out? It sounds like I can really learn something from this book you want to write. I am only a novice when it comes to robotics, and the more I learn, the more confident I will become. It also sounds like many people are going to back you up and give you many other ideas. With all of their help, your book will be famous! If I order a book from you, then I want it autographed, okay?

Good Luck!

P.S. If you need someone to back you up, you can always come to me. I may not know a whole lot about robotics right now, but I can always do my best to help.

darkback2
07-15-2008, 02:51 PM
Hey scorch,

I'm going to be publishing each chapter as I finish it on this thread long before I ever approach a publisher. The way I see it, so much of what is going into the book will end up coming from the people on this site that I could hardly call it "mine". So you won't have to buy a copy.

DB

ScuD
07-15-2008, 03:53 PM
No matter how much of the chapters have been on here before, I'm still buying that book :wink:

Though the internet is an insane portal of information, nothing beats good old paper and ink for a quick reference or night-time reading

Adrenalynn
07-15-2008, 04:28 PM
Absolutely. And texture and smell... Books are about the EXPERIENCE.

ScuD
07-15-2008, 04:32 PM
easy there, tiger :p

darkback2
07-15-2008, 04:46 PM
Great, so I'm sitting here in a coffee shop, barely three pages into the second chapter (which I am writing first), and I already have orders! Do you know how much pressure that is? Especially given this crowd, and how knowledgeable everyone is.

AHHH the pressure!

ScuD
07-15-2008, 04:59 PM
So, when are those first three pages up?


:tongue: just messin' with ya.

Take all the time you need; I just meant to say that I will buy that book. I want a library of technical/robot/mechanical books, but I haven't gotten far yet...

scorch
07-15-2008, 05:10 PM
I am going to have to agree with ScuD on this one. For one thing, I can carry a book everywhere, and the pages are all together in one binding lol. I also agree with Jodie, books smell good lol. So, when it comes out, I still want to buy a copy from ya, and I still want an autograph.

darkback2
07-16-2008, 02:30 AM
OK, here is what I have so far. My wife has only managed to read the first 8 pages, and there is a whole story about a fictional robot called a Plooby that still needs to be written. I'm also thinking some pictures and diagrams are in order. Oh yeah...and the formatting is all messed up. When I'm reasonably done with this chapter I'll upload it as a PDF.

PS, I know there are simpler solutions to some of the problems presented here. I'm using these solutions for educational purposes only.

Enjoy

DB

Chapter 2
The "Laws" of Robotics

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey orders given to it by human beings, except where such orders would conflict with the first law.
A robot must protect its own existence as long as such protection does not conflict with the first or second law.

Isaac Asimov developed these laws with the help of John Campbell in the early 1940s[1], and used them in a series of short stories and four novels about robots. While roboticists argue about the necessity, value, and plausibility of the above laws, I feel that these laws represent a noble goal for roboticists to aspire toward, even if battle bots or mech wars seem like more fun.

So, with all that said, let us explore what it would take for a robot to be able to follow the three laws of robotics. Even at the most basic level, a robot would have to be able to store the above information, check each of its own actions and the actions of those around it against the three laws, and formulate the correct response. That alone would require stack upon stack of if-then-else commands and libraries' worth of information on any host of subjects. With that in mind, let's break down the rules even further and first look at the third law, which in my opinion is probably the easiest of the three to follow, so long as you ignore the first and second laws.
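
To make the scale of that concrete, here is a deliberately naive sketch of just the top of such an if-then-else stack. Every type and predicate in it is hypothetical, and each stub hides an enormous unsolved problem (recognizing humans, predicting harm, and so on); the stubs exist only so the sketch compiles.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical model of a proposed action; a real robot would fill
   this in from its sensors and world model. */
typedef struct { int id; } Action;

/* Each predicate below hides an enormous unsolved problem. */
static bool harms_human(const Action *a)          { (void)a; return false; }
static bool inaction_harms_human(const Action *a) { (void)a; return false; }
static bool endangers_self(const Action *a)       { (void)a; return false; }

/* Check a proposed action against the three laws, in priority order. */
static bool action_permitted(const Action *a, bool ordered_by_human)
{
    if (harms_human(a) || inaction_harms_human(a))
        return false;               /* first law trumps everything   */
    if (ordered_by_human)
        return true;                /* second law: obey              */
    if (endangers_self(a))
        return false;               /* third law: self-preservation  */
    return true;
}

int main(void)
{
    Action wave = { 1 };
    printf("permitted: %d\n", action_permitted(&wave, true));
    return 0;
}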

A robot must protect its own existence

What would it take for a robot to protect its own existence? This is easy if you're a small household robot floating around in someone's living room. Simple: don't allow any other object to occupy the same space/time as you, don't fall off any cliffs, and don't run into the fireplace. Provided our little friend follows the above directions, it should be able to maintain its own existence for a reasonable amount of time.
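
For a robot like that, the protection logic really can be that short. Here is a minimal sketch of the sense-and-react loop; the sensor helpers are hypothetical (stubbed so the sketch compiles) and the thresholds are invented for illustration.

/* Minimal self-preservation loop for a small household robot: don't
   hit things, don't fall off cliffs, don't approach the fireplace. */
static int  range_ahead_cm(void) { return 100; } /* forward IR/sonar      */
static int  floor_range_cm(void) { return 3;   } /* downward cliff sensor */
static int  ambient_temp_c(void) { return 22;  }
static void stop_and_turn(void)  { }
static void drive_forward(void)  { }

#define TOO_CLOSE_CM 15   /* don't occupy the same space as anything else */
#define CLIFF_CM      6   /* floor farther than this means a drop-off     */
#define TOO_HOT_C    45   /* stay away from the fireplace                 */

void preserve_self_step(void)
{
    if (range_ahead_cm() < TOO_CLOSE_CM ||   /* obstacle ahead  */
        floor_range_cm() > CLIFF_CM     ||   /* cliff detected  */
        ambient_temp_c() > TOO_HOT_C)        /* getting too hot */
        stop_and_turn();
    else
        drive_forward();
}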

One question we should ask, though, is whether a robot that simply avoids contact with much of anything is really preserving its own existence. My niece has a Furby, a robotic children's toy.[2] The Furby wiggles around, and even makes angry noises if it is handled roughly. To be honest, it is pretty darn cute. That said, when my niece left her Furby outside and it was caught in a hugely unseasonable Los Angeles summer shower, the Furby suffered a rather pitiful death. It squeaked and croaked until finally it wouldn't work at all.

Of course, the Furby is a really bad example, because the poor thing couldn't move to avoid its own destruction. So what about a robot that can move? What would it take for a mobile robot, let's say a children's companion robot, to be able to protect its own existence?

Enter the fictional robot, Plooby. Plooby rolls around on tank treads and has a furry outer coating. It's cuddly and cute, and has a trunk with four degrees of freedom which it uses to hug and play with the children it looks after. The robot has pressure sensors in its outer skin, and employs sweeping infrared detectors, ambient light sensors, moisture detection, an advanced mapping algorithm, a speaker, a microphone, and a self-locating docking bay for recharging its batteries. Of course, Plooby would have to be waterproof to avoid the fate of the Furby, and have a shock-resistant coating as well as self-righting capabilities. All in all, it would take a servo controller capable of controlling 6 standard and 2 continuous-rotation servos, and a microcontroller capable of handling all of the input and formulating outputs.

Insert Plooby Story

One of the things that Isaac Asimov's robot stories point out so eloquently is that a programmer/operator cannot possibly think of every contingency. Take for example the robots competing in the DARPA Grand Challenge. In its first years, robot cars were required to navigate a 150-mile course.[3] Simple algorithm conflicts ended a robot's potential for completing the course. Take for example a robot using LIDAR to map a region. LIDAR uses lasers to create a three-dimensional map of an area.[4] LIDAR can be fooled in a number of ways, including interference from other LIDAR detectors, dark colors, and odd angles, to name a few. The infrared sensors used in Plooby work in a similar fashion, and can be fooled in similar ways.[5] Plooby's failure resulted not from a failure in the equipment, but from the failure of his programmer to account for all of the possible problems/interactions of environment and robot. The larger point is that it would be impossible for a programmer to account for everything.


(WHY IS THIS INFORMATION HERE? wife annotation)

Realistically this isn’t that hard of a challenge. Most people’s homes are relatively safe. Robots generally don’t eat, so we don’t have to worry about them getting into something under the counter.

Again we have to ask ourselves whether Plooby, or any other robot, could accurately protect itself from harm, and what is involved in completing this task. How can a robot account for the behavior of other objects around it, and does it need to? Plooby could just as easily be demolished by the family dog, or be crushed after causing someone carrying something heavy to trip. Plainly put, it would be incredibly difficult for a robot to actively avoid all possible negative outcomes without the robot having some control, not only over itself, but also over the things around it.

Let's look back at our Furby example. The Furby was simply left outside, where it encountered a harmful substance in the form of rainwater. What if the Furby had been able to remind my niece not to leave it outside?

This brings up what I suppose is the fundamental problem faced when automating any robot: how to tell the difference between two states. How might a robot know the difference between being outside and inside, for example? What are the real differences between inside and outside? Light, sound, and even wind currents are not consistently different between the two places. The color scheme might be different, but again, robots deal in constants. While the walls in my house are painted yellow, a commercially viable product would have to work in a myriad of environments. What about a house with green walls, or wallpaper? While vision systems are incredible, they are not the be-all and end-all of robotic sensors. So let's explore some other ways for a robot to differentiate between being inside and outside.

You could put a radio transmitter in the charging station and have it emit a signal that only travels a short distance. The robot loses contact with the transmitter, and it believes it may be outside of the house. Unfortunately, this creates a problem as quickly as it solves one. What happens when the robot loses contact with the transmitter because it has wandered into a part of the house that the signal can't reach? For example, there is a place in my kitchen where I simply do not have WiFi access. The transmitter is on the second floor, directly above the kitchen, but for some reason the signal can't get through the floor. Similarly, if the radio signal is strong enough to reach the entire house, it will invariably be accessible from outside the house as well. While I don't have access in my kitchen, I do have WiFi access on my uncovered deck.
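
Here is a sketch of how the robot might actually use such a beacon. The signal-strength units, thresholds, and tick counts are invented for illustration, and the "maybe" state exists precisely because of the kitchen dead-spot problem above.

/* Beacon-based inside/outside guess with hysteresis, so one dead spot
   in the kitchen doesn't immediately convince the robot it's outdoors. */
typedef enum { INSIDE, MAYBE_OUTSIDE, PRESUMED_OUTSIDE } Location;

#define LOST_DBM   (-90)  /* weaker than this counts as "signal lost"  */
#define FOUND_DBM  (-80)  /* must recover above this to count as found */
#define LOST_TICKS 50     /* how long "lost" must persist              */

Location update_location(int rssi_dbm)
{
    static Location state = INSIDE;
    static int lost_for = 0;

    if (rssi_dbm < LOST_DBM) {
        lost_for++;
        state = (lost_for > LOST_TICKS) ? PRESUMED_OUTSIDE : MAYBE_OUTSIDE;
    } else if (rssi_dbm > FOUND_DBM) {
        lost_for = 0;
        state = INSIDE;
    }
    return state;   /* between the thresholds: keep the previous guess */
}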

You could use an invisible fence system. My wife and I tried to use one of these with our cats. Simple...you wrap a wire around your house and plug each end into a transmitter. As the robot approaches the wire, it receives the signal from the wire and believes it may be outside. Again, here you have the same problem you had with the radio signal. For my cats, we set up an invisible fence outside, surrounding our yard. If we turned the signal up high enough that the cats could not leave the yard, then there were places in the house where their collars would pick up the signal. More dangerously, there were places in the house where the cats would become stuck, because the transmission field created a pocket that the cats could enter but could only leave through a very narrow gap. This left the cats stranded without food or water.

Unfortunately, turning the transmitter down low enough to solve the above problem resulted in the cats walking right through the invisible fence at weak points in the transmission field. Also, if the cats were moving quickly enough, they could get through the entire transmission field without its registering that they had left.

How about another approach: Northstar is a robotic localization system that projects two invisible dots on the ceiling of a room. A robot can key on the location of those spots on the ceiling to identify its position in the room.[6] This requires that the robot have an unobstructed view of a flat ceiling and know which room it is in. When the robot does not see the spots on the ceiling, it knows it is outside. While mass production would significantly reduce cost, the Northstar projector/receiver kit costs $1750.00, and extra projectors are $595.00. Perhaps we should look for a more reasonable solution.[7]

We could accomplish something similar using a modified Wiimote and two IR emitters. These could be mounted in the ceiling in similar fashion to a Northstar system, and are available for a lot less. The Wiimote is a camera coupled with a three-axis accelerometer.[8] The camera picks up light emitted by two infrared bulbs, and compares the relative location of the infrared signals with the information from the accelerometers to position a cursor on the screen, or control a character in a video game. In this case, what happens when the robot wanders under the kitchen table and loses the signal, or picks up the infrared light emitted by candles?
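
A sketch of the geometry involved; the emitter spacing and camera focal length here are invented for illustration. Note that the moment fewer than two blobs are visible (the kitchen-table case), there is simply no answer to give.

#include <math.h>

/* Pixel coordinates of one IR blob as reported by the camera. */
typedef struct { double x, y; } Blob;

#define EMITTER_SPACING_M 0.30    /* real distance between ceiling LEDs */
#define FOCAL_PX          1320.0  /* approximate focal length in pixels */

/* Estimate distance to the emitter pair from their apparent separation.
   Returns a negative value when no fix is possible. */
double emitter_distance_m(const Blob *blobs, int n_visible)
{
    if (n_visible < 2)
        return -1.0;   /* under the table, or a candle fooled us */
    double sep_px = hypot(blobs[0].x - blobs[1].x,
                          blobs[0].y - blobs[1].y);
    if (sep_px <= 0.0)
        return -1.0;   /* degenerate reading */
    return EMITTER_SPACING_M * FOCAL_PX / sep_px;
}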

The simple fact is that any method of controlling/manipulating/informing a robot will have its shortcomings. Any form of input can be fooled. Even our own eyes, coupled with our highly powerful object recognition and edge detection software, can be fooled. Hence, optical illusions. The programmer cannot account for every contingency, and there is no perfect system for robotic navigation.

I currently employ a lot fewer sensors than Plooby, and I am usually pretty aware of when I am outside compared to when I am inside. Not only that, but I can differentiate inside from outside in a great variety of places. Now if only I could figure out how I do that.

So instead, Plooby will not be able to tell inside from outside on a consistent basis. In reality, all Plooby needs to know is when Plooby is getting damaged from being outside, and have some way to correct the situation before major damage is done. Plooby will have to rely on his temperature and moisture sensors, meaning he may complain near the bathroom while someone is taking a shower, or when he gets too close to the dryer or stove. We will have to remind my niece to be careful where she leaves her toys.

Now let’s add the second law of robotics, and again, we will leave out the whole part about conflicts with the first law. What would be required for a robot to obey the orders given to it by human beings?

For now, we will set aside the “human beings” part and simply leave it at “obey orders.” A robot would have to be able to understand those orders and formulate a proper response. Suppose I’m sitting on the couch and I say, “Robot...get me a beer.” In order to follow that order a robot would need to:

Understand spoken English and be able to differentiate my voice from the clutter of background noise and the television.
Know what a beer is.
Know what a “me” is.
Know where to get a beer.
Know how to get there.
Know when it “has” the beer.
Know how to get a beer from where it was to where I am.

To be honest, this is another difficult task. So let’s start by tackling the vocal command. How might a robot differentiate between background noise and the actual command? One method is to continuously sample or record the ambient sounds in its environment.9 The computer identifies the frequencies around it, and subtracts those frequencies from what it is listening to. One of the problems with doing this is that if the robot subtracts too much of the sound around it, it won’t be able to hear when a person gives it a command. Then again, my wife and I have trouble understanding each other in the car when the radio is on. This is especially true when my wife is asking me to take care of things around the house. The task of separating out background noise is incredibly difficult to do accurately and is processor intensive. This is why a lot of voice recognition software requires the user to wear a high quality headset.
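
The frequency-subtraction trick described above is usually called spectral subtraction. Here is a toy version in Python using numpy. Real systems window and overlap their audio frames, but the core idea, and the risk of subtracting too much, are both visible here.

    import numpy as np

    def noise_profile(ambient_frames):
        # Average magnitude spectrum of known "quiet room" recordings.
        return np.mean([np.abs(np.fft.rfft(f)) for f in ambient_frames],
                       axis=0)

    def subtract_noise(frame, profile):
        spectrum = np.fft.rfft(frame)
        magnitude = np.abs(spectrum)
        # Subtract the ambient estimate, flooring at zero. Subtract too
        # much and the command disappears along with the noise.
        cleaned = np.maximum(magnitude - profile, 0.0)
        phase = np.angle(spectrum)
        return np.fft.irfft(cleaned * np.exp(1j * phase), n=len(frame))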

How do we keep the robot from obeying commands it hears over the television? What happens when you are watching a detective show and the bad guy yells “get down!”, or there is a restaurant scene where the characters are ordering food? In this case, a television could encode its sound output with a high-frequency pulse. This could be done at a high enough pitch that people could not hear it, but our robot could. In this way we would avoid false positives.

We still have the problem of the robot not being able to distinguish commands from background noise. So for now, we will also agree to mute the television and speak in a clear voice when talking with our robot.

The robot will also need to constantly scan for its name, or some command that would make it ready to receive commands. We could use contextual clues. For example, if you are inside a home (check the previous pages for that one) and you say, “Robot, get me a beer”, the robot won’t come back with a bear, or jump out from behind a corner flashing lights and screaming “boo!”, thus giving you a scare.

Over time you could read the robot pre-programmed scripts to help it “learn” your speech patterns, and we could use a system of “rewards” for when the robot correctly pairs your words with the commands you give it. In this way a robot could be trained to listen to vocal commands, and would at least work some of the time.

The above problem is one reason why I believe we do not have more voice activated devices in our homes at present. There is nothing more frustrating than when you call a company and have to repeat yourself over and over again only to have the computer on the other end of the phone say, “Please repeat.” or some other such nonsense.

So we now have an acceptable voice recognition program, and the robot can hear you say things like “Robot, get me a beer.” Hearing “Robot” gets the robot ready to receive a command. Next comes the hard part: “Get me a beer.” There are several ways for us to approach this programming challenge. The robot could have a large database of commands that it recognizes. That method has its limitations, though. For one thing, the robot will not understand the command if you change it slightly. For example, if you say “bring me a beer” instead of “get me a beer”, the robot may not be able to follow the command. The programmer will have to think of all of the possible things that a user might ask the robot for, and program in all of those commands. This also requires the user to learn which commands will work, and which commands will not be understood.
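
A quick sketch of the command-table approach makes the brittleness obvious. The commands and action names below are invented; note where “bring me a beer” ends up.

    # A command table of the "large database of commands" variety.
    # The commands and action names are invented for illustration.
    COMMANDS = {
        "get me a beer": "fetch_beer",
        "come here": "go_to_user",
        "be quiet": "mute",
    }

    def dispatch(utterance):
        action = COMMANDS.get(utterance.lower().strip())
        if action is None:
            # "bring me a beer" lands here, even though the intent is
            # identical. This is the brittleness described above.
            return "Please repeat."
        return action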

Another approach is to have the robot utilize a large database of single words. The robot would then have to figure out what the command is based on the definition of each word. Most programs that employ this technique look for patterns of words that commonly go together. You see this when you try to send a text message with your cell phone. The phone tries to predict what word you are typing based on the letters you have entered so far, and on letters that commonly go together. You can also see how often your cell phone gets things wrong. Pairing several techniques leads again to the common problem of false negatives or positives, depending on the sensitivity of the algorithms being employed.
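
The “words that commonly go together” idea can be sketched as a bigram model, one of the simplest forms of word prediction. This is a toy, under the assumption that our programmer has a pile of example sentences to train on.

    from collections import Counter, defaultdict

    def train_bigrams(sentences):
        # Count which word tends to follow which.
        model = defaultdict(Counter)
        for sentence in sentences:
            words = sentence.lower().split()
            for a, b in zip(words, words[1:]):
                model[a][b] += 1
        return model

    def predict_next(model, word):
        followers = model.get(word.lower())
        if not followers:
            return None
        # The most frequent follower, which is also the source of those
        # amusing text-message mispredictions.
        return followers.most_common(1)[0][0]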

Once we get past the problem of telling the robot what we want it to do, the robot still has to do what we asked. So you’re sitting on the couch. You say, “Robot, get me a beer.” The robot now has to figure out what a beer is, and where it might find one.

For the fun of it, we will stick with our fictional robot Plooby, which we have now fitted with voice recognition software. Plooby hears the command, and begins searching its database looking for an image that is associated with the word beer.

Again, here the burden falls on the programmer to have preprogrammed the robot with images associated with the word beer. Without such an image, a robot will not be able to identify a beer when it “sees” one. Unfortunately our programmer will not only have to create an incredibly robust vision system, which in turn can be easily fooled, but will also need to include a huge variety of images associated with the word beer. The setback here is that the more images a robot has in its database, the more likely we are to get a false positive, and have the robot return with a bottle of Worcestershire sauce.

The easiest solution would be to employ “preprocessing”, or “off-bot processing”. We could simply have a smart refrigerator. It would catalog each item placed within it, and note the location of each item. This could be done using barcode scanners mounted over the door of the refrigerator. Further, the refrigerator could be equipped with an internet connection and a subscription database of common groceries and their barcodes. This database would periodically update itself. In this way the refrigerator would not only know what a beer is, but would also know when you were about to run out. Your refrigerator could even generate a grocery list for you, or simply order your groceries from your local grocery store. The refrigerator would then be able to feed our Plooby the location of the beer, or, if it is really automated, pass the beer bottle off to Plooby as he comes over.
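
The refrigerator’s bookkeeping is the easy part, which is exactly why off-bot processing is attractive. A minimal sketch, with invented barcodes and thresholds:

    # A sketch of the smart refrigerator's bookkeeping. Barcodes and
    # thresholds are invented.
    inventory = {}  # barcode -> count on hand

    def scan_in(barcode):
        inventory[barcode] = inventory.get(barcode, 0) + 1

    def scan_out(barcode):
        inventory[barcode] = max(inventory.get(barcode, 0) - 1, 0)

    def shopping_list(minimums):
        # minimums: barcode -> how many we would like to keep on hand
        return [code for code, want in minimums.items()
                if inventory.get(code, 0) < want]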

Now of course, on top of the expense of our Plooby, we are requiring the user to purchase a robotic refrigerator. That said, we probably shouldn’t limit the abilities of our Plooby given that things change, and people will have to replace appliances at some point. So for this solution we will assume that robotic appliances are available, and will become a regular part of the home.

Now Plooby just needs to find our refrigerator.

For this we already have a solution. We can use a Northstar system, and we will preprogram our Plooby with a map of our home. The robot would then be able to identify certain places, and “know” where on its map certain things like the refrigerator can be found. Our programmer will have the burden of preprogramming associations such as “beer” and the location “refrigerator”. Again this can cause problems. The more we preprogram, the more our robot will have to sort through before it can begin to formulate its response to a command. And our programmer simply cannot account for every eventuality.

Again, here we should resort to off-bot processing. The smart home could maintain a database of the locations of various items. As items are moved around the home, barcodes or RFID tags would be scanned and entered into the database. In this way almost all of the items in the home could be readily located. Similarly, our Plooby could be fitted with a means of communicating with the house. It could, for example, send out a query: “Beer, location = 1$”, or in English, “Where is the beer?” The refrigerator would then respond, “Beer, location = refrigerator”, or again in English, “The beer is over here, stupid!”
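
Any structured message format would do for this sort of house-wide protocol. A sketch of the query and response, with invented field names:

    import json

    # Invented field names; any structured format would do.
    def make_query(item):
        return json.dumps({"type": "locate", "item": item})

    def handle_query(message, local_db):
        # local_db maps item names to places, e.g. "beer" -> "refrigerator"
        request = json.loads(message)
        if request["type"] == "locate":
            place = local_db.get(request["item"])
            return json.dumps({"type": "location",
                               "item": request["item"],
                               "place": place})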

Now our little Plooby will have to navigate his way to the refrigerator. This can be accomplished if Plooby knows where he is, using Northstar or another similar system, and knows where the refrigerator is. If both points are listed as x,y coordinates on a map of the house, then Plooby should be able to plot an efficient course from point A to point B. Unfortunately we are not using Plooby in a lab where the room is relatively square, and the map of the room is constant and laid out on a grid. In our world there are children’s toys left around, and chairs are not always pushed in.

So our Plooby will have to rely once again on his IR and touch sensors in order to avoid obstacles. Say Plooby is at point (3,8), and the door to the refrigerator is located at point (9,14). Plooby could simply move, and then check to ensure that the distance between point A, where Plooby is, and point B, where the door to the refrigerator is, has decreased. Now Plooby does not want to have to randomly guess a direction and go, so Plooby would begin by plotting a line between the two points and attempting to follow that line. But in a real home Plooby would soon discover that there are plenty of things blocking the simplest line between himself and the refrigerator. Plooby would therefore have to plot courses around these obstacles in order to reach the refrigerator. In my home this could be dangerous depending on where Plooby starts. If Plooby were in the laundry room and had to go to the kitchen, the straightest line would take him into a bathroom where the only way out is the way in. Plooby’s programmer would have to give Plooby the ability to backtrack in order to find a solution to that problem. Worse, Plooby would be doomed to repeat the same mistake every time that situation arose.

Better yet would be to use a node method, where the map includes possible pathways from one point to another. Now Plooby knows where he has to go, but is limited by his node map in getting there. Again we have the problem of unexpected obstacles. Our programmer would have to include a very robust system that allows Plooby to leave the pre-planned course when an obstacle blocks it, and return to the pre-planned pathway at a later point. And again we encounter danger. What does Plooby do if his path around an obstacle leads him into a trap? Let’s assume Plooby is in the laundry room, and is using the node method to get to the front of my refrigerator. Plooby encounters an obstacle just outside the guest room (our guest has dropped a towel in the hallway outside their door). In avoiding the towel, Plooby is steered into the guest room, another situation in which the only way out is the way in. Fortunately Plooby picks up the guest room’s Northstar system, recognizes that he has entered the guest room, and plots a course back to the last node...which is still behind the towel. Plooby could get stuck in this situational loop forever, and at some point I would get tired of waiting and go get the beer myself.

The above means that we not only have to give Plooby a node system, but our programmer must also allow Plooby to recognize that he is stuck in a loop, and attempt different solutions to the problem. So, when Plooby tries to get around the towel and ends up stuck in the guest room the first time, Plooby will then block off that pathway so that the next planned solution is different.
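
This “block the bad pathway and replan” behavior is a standard graph search over the node map. A minimal sketch, with invented node names; after the guest-room fiasco, Plooby adds the offending hop to a blocked set and plans again:

    from collections import deque

    def plan(graph, start, goal, blocked):
        # graph: node -> list of neighboring nodes (the node map).
        # blocked: set of (from, to) hops Plooby has learned to avoid.
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            node = path[-1]
            if node == goal:
                return path
            for nxt in graph.get(node, []):
                if nxt in visited or (node, nxt) in blocked:
                    continue
                visited.add(nxt)
                frontier.append(path + [nxt])
        return None  # no route left; time to ask the human for help

    # After the guest-room fiasco:
    # blocked.add(("hallway", "guest_room"))
    # plan(node_map, "laundry_room", "refrigerator", blocked)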

Here we again encounter the same threshold problem we have encountered before, and will continue to encounter over and over again. If Plooby has too many options when it comes to solving a problem, then Plooby may never reach the overall goal, because he runs the risk of creating more problems each time he solves one. Plooby may end up taking such a circuitous route to the refrigerator that we simply give up and get the beer ourselves. On the other hand, if Plooby does not have enough possible solutions, Plooby won’t ever be able to find his way, and our user will have to ensure that the house is completely clean in order for Plooby to complete our request for beer. Which is important, because we are thirsty!

What if we again applied our “off-robot processing” to the problem? What if the house simply plotted the course for Plooby? Now in addition to Plooby, our user has to purchase a new refrigerator and a new house. OK, that is an exaggeration. While this would be easier to install in a new house, it could all be accomplished through the installation of a smart home system, which would include the various sensors and control units required. If the house system employed video cameras mounted in the ceiling of each room, it could use the same method we used to remove the background noise in our voice command system. The user cleans the house and puts everything where it belongs prior to installation. The house then takes pictures of all of the rooms. These pictures are then subtracted from the video feed. In this way the house is completely blind to anything that hasn’t changed since the original installation of the system. This blindness, coupled with a map of the house, would allow our system to efficiently provide our Plooby with an almost foolproof way to go from point A to point B...which is where the beer is, and again...we are thirsty.
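
This is the image equivalent of our audio noise subtraction: frame differencing against a reference picture. A toy sketch, with an invented threshold:

    import numpy as np

    CHANGE_THRESHOLD = 30  # per-pixel difference on a 0-255 scale; invented

    def whats_changed(reference, current):
        # reference: the clean-house photo; current: a live camera frame.
        diff = np.abs(current.astype(int) - reference.astype(int))
        # True pixels mark toys, towels, and other new obstacles.
        # Everything that matches installation day stays invisible.
        return diff > CHANGE_THRESHOLD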

Now Plooby has gotten to the refrigerator. Here again we have to hope that our programmer was using all of his grey matter when programming our Plooby. Remember those sensors Plooby has for avoiding obstacles, and for ensuring that no other object and Plooby occupy the same space-time? Let’s say we are using infrared sensors for this purpose, and our programmer has instructed Plooby not to come within X number of inches of any other object. How does Plooby then violate that rule in order to touch the refrigerator and get the beer out? So now our smart refrigerator and Plooby will have to have an exchange of sorts.

Plooby: I believe I am in front of the door of the refrigerator.
Refrigerator: I believe you are outside of the door of the refrigerator.
Plooby: I am going to break rule number 3-098734 which states “do not bump into the refrigerator” in order to open the door.
Refrigerator: I am now waiting to see when you open the door.
Plooby: Opening the door...now.
Refrigerator: Confirm door status = open.
Plooby: Where can I locate the beer?
Refrigerator: The beer is on the bottom shelf.
Plooby: Is this the beer? (Plooby uses his trunk to hold up a bottle.)

While to the casual reader new to robotics the above seems like a simple exchange, each of the above challenges and responses would require a great deal of programming. For example, how does Plooby know how to hold a bottle? For you or me it may be as easy as closing our fingers, but we have a highly developed sense of touch, and highly dexterous fingers.
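
Stripped of its personality, the refrigerator’s half of the exchange is just a small state machine. A sketch, with invented state and message names:

    # The refrigerator's half of the exchange, reduced to a state
    # machine. State and message names are invented.
    def fridge_step(state, message):
        if state == "idle" and message == "robot_at_door":
            return "awaiting_open", "confirm: you are outside the door"
        if state == "awaiting_open" and message == "opening_door":
            return "door_open", "confirm: door status = open"
        if state == "door_open" and message == "where_is_beer":
            return "door_open", "the beer is on the bottom shelf"
        return state, "please repeat"  # anything unexpected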

The cost of our Plooby has now reached astronomical levels. Again, there are simpler ways to accomplish these tasks, but we are looking for an exceptionally high success rate. And let’s not forget the original point, which is that the robot needs to be able to obey orders given to it. What if we asked for a towel, or more toilet paper? Each of these new problems would send our poor little Plooby and his programmer into hysterics.

Also, we are beginning to rely more and more on off-robot processing. This creates a dangerous situation in which our robot requires more and more sacrifices on the part of the user. Does our user want a robotic refrigerator that keeps track of his alcohol consumption? Will our user be willing to have cameras installed in the ceilings of all of their rooms? Will our user be willing to have a Northstar system in every room?

The point is that while all of these things are possible, at some point this becomes so ridiculously prohibitive that our user would rather just get the beer themselves, or better yet, install a dorm fridge next to the couch.

All of these problems, and we still haven’t gotten to the first law.

scorch
07-16-2008, 11:31 AM
Okay,

I printed out the whole entire few pages that you wrote and they are amazing! I now have a better image of how complicated robots really are when it comes down to asking for a beer lol. I thought everything was written very clearly and everything made perfect sense. I really enjoyed it! I hope you write more of your book soon! Awesome job!

Matt
07-16-2008, 11:42 AM
Awesome project DarkBack :) I too will print this out and read later. Looking forward to when it's done. Have you looked into print on demand services like LULU? Great way to self publish.

(Um: Has anyone noticed that Educational is misspelled in the title of the thread? LOL)

darkback2
07-16-2008, 11:42 AM
Thanks for the kind words. I have my pot of coffee, and provided I can avoid having too many distractions, I hope to have the first law part done today.

Adrenalynn
07-16-2008, 11:54 AM
Awesome project DarkBack :) I too will print this out and read later. Looking forward to when it's done. Have you looked into print on demand services like LULU? Great way to self publish.

(Um: Has anyone noticed that Educational is misspelled in the title of the thread? LOL)


Another brilant mined runed by edjumacashun

I'll find some time late this week to read it over as well!

darkback2
07-16-2008, 01:48 PM
thats why I teach biology, and not english. My wife is the english teacher. She must have let that one slip past. I'll yell at her for it. :p

darkback2
07-17-2008, 12:29 AM
I have the rest of chapter two complete with the exception of the story about how Plooby gets destroyed by my niece. My wife the english teacher has forbidden me from posting any part of the book without her pre-reading it. So...you will all have to wait a bit longer.

Tomorrow I will probably attack chapter one...but then again given the wonderful thread on what a robot is that Matt posted so kindly maybe I'll tackle chapter three.

DB

darkback2
07-19-2008, 10:29 PM
I'm a bit stuck. I'm writing the third chapter, the one about what makes a robot a robot. The problem is that I read Matt's thread on the subject, and I think my mind has been polluted by all of you people.

DB

metaform3d
07-20-2008, 12:37 AM
I'm a bit stuck. I'm writing the third chapter, the one about what makes a robot a robot.Recall there's no right answer. In some ways pondering the question is more important. You could simply try out some possible definitions and show how one that's too open allows everyday appliances to be robots, while one that's too strict excludes things that we already think of as robots. For that purpose the thread is a goldmine.

darkback2
07-20-2008, 12:42 AM
Yeah, Thats sort of what I have been doing. I'm just suffering from writers block because I can't get all of the sections of the chapter to "glue" together. I am covering everything from robots that make pizza to light switches at current. Oh...yeah, and putter bot comes along and totally ruins my train of thought.

I actually went out to my shed and started a putter bot of my own.

darkback2
07-20-2008, 07:22 PM
I think chapter three is done. I'll have to have my wife tear it apart before I can share it with you. This one was hard. Ok...your first experience with robotics. Think hard now. I need some good stories.

DB

darkback2
07-20-2008, 10:26 PM
Ok...here is chapter 03

“Here we are in the third chapter of this book, and amazingly enough we haven’t tried to define the very thing this book claims to be about. So let’s do some exploring and see if we can’t figure out exactly what it is we are talking about:

What is a robot?

When I was much younger I would have had an easier time answering that question. I would have drawn you a picture of a boxy looking thing with wheels and arms and a block for a head, maybe with a couple of antennae sticking out the top. But now? That question, which on its surface seems so simple, becomes more and more difficult to resolve the closer you look.

“A robot is a mechanical or virtual artificial agent. It is usually a system, which, by its appearance or movements, conveys a sense that it has intent or agency of its own. The word robot can refer to both physical robots and virtual software agents, but the latter are usually referred to as bots to differentiate.1”

The above is the definition of a robot taken from Wikipedia, an online source in which users generate the definitions of words through general consensus. If all we were looking for was the definition of the word robot, then we could end the chapter here. What if instead we were trying to figure out what a robot was good for? There are some things that we could easily program a robot to do for us, and others where the solution proposed by the robot would be more complicated than the problem the robot solved. By discerning which tasks are at the boundaries of this new question, we can help to better define what makes a robot a robot.

When I was younger I had the greatest vision for the future. I had begun reading Isaac Asimov’s robot stories, and realized very quickly that for the most part they were written from the standpoint that robots should be feared. I watched movies like Terminator and Alien, and read books like 2001, and all of them had robots that went crazy and killed people, or through their indifference and “logic” ended up ruining everything. Around the same time, we began debating the use of robots in automotive plants. I remember people feeling angry that robots were replacing factory workers. At some point a study was done showing that robot factory workers made fewer mistakes than their human counterparts, and that automobiles manufactured by robots were less likely to have mechanical problems than their human-made counterparts.

Reading that article set a lightbulb off inside of my head. I had a dream that when I grew up I would not have to work. In my dream, when a company hired me to work for them, I would buy a robot and bring it to work with me. I would then train the robot to do my job for me, and once it was trained, I would not have to go to work. I would be free to pursue artistic endeavors- free from the tyranny of labor. Free!

I envisioned all of humanity living this way, where people would only work as long as it took to pay for and train a robot. Then everyone would be free to explore the betterment of themselves. Unfortunately, my vision has not yet been realized. I go to work almost every day, just like everyone else. Unless of course this book becomes a best seller and I make millions of dollars, and can retire to my house, and life of leisure.

There is of course a major flaw in my vision. Why would the company hire me in the first place? Why wouldn’t the company simply purchase a robot? They could hire a small number of people to train the robots, and would never have to hire another person again. Everyone except for the few who trained robots would be out of a job, leaving us all starving in the streets.

Now there are some things that we do not think robots can do well. For instance, it would be incredibly difficult to imagine a robot that could write a meaningful book about robotics that we wouldn’t find boring to read. Hurray, I’m safe! So perhaps instead of writing a chapter on what a robot is, maybe I should write this chapter on what a robot isn’t, or at least isn’t good at.

When I was in the second grade I travelled with my family to Nigeria. Shortly after our arrival I met some children who were about my age. I began experimenting with ways to communicate with them. I would listen to the sounds one child made when talking to a second child. I would then try to mimic those sounds with a third child, and listen to the responses I got back, comparing them to the responses made by the second child in my little experiment. Invariably the responses not only did not match, but also led to some pretty colorful reactions on the part of the other children I was trying to communicate with. The lesson learned: communication requires understanding. A robot or computer has very little capacity for understanding, which limits its ability to communicate.

In college I experimented with creating robots and software applications that would compose music. The programs would use sets of rules to randomly create a musical composition. People regularly reported feeling that the music created by these programs lacked “feeling”. That is to say that there is something in the way people compose and perform music that other people can key into. Art is, after all, a form of communication.

Understandably, most people didn’t find the music my robots created much more pleasing than they would have the sound of a cat walking across a piano. Some unlucky listeners couldn’t tell the difference.

Let’s look at a classic thought experiment from the discipline of artificial intelligence known as the Chinese Room, often discussed alongside the Turing test.2 A man is locked in a room. Periodically people insert cards through a slot in the door. On the cards, the people have drawn pictures. The man has a set of rules that he uses to draw his own pictures on new cards, which he then passes back through the slot. If the rules are sufficient, the man could carry on whole conversations in a character-based language such as Chinese without being able to speak a single word.3 In this case does the man become a robot? The man is taking the place of the mechanical aspects of a robot. To the user (the person slipping the cards through the slot), the box in which the man is locked could seem to have an agency or intent of its own, and in the same sense, there would be no understanding.

Eventually, no matter how great our rule set, our room would fail to trick the user into thinking that a person who actually spoke the language was inside. One reason is that language changes over time. Another reason is that people are inherently unpredictable. Remember our Plooby example, where Plooby’s only mission was to stay out from underfoot. That system would not work all of the time. If our user got up in the middle of the night, Plooby would have to choose between the user going to the bathroom or going to the kitchen. What happens when the user goes to the car to get something they forgot the night before? Plooby would have no way to process this unexpected development. Similarly, our room would have only a finite number of possible responses. Eventually the illusion would break down. Suppose our programmer simply increases the number of rules from which the room develops its responses. This would only increase the delay between submitting a card and getting an answer, and still our user would stumble onto a subject that our programmer had failed to think of, or state something with a syntax that our machine simply would not understand. So, it is safe to say that there are certain tasks that robots simply aren’t good for, and communication may be one of them. Just think of how you felt the last time you tried to contact a company and a robot answered the phone.
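
To see just how finite the rulebook is, reduce it to a lookup table. The card names here are invented; anything not in the table gets no reply, and that is exactly where the illusion breaks down:

    # The room's rulebook as a lookup table. Card names are invented.
    RULEBOOK = {
        "greeting_card": "greeting_reply_card",
        "weather_question_card": "weather_reply_card",
    }

    def room(card):
        # Returns None for any card the rules never anticipated,
        # and that is where the illusion breaks down.
        return RULEBOOK.get(card)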

Are there other things that robots would inherently struggle with? Could a robot prepare a meal for example?

When you ask that question, people invariably think of Rosie from The Jetsons. That said, robots probably prepare a lot more meals than any of us would like to admit, especially because some robots do a better job than we do. Take for example the selection of partially or completely prepared foods in your local grocery store. If you were to travel to the factories in which these foods are made, you would see an ever growing number of robots taking the place of the people who used to work there.4 Is there a difference between what a robot does in a factory and what a chef does in a restaurant? Which is more appealing, and which is more practical? Factories churn out food with so little human involvement that in some cases entire factories can run with the interaction of only a handful of people.5

So, what would it take to have a robot capable of preparing a passable meal? Enter Plooby, wearing a chef’s hat.

Our human user leaves for work, fully expecting to come home to dinner warm and on the table. While it would be much simpler to create a robot that can prepare one single set dish, or a couple of different meals, our user would become quite bored eating the same thing all the time. It would be much more interesting for our robotic chef to create a myriad of dishes based on the available ingredients. In a smart home, the refrigerator could keep track of what is available. Our refrigerator could even keep track of which foods are nearing their expiration dates, so that we wouldn’t waste food unnecessarily.

Plooby interrogates our smart refrigerator for a list of possible ingredients. Plooby then compares what is available with a table of meals that he knows how to prepare. Then, using a set list of rules, Plooby could begin making our user a meal.
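
The lookup itself is the easy part. A sketch with a couple of invented recipes shows how little of cooking this actually captures:

    # Invented recipes; "available" and "use_soon" would come from the
    # smart refrigerator as sets of ingredient names.
    RECIPES = {
        "omelet": {"eggs", "butter", "cheese"},
        "stir fry": {"rice", "chicken", "soy sauce"},
    }

    def cookable(available, use_soon):
        # Keep only dishes whose ingredients are all on hand...
        options = [name for name, needs in RECIPES.items()
                   if needs <= available]
        # ...and prefer dishes that use up expiring ingredients.
        options.sort(key=lambda name: len(RECIPES[name] & use_soon),
                     reverse=True)
        return options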

If you read the above paragraph, this all sounds incredibly easy. Cooking is no more complicated than mixing a few things together and heating them up. To be honest, pretty much anyone can follow a simple recipe, but not everyone can be a successful cook. One major problem is that a robot lacks a sense of taste. We could have sensors that analyze the chemical content and pH of the meal being prepared, but what would all of this information add up to? How might we define “tastes good”? What set of rules adds up to “Mmmmm”?

What would it take for our robot to be able to do delicate things? How, for example, would we get Plooby to make eggs sunny side up? Think about the complexity involved in holding an egg without breaking it. Think about the complexity and sensitivity required to then take that egg and crack the shell without breaking the delicate yolk inside. Now perhaps I am being a bit cruel to poor Plooby. I fail to keep the yolk intact about half the time I crack an egg.

Now suddenly cooking a meal becomes something wholly different than just cranking out frozen foods. There are creative decisions that need to be made throughout the process of preparing food that would be difficult to sum up in a set of rules that a robot could follow.

In Japan, roboticists are employing robots to take care of the ever-aging Japanese population.6 The robots do everything from helping to feed people to serving as nurses that remind people to take their medications on a regular basis. What if we looked at the other end of the age spectrum? What would it take for Plooby to serve as a companion for a child?

A baby sitter’s job at its surface seems simple. Take for example a child old enough to walk, but young enough that it could not be left alone for any reasonable amount of time. The parents hire a babysitter so that they can go out for a few hours. While there, the babysitter plays games with the child, gives them something to eat, and then dutifully puts the child to bed at a reasonable hour, checking in every hour or so to ensure that the child is okay.

This seems like a perfect application of technology, but would most likely fail miserably. Children bond with human babysitters in ways that they do not bond with machines. A human babysitter will respond to a dynamic situation differently than a robot. A programmer can only think of a finite number of contingencies for dealing with a given situation. A human can creatively come up with solutions to problems that a robot cannot. Further, a human being is physically and emotionally alive. A robot could mimic emotional responses but only through a suspension of disbelief are we able to accept those emotional responses as real.

When presented with a Sony Aibo, a robotic dog, children were able to recognize that it was not alive.7 The children also suggested that when the robot dog broke, it should be recycled or thrown away, and not buried. While this doesn’t speak to the original question of what a robot is good for, it does suggest that children would not respond to robots in the same way they would respond to a human caretaker. A child in the care of a robot may not be able to see the robot as a disciplinarian. For the child, there would be little downside to disobeying a robot, because the child would recognize the lack of true emotional response on the part of the robot. Without immediate human intervention, a robot would quickly lose control of a child. Take for example PaPeRo, a childcare assistance robot designed by NEC’s robotics development group. PaPeRo is an extension of the R100, a robotic platform designed for use around the home. PaPeRo is small and colorful, and has big eyes that can track and recognize faces. Children may care about a robot such as PaPeRo, but because they do not perceive it as a living thing, they will not follow its advisement when that advisement conflicts with their own desires. In the future, robots like PaPeRo may function to lower the staff-to-child ratio at childcare centers,8 but they will not replace the childcare worker.

Let’s look back at Plooby, and consider how Plooby could respond to a growing child. Plooby’s programmer would have to set up rules through which Plooby could deal with the daily changes that young children experience. Young children are constantly discovering new behaviors to explore. I have a nine-month-old nephew. He is now learning to crawl. Oddly enough, he currently crawls backwards. Poor little guy, he works as hard as he can to go in one direction, but his movements add up to the exact opposite of what he is aiming for. How might Plooby respond when my nephew suddenly begins moving in the right direction? If Plooby were expecting my nephew to move backwards as usual, he might check to make sure the wrong direction is “safe” for the baby. While an intelligent programmer would program the robot to ensure the baby remains a certain distance from any hazard, this could prove to be an impossible task given the unpredictable nature of the baby’s motion.

Robots do a good job of dealing with expected input. A robot would lack the programming to deal with constantly changing behaviors.

This brings to light some of the inherent differences between people and robots. People do unpredictable things which sometimes lead to beneficial outcomes. A robot following its program will not do unpredictable things, because the things the robot does are all the natural outcomes of its programming.

So now, we have to add to our definition of what a robot is...or at least what it is not. A robot is an artificial construct which responds to changes in its state or the state of its environment. Robots inherently lack individuality and are replaceable.

Let us take a moment to look at how that replaceability makes a robot a desirable construct for completing dangerous tasks.

A high replaceability factor makes robots desirable to the military. Robots are disposable, and a damaged robot does not need to go into any fatality reports. The United States military will be expanding to a ratio of one robot per twenty-nine soldiers by the year 2014, and the robotization of the military does not end there.9 There are militarized robots being tested for everything from unmanned aerial assaults to robot mules that can carry munitions into dangerous areas.

Here we arrive at another problem in our definition of the word robot. A large percentage of the constructs the military calls robots do not operate autonomously. Unmanned Aerial Vehicles (UAVs) are all under the control of an operator. These robots can operate in a range of modes, from fully automatic patrolling, moving from one preprogrammed waypoint to another, to fully remote-controlled operation during engagement. Does this mean that while a UAV is patrolling on its own it is a robot, and when the operator takes control it ceases to be one?10 It would be silly to refer to a UAV as a robot one instant because it is flying on its own, and not a robot the next instant because it is being controlled by an operator. Some roboticists have suggested that we view robots as existing on a continuum of robotiness.11 Robotiness is an attempt to describe a machine’s ability to act on its own. Most machines would fit somewhere on the spectrum of robotiness, though most are not robots. Machines that are fully autonomous would be considered to have a high level of robotiness, while machines that are controlled through remote operation would have a low level of robotiness. In this way a UAV would simply be a robot that exhibits more or less robotiness at any given moment, respective to the amount of autonomy it is displaying.

If you look at the above passage, we have described a number of machines as robots, and still not defined the word. The word “robot” invokes images of blocky mechanical things that move around in fits and starts. To call someone a robot is to say that the person lacks feeling, or does things in a precise manner, without error.12 So socially we expect robots to both do things perfectly, and to do them without feeling. In our collective opinion, a lack of feeling is one factor that makes a robot a robot.

We have been looking at one extreme of what a robot might be. PaPeRo, Plooby, and even a UAV are all machines that fit very nicely into what most people would consider to be robots. These machines fit our collective mental image of a robot. What if we look at the other end of the spectrum? Let’s take some time and look at things that don’t fit our mental image of a robot. How simple can a robot be, and still be a robot? How little robotiness might a robot feature and still fit into the category? We could argue that a light switch is not a robot, because a light switch does not appear to have agency of its own. So what about an automated light switch, which has an infrared sensor and turns on the lights automatically when a person enters a room? If you have ever been sitting still in a room with an automated light system and had the lights turn off on you because you hadn’t moved in too long, you know that these light switches appear to have an agency of their own.

An automated light switch does not fit our collective image of what a robot is. I suppose I am arguing that the word robot conjures an image in our collective schema, and the image is probably not that of an automated light switch. We are surrounded by automachina such as this, but not many of us think of these things as robots. We as a society have, perhaps unconsciously, defined a robot as something that moves under its own power. An automatic light switch does not move, therefore according to our collective psyche it is not a robot; the Roomba robotic vacuum, which does move, is.

Perhaps this collective definition is based more on the roots of the word “robot”. The word was first used in the play Rossum’s Universal Robots.13 It comes from the Czech word for forced labor, robota. In the play, robots that can be mistaken for humans work as servants. The robots later rise up against their human oppressors, causing the end of humanity.

This play speaks both to our definition of the word robot and to our fear of robots. We think of a robot as something that does work for us. While current robots seem to excel at doing simple tasks, tasks that are too simple do not fit our mental image of a robot.

For example, a toaster turns off automatically when the bread has reached golden brown. Some toasters use thermostats to accomplish this task; others use timers. In either case, the toaster is meant to turn itself off when the bread has been toasted. We do not see a toaster as a robot because the task it completes seems too simple. Yet if we look back at the official definition of a robot, the toaster fits. A toaster appears to work through its own agency. A toaster is constantly measuring something, be it time or temperature, and based on the processing of that information it eventually turns itself off.

So what do all of the things we have described so far have in common? Simply put, all of our “robots” are artificial constructs that have the capacity to take in information about the world around them, process that information in an appropriate manner, and then act on that information through their own agency.

An automatic light switch takes in sensor data from a motion detector, processes that information by deciding whether there is someone in the room, and then acts on that information by turning the lights on or off. Similarly, in the Chinese Room, our man in the box takes in cards, consults his rule book, and creates new cards. If we replace the man with a machine capable of printing pictures, our room becomes a robot.
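
In fact, the entire automated light switch fits in a dozen lines. A sketch, where the timeout is invented and the sensor and relay functions stand in for whatever hardware is attached:

    import time

    TIMEOUT_S = 300  # five minutes of stillness; an invented value

    def light_switch_loop(motion_sensed, set_lights):
        # motion_sensed and set_lights stand in for real hardware.
        last_motion = time.monotonic()
        while True:
            if motion_sensed():                       # sense
                last_motion = time.monotonic()
            idle = time.monotonic() - last_motion     # process
            set_lights(idle < TIMEOUT_S)              # act
            time.sleep(0.1)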

What about our simpler battlefield robots? For the most part, these robots are controlled remotely by an operator. At what point do they exhibit so little robotiness that they are no longer robots at all? I sit behind the wheel of a car. I use the pedals to make the car go forward and stop, and the wheel to make the car change directions. I would have a very difficult time selling people on the idea that a car is a robot. It has so little robotiness that including it would make our definition all too broad. By extension, a simple battlefield robot being controlled by an operator has about the same level of robotiness as my car does. A battlefield robot simply fits our mental image of a robot better than a car does, because we cannot always see the operator of the battlefield robot, extending the illusion that it is moving under its own agency.

Let’s consider the behavior of the Packbot by iRobot. The Packbot does take in information: user input from the operator remote-controlling it. The Packbot then interprets that information, converting joystick commands into movements, and acts on those inputs. The Packbot, while fitting our mental image, has a very low level of robotiness. Unfortunately, if we loosen our definition to include the Packbot, then we have to ask ourselves about a simple light switch. A light switch takes in information from the user, and then uses that information to change the state of the light bulb. While the appearance of self-directed agency is lost in the light switch and remains in the Packbot, a Packbot is really just a more complex version of a light switch. We therefore have to conclude that while a Packbot gives the illusion of being a robot, it is not, or at the least, it has as low a level of robotiness as a car or a light switch.

So now automatic light switches struggle to fit our collective mental image of a robot while exhibiting more robotiness than a Packbot, which appears to fit our mental image much more strongly.

How can we rectify this discrepancy? Do we throw out our entire definition and start over? Or do we simply expand our definition of a robot?

Perhaps it would help for us to begin by figuring out exactly what our collective mental image of a robot is. Let us consider how we formed our own personal mental image of what a robot is. For me it probably started with cartoons such as The Jetsons; later it was things like Transformers. For people like my father, the mental image of robots was formed by movies like The Day the Earth Stood Still, where an alien comes to Earth and tells people they have to live peacefully or be destroyed by a giant robot.14 And let us not forget one of the greatest influences of all, the Star Wars movies. Over one billion people have seen Star Wars in a theater. That doesn’t include all of the people who have seen the movie in any of the other myriad formats in which it has been produced, not to mention the video games, cartoons, comic books, fiction novels...I could go on for a while with this one, so I’ll just stop here. These movies shaped our collective idea of what a robot should be, so much so that the field of home service and hobby robotics was greatly affected by it.15

So if someone said to you, “Close your eyes and think of a robot,” what might the picture in your head look like? Also, let us consider the similarities between what you see and what someone else might see. For me, the quintessential robot can move. If I had started out chapter 2 talking about how a smart house is a robot, you might have closed the book then and moved on. “Real” robots move. My vision of a robot has a head, usually more blocky than not, and it would have some sort of eye or eyes that glowed or lit up. No matter how hard we try, our collective mental image of a robot has been shaped by media. Let’s remember that the origin of the word comes not from science, but from a play. Certain things just don’t make for good movie characters. R2D2 and C3PO are both incredible movie characters. An automated light switch, on the other hand... Because of this discrepancy, our attention has been drawn to only one aspect of robotics.

Does a robot have to be automated in order to be consistent with our definition? Remember, earlier in the chapter we talked about robotiness, and how a Packbot, which is controlled by an operator, has a low level of robotiness. Interestingly, a Packbot displays a great deal of animation. Packbots carry the illusion of being independent agents. Therefore, Packbots fit our definition. Is a Packbot any different than a remote control car? Both are controlled by an operator. Some people would argue that the skills required to use both are similar. But here the illusion is mildly tempered, because a remote control car looks like something we normally wouldn’t think of as a robot. Now let’s take that a step further, and consider a tethered remote control car. That is the cheap kind you probably had as a kid, with a wire going from the controller to the car. These used switches in the controller to directly control the voltage going to the car’s motor and steering. In this case, the switches could be said to be no different from manual light switches.

Now I believe we are all in agreement that a light switch falls outside of our definition of a robot. So how is it that an RC car, and by extension a Packbot, something that operates in much the same way, is considered a robot?

Look back at one of the original concepts presented in this chapter: a robot making a meal. I stated that robots were taking the place of people on assembly lines, and made the claim that robots make a large amount of the precooked food we enjoy so much of. The robots working in, say, a pizza factory do not operate much differently than a light bulb and switch. An operator presses a button, or turns a knob, that starts the factory line running. The robots on the factory line will continue doing the same repetitive task over and over, churning out one pizza after another, until they run out of ingredients. Many of these robots are no more aware of their environment than a light bulb is.

Is it simply the fact that these robots do work that would normally be done by a human being? A Packbot being manipulated by a soldier to defuse a bomb is taking the place of a human being. The fact that a different human being with a different skill set is required to operate the Packbot makes no difference. A light switch and bulb have a one-to-one relationship. There is no way to turn on the light other than to close the circuit the bulb is on. Flicking the switch and closing the circuit are the same thing. Manipulating a robot and defusing a bomb are two different tasks. There is not a one-to-one relationship. Therefore the Packbot is a robot, and the light switch is not. Similarly, turning on the pizza-making robots and making pizza are two different tasks. This again is what makes the factory line robot different from a light switch.

Can we draw a firm boundary? Can we sort everything into being a robot or not being a robot? Fortunately, I think not. I think there will always be a grey area. There will be things that some people feel are robots and others don’t, and neither person could give you a firm reason why they are right or wrong. As more and more of the things we use become in some way automated, I believe this line will only get greyer. So...on that note...good luck.

metaform3d
07-21-2008, 02:29 AM
Just a couple of small notes...


Lets look at a classic example from the discipline of artificial intelligence known as the Turing test.2 A man is locked in a room. Periodically people insert cards through a slot in the door. On the cards, the people have drawn pictures. The man has a set of rules that he uses to draw his own pictures on new cards which he then passes back through the slot.

I think you're confusing -- and indeed conflating -- the Turing Test (http://en.wikipedia.org/wiki/Turing_test) with Searle's Chinese Room (http://en.wikipedia.org/wiki/Chinese_room). The former is designed to be an objective test of conversational consciousness in machines, while the latter is a thought experiment intended to refute the very possibility of same. It's a bit of a dog's breakfast to combine them seamlessly in the same paragraph.


Robots do a good job of dealing with expected input. A robot would lack the programming do deal with constantly changing behaviors.

Why's that?

Adrenalynn
07-21-2008, 03:00 AM
I concur, Meta. Turing had several thought experiments, and that certainly wasn't one of them in any of my past readings - and it's key to Searle's Room.

darkback2
07-21-2008, 10:46 AM
Two things,

After looking up conflate, I would have to agree...that said, I don't think I'm the first. I've read the two things merged together else where first. I'm working to extract the two as I write this.

Second, Adrenalynn is offline? She is never offline. The sky must be falling!

DB

Adrenalynn
07-21-2008, 11:53 AM
I do get to sleep occasionally ;)

darkback2
07-21-2008, 12:13 PM
Hey...you're back.

Thanks for reading through the chapter. I'll keep posting them as I get them done. Actually, I think based on the outline, I have one more piece of candy (a chapter that seems easier to write). The rest will be pretty hard to maintain interest in / make interesting.

I think I'll work on sensors next, though I'm thinking maybe what a hobby robotics shop might look like.

DB

ooops
07-21-2008, 05:13 PM
DB,
Ch#3
Just for fun I looked up "robotiness" ... couldn't seem to find it. You may want to send that over to Webster. You have used it in such a way that I am confident that it should be added to the dictionary:)
That was about as good a splanation of "what is a robot" and what is not, that I have seen. Probably should link this to the other thread on the subject.
FYI - I will buy the book, but I want mine autographed!
Ed

darkback2
07-21-2008, 06:44 PM
Can't take credit for "robotiness". That came from Stuart Ferguson's (Metaform3d) post on the other thread, though I wish I could take credit for it.

The sources that I'm using are in the footnotes which you don't get when you copy text into a post.

Hopefully you can get that autograph at Robogames 2009.

DB

metaform3d
07-22-2008, 01:49 AM
It's just the adjective form of the word "robot".

On re-reading I see where you were trying to go with the Chinese Room. It was just confusing because it seemed as if you were saying that was the Turing Test.

I think the second part of your chapter 3, once you get into the meat of the discussion, reads well. The first part I think suffers from a bit of an unfocused argument, as well as some unjustified preconceptions about the limits of electronic brains (for lack of a better word). God knows it's easy enough to get unexpected behaviors from a robot -- that's what we spend all our time trying to reduce. Someday, perhaps when we have the basics down a little more pat, we can tease out that element of the unexpected to help our robots deal with novel situations.

darkback2
07-22-2008, 02:32 AM
This one was hard to get into...I feel like I did come into my stride in the end there.

Also, I wasn't suggesting we don't get unexpected behaviors from robots. I was suggesting that robots are not likely to do things they are not programmed to do, and no programmer is likely to think of every possible eventuality. I realize it isn't impossible, PaPeRo, one of the robots I talk about will be employed in Japan taking care of actual live children. Heck, a Teddy Ruxpin could probably do a decent job of taking care of a child for a little while...

Good thing I don't have kids.

DB

ooops
07-22-2008, 10:29 PM
a Teddy Ruxpin could probably do a decent job of taking care of a child for a little while...

I don't know about Teddy R. But an Xbox / PS / Wi will keep the average 8 - 80 year old kids out of trouble for hours:)

darkback2
07-22-2008, 10:32 PM
There was a pretty cool art installation at California Institute of the Arts involving about 4 hacked teddy ruxpins. They said all kinds of bad words in response to a dirty movie. It was quite funny.

As for game consoles...you can get into quite a bit of trouble with one of those things.

DB

scorch
07-25-2008, 11:06 AM
So, when are you going to finish chapter 4. Sorry, lol. I just love your book so far hehe. No rush hehe. I will be waiting to read your next chapter.

Good Luck!

darkback2
07-25-2008, 01:18 PM
I think I'm going to skip to chapter 6 or so...There was a post about solar power, and we all seemed to sort of blow off the idea...then I was thinking about it, and the mars rovers are powered by solar power...so I figured it was possible...way out of any of our budgets...but possible.

The main problem I'm having is that those chapters are really the part of the book that will be like every other robotics book out there...and I don't want to do that. So how do I make something that is fun to do...but boring to read about...fun to read about?

That is the problem I'm having right now. I'll probably start actually writing today...but my best friend from highschool's sister is coming to visit this afternoon...

Oh...and I spent a few days trying to get my hexapod to walk without training wheels...

DB

scorch
07-25-2008, 03:19 PM
Well, my advice to you is be creative and remember that some of your book will be like other books out there. The important part is that most of it is different. Maybe, you can make this one chapter like other books. Then the rest you can create yourself. Anyway, can I see pictures of your Hexapod? He sounds cool hehe

good luck again!

P.S. Remember there is no rush. I was just kidding around with ya hehe.

revoltage
08-10-2008, 12:19 PM
Wow Db good luck with your book. It's awesome. We had quite a debate in this tread :) It's a good mind exercise. This forum is awakening my long dead neurons guyz:)

archcvd
08-21-2008, 03:49 PM
Maybe I should join you darkback2 in the mechanical side of robotics as this is my forte in robotics.I have lots of hi res pics of robotic construction of different projects and different materials.

I can never get enough of seeing pictures of people's builds; I think that would be great. I'm especially a big fan of the small details like how everything is fastened, the type of materials used, why things were placed in such a way and the kinds of tools used. :veryhappy:

darkback2
09-01-2008, 06:13 PM
I'm going to post what I have so far of chapter six. This chapter needs (a) a lot of work, and (b) to be edited by my editor...my wife... It also needs diagrams and pictures to better explain what I mean. This chapter took me a really long time to get to this point because it had the unsettling problem of sounding either like the stats from a pin-up or a step-by-step how-to. I believe it makes reference to the appendices, which have only been written in part at this point.

Enjoy,

DB

Chapter 6

At the time of writing this book, two rovers, or wheeled robots, have been exploring the surface of Mars for over four years.1 I look at the previous sentence, and then I look over at my own personal rover collection and wonder where I went wrong. Four years is an incredible amount of time for a robot to be running around. To be honest, the major differences between the Mars rovers, or even the rovers developed by the military, and anything you or I could build are money and manpower. The Mars rovers are tended to around the clock by a team of expert engineers. My robots are tended to by...well...me. So I suppose I should give myself, and you, a bit of a break.

Before you read the rest of this chapter, do yourself, and those around you, a huge favor. Turn to appendix A and build the robot presented on those pages. If you don't find that design appealing, then pay a visit to the websites listed in the links section of this book. The people who run those sites will be more than happy to get you started building a robot. Throughout this chapter I will make reference to different tests and projects that you can do using your robot. Having a robot on hand will make the reading much more enjoyable. On that note...here we go.

Looking out at nature, one wonders why there are no animals with wheels. Think of how fast a cheetah could "run" if it were on a set of low-profile Michelins. Of course, an animal is highly unlikely to ever evolve wheels.2 In the world of robotics, though, wheels offer the builder a number of advantages over other designs. With a wheeled robot, weight is not as much of an issue, which means a rover designer is free to carry a larger amount of power, sensory, and other equipment on board. Rolling robots can be easier to balance than other designs, and, due to the one-to-one relationship between actuators and movement, rovers can be relatively inexpensive to build while maintaining a high level of reliability.

For the purposes of this discussion, we will reduce a robot to something that consists of the following: a power source, sensory input, data processing, actuation, and a structure or framework to contain all of this. Analogous parts can be found in living things, like you. Your power source is the food you eat and your digestive system. Your sensory input comes from your nervous system: your eyes, ears, nose, mouth, and skin. Your data is processed by your brain, your actuators are your muscles, and, finally, the framework that contains all of the above is your skeleton. So, if you have had children, then you've already made your first robots...now if only you knew where the manual on programming them went.

At the Jet Propulsion Laboratory in Pasadena, California, engineers and scientists have been working with two rolling robots for the past several years. The robots are named Spirit and Opportunity, and are currently exploring the surface of the planet Mars. These six-wheeled robots, while much more complex, operate in much the same way as the little rover you may have built using appendix A. The power module in the Mars rovers is based on a solar array coupled with two lithium-ion batteries.3 The solar array is designed to produce about one hundred forty watts of power during peak sunlight hours. Part of the energy produced by the solar array is used to run the robot, and the remainder is used to charge the batteries.4 This allows the rovers to function on cloudy days and at night.

Spirit and Opportunity use six motors to turn the wheels, plus four steering motors.5 The steering motors can turn the front and back wheels, allowing the robot to turn in place and make gentle S-turns around obstacles.6 The front wheels can operate independently, and the two rear wheels operate in unison, so that the front two wheels can be "dug in" to increase traction on loose debris. Due to their suspension system, the rovers can function at an angle of forty-five degrees without flipping over.7 Given that a rover that rolled over with the nearest helping hands thirty-five to two hundred fifty million miles away would stay upside down for quite a while, the rovers are limited to thirty degrees by their software.8

The brains of the Mars rovers are small onboard computers that had to be hardened against radiation.9 These computers, while slow compared with your home computer or the one I'm using to write this book, are incredibly expensive because of their reliability. The robots receive commands from JPL scientists. The computers then decide the best course through which to comply with the commands they have been sent. The computer manages obstacle avoidance and controls all of the robot's actuators.10 This is especially important because of the lag time between sending a command and the robots being able to carry it out. When Mars is at its closest, it takes a little over three minutes for a signal to get from Earth to Mars.11 So, let us say an engineer is looking at a video feed sent from one of the Mars rovers. The engineer is seeing what happened a little over three minutes ago. The engineer then sends a signal back to the robot, and the robot doesn't get the signal for another three minutes. Even if we eliminate reaction time, the engineer is driving the robot based on what happened almost seven minutes before. Imagine trying to drive a car to the grocery store that way. I hope you are going very slowly.
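
If you'd like to check those delay figures yourself, here is a minimal Python sketch of the one-way signal travel time. The two distances are the approximate closest and farthest Earth-Mars separations; the real delay varies continuously as the planets move.

SPEED_OF_LIGHT_MILES_PER_SEC = 186282

def one_way_delay_minutes(distance_miles):
    return distance_miles / SPEED_OF_LIGHT_MILES_PER_SEC / 60.0

print(one_way_delay_minutes(35_000_000))    # ~3.1 minutes at closest approach
print(one_way_delay_minutes(250_000_000))   # ~22 minutes near the far end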

The Mars rovers feature a large array of different sensors.12 Some of the most important sensors on the Mars rovers are based on proprioception. Proprioception is a fancy word that describes the sensation of where your body parts are in relation to each other.13 For example, if you close your eyes and hold out your hand, you can tell if your hand is open or closed. Having a strong sense of proprioception helps the scientists driving the Mars rovers know what is happening with the robots they are driving. It is important for the scientists and engineers to know this because of the long delays between sending information to the rovers and getting back the reaction to that information. These sensors include internal motion sensors, as well as the ability to count wheel turns to calculate distances traveled.14

So let's look back at your little rover and get a sense of what is involved in the process described above. Using a ruler, measure the diameter of one of the wheels. If we multiply the diameter by pi, we get the circumference of the wheel. This is about how far the wheel will move the robot each time it turns. Now let's measure the distance from where the right wheel touches the ground to where the left wheel touches the ground. Multiplying this new number by pi gives us the approximate distance a wheel travels when the robot spins around in a complete circle. By dividing the distance of a complete turn by the circumference of the wheel, we can determine how many wheel turns it will take for our robot to spin around in a complete circle. Fractions of this number will give us partial turns. Using all of the above information, we should be able to predict roughly how your robot will behave, and you can use it to help program your robot to move around on its own, much in the same way the Mars rovers do.
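
Here is a minimal sketch of that arithmetic in Python, using made-up measurements; substitute the numbers from your own rover.

import math

wheel_diameter = 2.5   # inches; hypothetical wheel
track_width = 6.0      # inches, wheel contact point to wheel contact point

wheel_circumference = math.pi * wheel_diameter   # distance covered per wheel turn
turning_circumference = math.pi * track_width    # wheel travel for one full spin in place

turns_per_full_rotation = turning_circumference / wheel_circumference
print(turns_per_full_rotation)        # wheel turns to spin the robot 360 degrees
print(turns_per_full_rotation / 4)    # wheel turns for a 90 degree turn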

A rover that strikes a bit closer to home is the ER-1 by Evolution Robotics. This robot may not have stayed around for long, but during its short run it proved to be an incredibly capable rover.15 The ER-1 used stepper motors and a highly tuned vision system both to navigate and to recognize objects that it had been "trained" to recognize.16 A stepper motor operates in a slightly different manner than either a regular direct-current motor or a servo motor.17 A direct-current motor, as you will remember from chapter four, uses a coil to create a magnetic field which pushes the shaft around in a circle. When the motor has made a half turn, a switch called a split-ring commutator reverses the field in the coil so that the shaft can continue spinning. A stepper motor instead uses a series of coils arranged in a ring around a magnetized shaft. The coils are charged in alternation, pulling the magnets into alignment and forcing the shaft to move around in discrete amounts, or steps. By counting the number of steps you have applied to the motor, you can approximate the distance the robot has traveled. The ER-1 utilized this method to achieve remarkable accuracy in determining its location.18
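
As a rough sketch of how step counting becomes distance, assume a hypothetical motor with two hundred full steps per revolution driving the wheel directly:

import math

STEPS_PER_REV = 200      # full steps per revolution; check your motor's datasheet
WHEEL_DIAMETER = 2.5     # inches; hypothetical

def distance_from_steps(steps):
    revolutions = steps / STEPS_PER_REV
    return revolutions * math.pi * WHEEL_DIAMETER

print(distance_from_steps(1000))   # ~39.3 inches traveled after 1000 steps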

That said, ER-1 users did note "drift." While the ER-1 software would keep track of the number of wheel rotations, and from that could determine the distance traveled, the software failed to notice when the wheels turned but the robot did not move. This is one of the major shortcomings of any proprioception system. To deal with this, the ER-1 had another trick up its sleeve: its vision system could be used to establish waypoints. Users could post pictures or place objects in and around the robot's workspace, and the robot could then be programmed to recognize them. The robot's position relative to those objects could be used to determine its location and heading. Using a combination of proprioception and visual waypoints, ER-1 users were able to program their robots to move about the home, and even locate charging stations and dock when batteries ran low.19
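
A minimal sketch of that idea, dead reckoning corrected by landmarks, might look like the following; on_landmark_seen() stands in for whatever a vision system like the ER-1's would call when it recognizes a picture whose position was surveyed ahead of time.

import math

x, y, heading = 0.0, 0.0, 0.0    # estimated pose; heading in radians

def update_odometry(distance, turn):
    # Accumulate wheel-derived motion; this estimate drifts whenever
    # the wheels turn but the robot does not actually move.
    global x, y, heading
    heading += turn
    x += distance * math.cos(heading)
    y += distance * math.sin(heading)

def on_landmark_seen(known_x, known_y, known_heading):
    # Snapping the estimate back to a surveyed waypoint cancels the
    # drift accumulated since the last landmark sighting.
    global x, y, heading
    x, y, heading = known_x, known_y, known_heading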

A wonderful aspect of rolling robots is their ability to carry so much weight relative to their size. A rolling robot does not have to lift the weight it is carrying. The force required for a rolling robot to move its own weight is a lot like the force required for you to push a shopping cart filled with groceries. On level ground it isn't really the weight of the groceries you are pushing, but the resistance in the shopping cart itself. This resistance comes from things like the wheels being misaligned and friction between the wheels and the bearings they ride on. Excluding those factors, the force required to keep a load rolling is only about six one-thousandths of the weight being moved. This is why a relatively small child can push a shopping cart containing a very large adult.
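
In code, that rule of thumb looks something like this; the six one-thousandths figure is the chapter's rough ratio, and real wheels and bearings vary widely.

ROLLING_RESISTANCE = 0.006   # rough ratio from the text; varies with wheels and bearings

def push_force_pounds(total_weight_pounds):
    # On level ground, the force to keep a load rolling is roughly the
    # rolling-resistance ratio times the weight being supported.
    return total_weight_pounds * ROLLING_RESISTANCE

print(push_force_pounds(200))   # about 1.2 pounds to keep 200 pounds rolling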

This weight-to-force relationship allowed the ER-1 to be designed around its ability to carry a computer on its back. The computer acted as the brain of the robot: the laptop processed all of the video information, took readings from the infrared sensors, and ran obstacle-avoidance software to plot directions along the robot's course. Giving a robot the strength to carry an entire computer onboard is of immense help to the home roboticist, providing the freedom and computational power to complete incredibly complex tasks. The majority of my robotic designs have incorporated this strategy.

The ER-1 also featured a strong frame constructed from extruded aluminum and plastic connectors, which allowed the robot to be reconfigured in multiple ways and encouraged owners to personalize their robots.

Unfortunately, Evolution Robotics chose to end the ER-1 line. One member of the Evolution team noted that the company actually lost money on each robot it sold; the robots were meant to help market the company's highly advanced robotic vision system. Since Evolution Robotics stopped making the ER-1, its software has appeared in several other applications, such as personal robotics, cell-phone shopping, and theft management.20

Roving robots carry appeal for more than just the home roboticist. Military interest in robotics dates back to World War Two.21 The German army used a robot called Goliath as a robotic mine.22 The treaded vehicle carried about one hundred twenty pounds of explosive.23 An operator maneuvered the robot into the path of a tank or personnel and then detonated it.24 Goliath's great downfall was the fact that it was controlled through a tether cable.25 Cutting the cable disabled the robot. The robot was also incredibly slow and heavy.26 While the Goliath was seen as a failure, the concept of having robots fight our wars for us has hardly died.

For the military, the idea of automating a rolling platform is incredibly appealing. It is an application of proven technology, and militarized rovers have been developed that can breach entryways, search caves, patrol perimeters, and disable land mines. Tread designs have been developed that can operate in incredibly difficult terrain. The simple relationship between actuators and action has allowed roboticists and scientists to create a number of battlefield-ready robots in a very short amount of time. One recent example is a line of military robots by the iRobot Corporation.

Oddly, the company iRobot borrows its name from a series of short stories by Isaac Asimov. In those stories robots follow three laws of robotics, the first of which is that a robot may not harm a human being, or through inaction allow a human being to come to harm. What makes this funny is that while iRobot's military robots may, by their nature, keep one side's soldiers out of harm's way, at least some of them can be weaponized, thus breaking the first law.

The PackBot probably follows Asimov's doctrine the closest of the lot. This robot is controlled by a remote operator much like the old Goliath was, only through the eyes of modern technology. The PackBot weighs a mere forty pounds, and unlike the Goliath, it is meant to be used to disarm explosives and land mines, not to serve as one.27 The PackBot is the epitome of form following function: the robot is sleek and low to the ground, its body dominated by wide tank-style treads and treaded flippers. These flippers allow the robot to climb stairs, and even swim underwater.28

One major advantage the PackBot has in attaining its superior mobility is its low center of gravity. While balance isn't as much of a problem for rolling robots, it is still an issue when the robot is traveling over rough terrain or on steep inclines. Using a ruler, measure the distance between your robot's contact points, that is, any points at which your robot touches the ground. Now mentally draw an arc passing from one contact point to the other. Mass that is outside of this arc will apply torque that rolls your robot over when it is not on a level surface; mass inside the arc works to hold your robot down, or has no effect at all. Good robotics design takes these factors into account. Roboticists tend to store heavy objects lower on a robot whenever possible, which helps keep the robot well balanced.
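
A back-of-the-envelope version of that arc test, reduced to two dimensions with hypothetical numbers:

import math

track_width = 12.0   # inches between left and right contact points; hypothetical
cg_height = 4.0      # inches from the ground up to the center of mass

# The robot starts to tip when it leans past the angle whose tangent is
# (half the track width) / (center-of-mass height).
tip_angle = math.degrees(math.atan((track_width / 2) / cg_height))
print(tip_angle)     # ~56 degrees; lowering cg_height increases the margin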

Now take a minute with your rover. If you used the directions in appendix A, then your robot is controlled using a radio, which means that the brains of your little rover are located securely between your ears. One of the major challenges faced by a roboticist is translating the thoughts and ideas inside their own mind into instructions that a robot can follow. The second part of appendix A shows you how to use a microcontroller to control your robot instead of a receiver. This will allow you to program your robot to move around on its own without your help.

What would the ultimate roving robot look like? Looking back through the robots described in this chapter, we can take ideas from all of them and apply them to our own creation. Each of the robots selected for this chapter has different strengths to offer. The PackBot is the utmost in mobility; while it does not operate on Mars, it can move through almost any terrain the average user is going to encounter. The ER-1's vision system and proprioception make for a very intelligent system, and prove how crucial it is to give a robot the ability to detect its surroundings and navigate from one place to another. Finally, the Mars rovers show that a robust power system is within reach. To be honest, the ultimate rover is the one best suited for the task at hand. Each of the robots presented in this chapter so far is well suited for the tasks for which it was designed. So let us begin by choosing a task, or series of tasks, for which we would like to use a rover.

When an engineer designs a robot, he generally begins by looking at the task the robot is meant to accomplish.

At this point I should probably tell you that I am about to take you through the design process. In the design process an engineer begins by listing his goals. He then analyzes objects that complete the same task, and makes note of the available materials. Taking elements from these solutions, the engineer then makes a drawing of the product he would like to develop. He then uses the drawings to create a prototype. Finally, he tests the prototype and uses it as the basis for the next design cycle.29

While I do go through the above process when designing a new robot, I suppose I do so on a more subconscious level. I feel the design process is great for problem solving; I'm just not sure that is how things work in practice. Engineering is like art. An engineer walks around with design ideas in mind, taken from a vast variety of places. Personally, I believe this is why, when you look at some products, you can recognize their inherent beauty. Look at the subtle curve of a motorcycle's gas tank, or the beast-like design of a large SUV. The things we love are valued not only for their ability to complete a task, but for their ability to do so while looking good.

TASK 01: An engineering firm has hired you to build a robot capable of carrying hazardous material from one location to another within a ten-foot-square enclosure. The material will be loaded onto the robot at various locations within the enclosure, and must be transferred via robot to secondary locations for processing. The materials are held in a Pyrex container six inches square. The container and its contents weigh twelve pounds. The robot will be controlled by a remote operator.

While this task may seem simple, it will require us to utilize a lot of the design skills necessary for making a successful rover. This project does not require us to construct a brain for our rover; the rover's brain will remain tucked neatly in the head of its operator. Instead, let's look at how the rover's design will be determined by its function.

The first thing we need to consider is how much of a load the robot will have to carry. This will help us in choosing the materials that we will use in constructing this robot. Earlier in this chapter we discussed the relationship between the torque output of a motor and the amount of weight the robot can push on a level surface. While the force required to push a significant weight on a level surface is small, the mechanical strength required of a robot's frame to carry a significant load is the load itself, and is therefore relatively large. So while in theory a fairly weak motor can push a relatively heavy robot, the materials we use to construct the robot need to be strong enough to support the weight of the robot and its load.

We also need to consider the capabilities our robot must have to complete its task. Our robot will be moving around in a chamber, so it will not simply move forward and backward; it must turn as well. For a robot using differential steering to turn, the wheels must be able to turn at different rates, if not in different directions. While for some this concept is probably obvious, it is a common mistake made by my students: any two wheels connected by a solid axle will resist the rotation of the robot. If the shared axle is turning toward the front of the robot, both wheels turn toward the front of the robot; if the axle is turning away from the front, both wheels turn toward the back. This is true even when you are not talking about the drive wheels of the robot.

While a split axle allows our robot to turn, it also presents us with a difficulty. Just as the robot has a center of rotation, so does each wheel on its axle. As downward force is applied to the robot, the unsupported end of the axle is forced upward, bending the axle. It is crucial to the performance of any robot that the axle be strong enough to support the load the robot must carry. While other robotics books might include a table of minimum axle thickness versus load carried, I have found none of these tables to be accurate or useful. For the most part, trial and error will give a builder a much better sense of what is required to complete a given task.

Now let's look at this from the robot's point of view. When you send a signal from the radio, the robot has to translate that signal into voltage levels sent to the motors, which in turn turn the wheels, moving the robot in the desired direction. This is fairly straightforward if we use one distinct input for each motor. For example, if channel one on your controller, the right-hand joystick's Y axis, controls the right-side motor, and channel three, the Y axis of the left-hand joystick, controls the left motor, then controlling the robot is a matter of balancing these two controls. If we move both joysticks forward, the robot moves forward. If we move both joysticks backward, the robot moves backward. If we want the robot to turn, we move the joysticks in opposite directions. The controls here are relatively simple, but all of the thought involved in controlling the robot takes place inside the operator's own mind, which makes this method quite difficult for the user. Consider trying to make the robot execute a gentle right turn: the operator would have to move the left joystick to its uppermost limit while simultaneously gauging the proper position of the right joystick to achieve the desired motion.

A different method is to mix two controls and have the robot interpret the desired output. If instead we use two axes of one joystick, the Y axis controlling forward and backward motion and the X axis turning the robot left and right, then our robot needs to combine the two inputs. The operator is still in control of the robot, but the controls are one step removed, and some of the processing involved in controlling the robot must be done by the robot itself. In essence, we have taken part of the thought process out of the mind of the operator and placed it in the processors of our robot.

Here again, let's consider how an input from the user would be interpreted by the robot. Our user moves the joystick to the maximum upper-left corner. The user is pointing in the direction they want the robot to go, the X axis, and giving the magnitude with which they want it to go there, the Y axis. If we assign numbers to each axis, say zero to one hundred eighty with ninety as the center position, then the robot receives two numbers: one hundred eighty for the magnitude and zero for the direction. The robot would move forward, but it would have to temper the power to each of its motors; in this case the left motor might receive about fifty percent of the power supplied to the right motor, causing the robot to make a gentle left turn. The closer the joystick is to the center of the X axis, ninety, the more evenly powered the motors would be. In the case where the robot receives ninety for its Y axis and zero for its X axis, the robot would turn directly in place, the left wheel turning toward the back of the robot while the right wheel turns toward the front. The significance here is that, given those input values, we are directing the robot to turn without moving forward.
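
A sketch of that mixing in Python follows. The exact attenuation, like the fifty-percent figure above, is a tuning choice; this is just one common way to do it.

def mix(x_axis, y_axis):
    # x_axis and y_axis arrive as 0-180 values with 90 at center.
    throttle = (y_axis - 90) / 90.0   # -1.0..1.0, forward positive
    turn = (x_axis - 90) / 90.0       # -1.0 full left .. 1.0 full right
    left = throttle + turn
    right = throttle - turn
    # Scale both motors back into range while preserving their ratio.
    biggest = max(1.0, abs(left), abs(right))
    return left / biggest, right / biggest

print(mix(0, 180))   # full forward, full left: left motor held back, right motor full
print(mix(0, 90))    # no throttle, full left: wheels counter-rotate, spin in place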

Let's take a minute to extend this task. What if, instead of using a remote control system, we were to program a microcontroller to move the robot between its various stations within our enclosure? Here we could simply feed the microcontroller a list of Y, X commands followed by durations. If we programmed the microcontroller to output the values one hundred eighty, ninety, one thousand, it would direct the robot to move forward for one thousand milliseconds, or one second. If we know the speed at which our robot moves, and its starting orientation, we can create programs that move the robot through its environment with a fair amount of accuracy.30 Note that this process does not take into account any random elements in the environment; using some sort of feedback, such as counting wheel turns, makes the process much more effective.
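
Reusing the mix() sketch above, a tiny command interpreter might look like this; set_motors() is a placeholder for whatever your motor driver actually provides.

import time

def set_motors(left, right):
    # Placeholder: replace with your motor driver or servo controller calls.
    print("motors:", left, right)

def run_script(commands):
    # Each command is (y_axis, x_axis, duration_ms) on the 0-180 scale.
    for y_axis, x_axis, duration_ms in commands:
        left, right = mix(x_axis, y_axis)
        set_motors(left, right)
        time.sleep(duration_ms / 1000.0)
    set_motors(0, 0)   # always stop when the script runs out

run_script([(180, 90, 1000),    # forward for one second
            (90, 0, 500)])      # spin left for half a second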

If, instead of using time and direction, we use wheel turns, we can send a number of wheel turns for each side of our robot. If this is expressed left, right, then a command to move forward would be one, one, while a command to turn left would be negative one, one. Here again the processing takes place in the mind of the operator, who must convert the distance they want the robot to travel, or the direction they want it to turn, into the desired number of wheel turns. If we have the robot do the conversions, then we can instead tell the robot to move a certain distance or turn a certain number of degrees, giving it commands with arguments of either direction and distance (forward, 34 inches) or direction and angle (right, 45 degrees). Here we lose the ability to make gentle, sloping turns unless we also include a radius, which would make a turn instruction direction, angle, and radius (right, 45 degrees, 6 inches). We could further simplify the instructions by pre-calculating the path between any two waypoints or stations in our enclosure. Over time, without some method of correction, we would notice drift occurring in the system, and the robot would become less likely to reach its intended target because of factors we cannot account for.
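
A sketch of those conversions, reusing the hypothetical wheel and track measurements from earlier in the chapter:

import math

WHEEL_CIRC = math.pi * 2.5   # inches of travel per wheel turn; hypothetical wheel
TRACK_CIRC = math.pi * 6.0   # wheel travel needed for one full 360 degree spin

def forward(inches):
    # The same number of turns on both wheels moves the robot straight ahead.
    turns = inches / WHEEL_CIRC
    return (turns, turns)               # (left, right) wheel turns

def turn_right(degrees):
    # Counter-rotating the wheels spins the robot in place.
    turns = (degrees / 360.0) * TRACK_CIRC / WHEEL_CIRC
    return (turns, -turns)              # left forward, right backward

print(forward(34))        # the (forward, 34 inches) command from the text
print(turn_right(45))     # the (right, 45 degrees) command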

They never told me to hesitate, they just make me do it, because the tapes are sure, they are so damned - sure, and nothing in the real world ever is-31

TASK 02: You have been asked to design and build a robotic lawn mower. It has to be able to move over difficult terrain and navigate around obstacles. The border will be marked with a perimeter wire, which the robot will detect using sensors mounted in the front and back casters. The center of the blades will be at the center of rotation, and the drive wheels will be located on either side of the mowing deck.

When you begin designing any robotics project, it is a good idea to spend some time thinking about all of the challenges your robot will face. This robot will have to operate in a relatively uncontrollable environment. Traction will have to be considered, and we should expect a significant amount of drift with even the most robust drive trains. Other significant factors include exposure to the elements, power and weight distribution, price, and safety.

To begin, let's think about what it takes for a human being to mow a lawn on their own. First, a human has the ability to determine what is lawn and what is not. Determining the border of any object is difficult for a robot, so we can be sure that determining the location of a seemingly random and constantly changing border is exceedingly difficult. Consider this: any of the sensors we discussed in chapter five could fail, leading to disastrous results. For example, infrared or sonar obstacle detection may fail to detect shrubs or flower beds if set too insensitive, or detect natural undulations in the grass if set too sensitive, and neither would detect a void in the grass, say a driveway, which could lead to the robot escaping from the area. A vision-based system would be equally problematic; detecting the difference between a dry patch of grass and a flower bed or a child's toy could prove quite difficult. Therefore, we need a different system through which our robot can detect the boundaries of its environment.

There are several ways in which we can mark a boundary. In this case we will use current sensors and a wire that the operator can stake to the ground, creating a circuit through which we can send a mild current. In this way a clear boundary can be created. No method is perfect, though: this system will inevitably create dead spots the robot cannot reach, as well as the potential for the robot to overrun its boundaries.

We should begin mowing our lawn the way most humans would approach the task: by defining and mowing the border of the lawn first.

Now that we have a clearly defined border, we can set about having our robot follow it. Our user should begin by centering the robot over some point on the boundary wire; the exact point does not matter. We position current sensors on either side of the robot. As the sensor on the right side of the robot detects the approaching wire, the robot should reduce power to the right motor, turning the robot to the right and returning the wire to the center of the robot. If the left sensor approaches the wire, the robot should do the opposite. In this way the robot should be able to travel around the perimeter. One major problem with this method is that, while it allows the robot to follow the line, the robot will do so swinging wildly from side to side. As the right sensor approaches the wire the robot swings to the right, bringing the left sensor toward the wire, which causes the robot to turn radically back to the left. This can set up an oscillation that at the least would create a ragged edge, and at worst may result in the robot leaving the path altogether, causing unwanted damage to property.

One way to solve this problem is to narrow the distance between the two sensors. If the sensors are placed just to the left and right of the wire, the oscillation is greatly mitigated, resulting in a much neater edge. Here, though, we encounter another problem: if the sensors are too close together, sharp turns can cause the robot to stall. As the robot approaches a sharp left turn, for example, it reduces power to the left motor, causing it to turn to the left. The trouble is that turning to the left draws the right sensor toward the wire, which causes the robot to reduce power to the right motor as well. The robot cannot increase power to either motor, because doing so would bring one sensor or the other closer to the wire.

Suppose instead we have an array of sensors across the front of our robot. We could then amend our program so that the robot recognizes sharp turns. Say we have at least four sensors across the front: far left, center left, center right, and far right. As the robot approaches a gentle right turn, only the center-right sensor detects the wire, and the robot follows the wire, turning gently to the right. Now let's consider what the robot would detect as it approached a sharp right turn. In this case both the center-right and far-right sensors detect the wire, so the robot enters "sharp turn mode," not only reducing power to the right side but reversing the right motor, causing the robot to turn in place until the center-left sensor detects the wire. At that point the robot exits "sharp turn mode" and returns to "edging mode."
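
In code, the edging logic might look something like the following sketch; only the right-turn cases described above are shown (the left-hand cases mirror them), and the speed values are arbitrary.

def edging_step(far_left, center_left, center_right, far_right, mode):
    # Each argument is True when that sensor currently detects the wire.
    # Returns ((left_power, right_power), next_mode).
    if mode == "sharp_turn":
        if center_left:                    # wire is back in the middle: done pivoting
            return (0.6, 0.6), "edging"
        return (0.4, -0.4), "sharp_turn"   # keep pivoting right in place
    if center_right and far_right:         # wire bends hard to the right ahead
        return (0.4, -0.4), "sharp_turn"
    if center_right:                       # gentle right: ease off the right motor
        return (0.6, 0.3), "edging"
    if center_left:                        # gentle left: ease off the left motor
        return (0.3, 0.6), "edging"
    return (0.6, 0.6), "edging"            # wire centered: drive straight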

Now, our operator may need to include islands in the lawn surrounding things we do not want damaged by our lawn-mowing robot. We could require the user to make smaller loops of wire that surround only the desired "safe" areas and give each island its own power supply, but this would be hugely inefficient. Instead we can simply have the user loop the existing wire around the protected zones. When our robot approaches one of these zones, this again creates a problem for our sensors. Depending on the nature of the island, the loop may create a "Y" or "T" junction in the wire which our robot should not cross. These junctions trigger the sensors on either side of the robot nearly simultaneously, so we should default the robot to turn in one direction; say that at a T junction the robot follows the right branch. The robot will then have to purposefully cross the wire with the sensors on its left side. If the robot expects this crossing, we can use the relative position of the crossing to determine both the type of junction and when the robot has completed the turn.

We might also consider a sweeping sensor in the center of our robot. This final sensor could locate the relative position of the wire, and would serve to greatly reduce oscillation.

Here again, all of the above is predicated on several factors. One is that the sensors are not so sensitive that they constantly believe they are approaching the wire, yet are sensitive enough to detect the wire before crossing it. We should also note that the sensors are omnidirectional: due to the nature of sensing current, a sensor does not know whether it is approaching the wire from in front or behind. One way to help the robot behave appropriately would be a calibration period at the beginning of each run.

Once the robot has completed the outer circuit, we need to begin what is in some ways a much more difficult task: the robot has to mow the interior of the yard. Here we need the robot to operate relatively blindly. We cannot ask our user to string wires back and forth across the lawn, because that would present an undue burden on the user and, in some ways, defeat the purpose of having a lawn. Similarly, we cannot ignore the possibility that children may leave toys in the yard, and that animals, both wild and domesticated, may be about; we need a method to detect these obstacles and/or alert the operator to their presence.

Now that the edging has been completed, the robot should turn out into the field. At this point there are several things we can monitor to check the robot's progress and attempt to ensure that the robot covers the entire field. The robot will need to know when it has crossed the field, have some way to measure its direction of travel, and approximate the distance traveled. We would also have to monitor the path in front of the robot for unmarked obstacles. Finally, we could monitor the resistance on the mower's blade; that way, if the robot does run over something, the blade can be stopped, and, done properly, we can also avoid bogging down the mower in wet grass.

One major challenge we face with a project such as this is that we cannot use a system such as NorthStar, which relies on infrared spots projected onto a ceiling, for absolute positioning. Unfortunately, GPS is also too inaccurate over such a small area. We could consider using a visual system, perhaps placing two colored targets at either end of the yard and having the robot use these as visual references to locate its relative position, but we would have to ensure that the colors were sufficiently unusual that the robot would be unlikely to confuse them with a woman's dress or a child's toy. We should also consider what the robot should do if line of sight is lost because of a tree in the middle of the yard.

So our robot sets out on its task of mowing the lawn. The center current sensor sweeps back and forth, tracking the wire and making minor adjustments to the robot's path. The tire-rotation sensors watch for differences in rotation between the front and back wheels; any differences are accounted for as tire slip, and the robot reduces torque to the slipping wheels while continuing to monitor its progress. The robot approaches the first sharp turn, a turn to the right, and as the current sensors on the right side sense the approaching wire it dutifully turns, the blades cutting away at the grass. The infrared sensors on the front of the robot sweep ahead, ensuring that the path is clear. The robot makes turn after turn, moving dutifully around the yard, until the infrared sensors detect an obstacle and the robot stops. In edging mode, the robot is programmed not to leave the wire under any circumstances; if it were to leave the wire, there would be no way to ensure it could return. So an alarm is sounded and the operator is called.

When the operator arrives, he finds that a shrub has grown out across the wire. He takes control of the robot, drives it around the shrub, and steers it back to the wire on the other side. The robot continues on, completing the circuit without incident.

Now the robot sets out across the yard, its two vision sensors tracking our markers. The infrared sensors detect another obstacle, and the robot stops. Here we will make an assumption: any living thing, unless incapacitated, would move away from a loud lawn mower, so we can assume the obstacle is inanimate. After a brief pause to check the object for significant motion, the robot determines that it is safe and begins to navigate around the obstacle. Using its vision sensors, checked against a compass, the robot follows the edge of the obstacle, never coming closer than eight inches, makes a complete circle, and then continues on, marking the location of the obstacle using relative positioning.

Our lawn-mowing robot reaches the opposite edge of the field and its current sensors detect the approaching wire. The robot stops, takes a reading of its location, compares it to the expected location of the wire, turns, and continues mowing the lawn.

Now, to be honest, a vision-based lawn mower is probably a bit overdone. A lawn-mowing robot could probably get away with a couple of current sensors, a well-shielded IR sensor or two for obstacle detection, and a bumper. Does a lawnmower really need to know where it is at all times? If the robot is capable of traveling in a relatively straight line, turning, and traveling back in the opposite direction with a small amount of overlap, then it should be able to cut an entire lawn. A robot could accomplish this using a simple compass, provided it can remain within the perimeter wire and avoid obstacles. Most difficult would be avoiding an obstacle without leaving a shadow of uncut grass behind it, but even here the robot could use dead reckoning to find its way back after leaving its intended path.

One method for ensuring the robot mows the entire lawn would be to have the robot calculate an estimate of the size of the lawn, and then estimate how long it would take to mow such a space. The robot then mows for that amount of time and assumes that it has completed its task. While this method is definitely hit or miss, if the robot is set to mow the lawn with some frequency, then it should manage to do a reasonable job of keeping the grass evenly cut. At the least, we can bet the robot will not miss the same spot in the grass every time.
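
The estimate itself is simple arithmetic, something like this sketch with made-up numbers:

lawn_area = 5000.0      # square feet; hypothetical yard
cutting_width = 1.5     # feet of grass cut per pass
speed = 1.0             # feet per second of forward travel
overlap = 1.25          # 25 percent padding for overlapping passes

# time = area / (width of grass cut per second), padded for overlap
mowing_seconds = lawn_area / (cutting_width * speed) * overlap
print(mowing_seconds / 60.0)   # ~69 minutes before the robot assumes it is done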

Now, some would ask: how will the robot find its docking station if it doesn't know where it is? What if we position the docking station so that it straddles the perimeter wire? This gives us the advantage of being able to use the docking station itself as the source of the current for the wire. When it is time for the robot to return, it simply returns to the wire, straddles it, and runs the perimeter backwards until it comes in contact with the docking station.

The real point that should be taken from the above solution is that increasing complexity does not necessarily increase a robot's chances of successfully completing a task.32

TASK 03: You have been asked to design and build a robot which can deliver drinks to guests at a party. The robot should patrol the party carrying drinks, and should stop when a person steps in front of it. The robot must return to the bar for more drinks when its tray is empty, or when a party guest presses a button or icon on the robot because a particular drink is not available.

In approaching this task I would look at it from two standpoints. First, I would design a stable mobile platform that could move around a home, and second, I would design a tray system for carrying drinks that could also serve as an interface through which party guests place orders.

Let's begin with the stable platform. Looking at other vehicles that work inside the home, my attention is drawn to the electric wheelchair. While we could simply take an electric wheelchair and strip off the part where the user sits, we have to note that it has major disadvantages for the task at hand. First of all, an electric wheelchair, while strong, is also large; many people who use them are forced to live in homes with wider hallways and doors. To ensure that our robot can maneuver in a standard home, and more so in a crowded room, we must trade some stability for a smaller footprint. I propose a circular base one and one half feet in diameter. I also suggest that, like a powered wheelchair, the base have two powered wheels with four caster wheels, two at each end. In this way our robot can employ differential steering and remain fairly stable in a crowded room while not taking up much more floor space than a normal-sized person. I have proposed a circular base in the hope that removing corners will prevent our robot from accidentally hitting people's ankles as it turns. The drive wheels should be recessed inside the circular design to avoid catching people's feet.

Mounting the batteries, commonly among the heaviest components, at the center point between the wheels in the base of the robot should help with stability issues. The motor controllers should mount near the batteries, creating a neat and compact design. Next we could mount a touch sensor behind a bumper on the front of the robot, and couple it with bumper-mounted infrared sensors to help the robot avoid coming into contact with obstacles. The robot could utilize a NorthStar system so that it always knows where it is in the room.

On top of this I would mount a stand, which would in turn carry the tray and interface. I would aim to have the tray stand about four feet high, and have the stand be shock-mounted to help stabilize the drinks. One way to make sure the tray stays stable would be to mount a spinning metal disk in its underside; in this way, we could use a gyroscopic effect to maximize stability. Touch sensors could be mounted in the tray, or radio-frequency identification tags in the bottoms of the glasses, so that the robot can sense which drinks party guests remove.

I would mount the interface at the back of the tray. I personally would consider a vertically mounted LCD touch panel showing a picture of a person's face, perhaps a stately-looking butler or an attractive young woman, depending on the crowd. A camera mounted just above or below the LCD panel could help the computer use face tracking to align the digital eyes. A row of virtual buttons down one side of the screen would allow users to select drinks that were not on the tray. The robot could then either send a message to the bar wirelessly, or create a waypoint of its current location, navigate to the bar, collect the drink, and take it to the guest who ordered it.

While all of the above is actually possible, I believe the main problems stem from the level of perfection and accuracy required. Let's consider an actual situation and all the things that may go wrong.

It's nine o'clock, and guests begin to arrive. They are all impressed by our butler bot as it cruises around its pre-programmed course. Some guests even venture over and remove drinks from the tray the robot is carrying. Later, the house is much more crowded. The music is playing, and some guests begin to dance. As the crowding increases, the IR sensors in the base of the robot detect the close proximity of the people, and the robot stops. While a human server would say "excuse me" and politely cut through the crowd, the robot is programmed to stop when it detects an obstacle, due to its inability to differentiate between people and other things that might get in its way. Unable to find a way through or around the crowd, the robot simply stops and sits in the middle of the room. Eventually you decide that the robot is more hindrance than help, and in frustration you turn it off and leave it standing in a corner.

No discussion of roving robots would be complete without at least a mention of the DARPA Grand and Urban Challenges. In these competitions, teams designed robotic cars capable of navigating a course all on their own. A part of me feels like this is the first step toward the future that I was promised as a child. A car capable of driving itself...why not? Teams, mostly from universities but some private enterprises, outfitted cars with various long- and short-range sensors and loaded their trunks with banks of computers, all in hopes of collecting the two-million-dollar prize at the end of the competition.

To be honest, this is one of those situations in which science fiction makes me angry. When I was a kid, TV shows like Knight Rider made the idea of a car driving itself with no driver seem eminently possible. While cars have gotten smarter over the last twenty-plus years, commercially available self-driving cars still seem a long way off. The latest challenge involved cars driving through a course designed to mimic an urban environment. The cars had to avoid parked cars and follow traffic rules, such as stopping at stop signs and signaling before changing lanes.33 The cars and trucks entered in the competition were covered in various sensors: radar in the grill, LIDAR on the roof, and multiple GPS antennas used to determine everything from the car's position along the course to how best to merge with traffic.

With robotic cars, as with most roving robots, the robot is only as good as the software driving it. These robots featured a wide array of sensor types and computing power, and neither the number of sensors nor the money spent gives much indication of a robot's likelihood of success.

So what can we take away from all of this? I suppose it's that we all have to start somewhere. The technologies presented here will trickle down into our everyday lives. Consider that, at the time of this writing, several robotic lawnmowers are commercially available.34 We have robots to vacuum our floors, and even our cars can detect when we have drifted out of our lane and steer us back on track. Someday we will not need to drive at all. But then again, where's the fun in that?

darkback2
08-20-2009, 09:05 PM
Ok... Completely unedited chapter 8. Sorry about the mistakes and stuff; my wife hasn't gotten a chance to look this one over. Also, the chapter sort of wanders around a bit... Finally, in the rover chapter Plooby can roll up stairs, and suddenly in the walker chapter he can't. Wonder about that? Well, I needed an excuse to give him legs. I might change this to a camping trip or something...but I'm not sure that would justify the switch either.

For the quads chapter, chapter 9, I don't even try to pretend anymore. I think there are some decent discussions and arguments in here... Hope you enjoy it.

DB

One problem Plooby faced in the home he lived in was that the home had a staircase. Plooby's owners had solved the problem by purchasing Plooby a second body that resided on the upstairs floor of the house, which was all fine and good and for the most part worked. Still, there were things Plooby felt left out of. For example, when the kids took the dog for a walk in the local park, Plooby was limited to the paved pathways and often couldn't follow. Or when the laundry was being done in the basement, Plooby couldn't use his arm to help unload the machine. There were also the family trips to the summer home on the beach; it didn't make sense to bring both of Plooby's shells, as they took up too much room in the car. And while his treads could take him up and down stairs, Plooby often found that doing so consumed too much battery power, forcing him to recharge too often.

One day Kiera decided to go over to her friend's house to play a new video game that she had gotten for her birthday. Plooby had become quite adept at the game; he had learned most of Kiera's tricks and could often beat her.

Kiera's friend Nelly lived in a house at the end of a long winding driveway. There were several slate steps leading up to the front door. Plooby extended his front paddles and prepared to climb the stairs. As his paddles pulled him up onto the first step, his right tread began to slip; the stone steps were too slick to ascend directly. Plooby made a second attempt, trying to weave from side to side in the hope that he could go up the stairs in an S pattern to reduce slippage, but the stairs were set at too steep an angle, and Plooby's gyroscopes balked at each turn.

Kiera became worried and offered to help Plooby by either pulling on his arm or pushing him from behind. That idea went against everything in Plooby's programming. What if he slid back and either pulled her down the stairs behind him or, worse yet, rolled over her as she pushed him from behind? It was simply too dangerous.

Kiera sat with Plooby on the steps for a while before finally going inside to play with Nelly. Nelly's mom was kind enough to bring out an extension cord so that Plooby could charge his batteries.

Back at home that evening, the family sat down to watch the news. A team of paleontologists had discovered fossils in a local rock quarry and was using a robot to pick through the rocks at the bottom of a small crater. Plooby sat in awe, watching the huge hexapod robot delicately picking its way through the debris. It had an arm, or seventh limb, with cameras and end effectors mounted on its underside, that could pick through the rocks looking for more fossils.

Plooby thought about his challenging time trying to get up the stairs at Nelly’s house. “I want that!”

Walking on legs is nothing new. Fossil tracks from legged creatures have been found that are about 570 million years old.1 Our current understanding of the fossils found suggests that the earliest tracks were probably made by a legged, worm-like creature. The fact that the trait has stuck around for such a long time suggests that legs are a wonderfully effective form of locomotion.

There is a long-standing debate among roboticists comparing legs and wheels. While some argue that following nature is the best path, others see the inherent stability and efficiency of wheels. In different situations, both sides of this argument can be said to be true.

Legs are very inefficient. Legged animals waste a tremendous amount of energy simply holding their bodies off of the ground. As proof of this, let's consider a short mental experiment. Let's say you had to move a fifty-pound sack of gravel from one end of your driveway to the other. You lift the sack, carry it a few feet, and set it back down. Now consider doing that repeatedly from one end of your driveway to the other. As you can imagine, that would be quite tiring. Next consider moving the same sack of gravel using a cart or wagon. In this case it would be much more efficient to pull the wagon, because you would be working against what is known as rolling friction, as opposed to repeatedly lifting the weight against gravity. Now you're thinking, "when I walk, I do not have to repeatedly lift and lower my entire body." And here you are correct: when you walk, your body remains about the same distance from the ground, and maybe only bobs up and down a few inches depending on how animated your movements are. But you forget that you are raising and lowering your legs, which compose as much as forty percent of your total body weight.2 When you walk, you have to repeatedly lift that weight in order to move your body forward. Now, this could be said to be a partially facetious argument, because part of the weight of your leg is the muscle being used to lift the rest of it. Even so, you can begin to see how inefficient legged locomotion is.

While wheeled travel is much more efficient than legged travel, it comes with its own costs. Wheels are most effective in situations where they have a relatively smooth surface to roll on. In a word, wheels require roads.3 Of course, there are exceptions to this concept. Rock-crawling trucks with hugely articulated suspensions can drive through boulder fields with rocks the size of Volkswagens.4 Those same vehicles become dangerously unstable at high speed. Other wheeled vehicles could easily outrun any animal on a smooth stretch of road, but would be left behind when the going gets rough. In fact, more than half of the world's landmass cannot be accessed by existing wheeled or tracked vehicles.5

Walking robots (and animals) use various gaits to produce motion. A gait is a pattern of leg motions that results in movement in the desired direction. For Plooby to take a step in a given direction, he must coordinate the movement of the servo motors which control the various parts of each leg. As people, we learn to do this subconsciously: you don't have to think about tightening the various muscles in your legs to move your center of gravity over one foot, and then tightening an entirely different set of muscles to lift the other foot off of the ground. All of the required calculations and commands are completed for you by your motor cortex.6 A robot does not have a motor cortex unless we give it one.

There are two major methods of creating robotic gaits. The first, and the easiest to implement, is pose-based. In a pose-based method, a series of motor positions is preprogrammed into the robot, and stepping through these poses results in the robot moving in the desired direction. For example, the first pose for a human might be to stand with both feet together and your body weight centered between them. Pose two might have you shift your body weight over your left foot and bend your knees a bit. Pose three would have you lift your right leg, moving your arms to a balancing position, and so on. Once we have programmed in all of the various poses, we can use a simple loop to step through them, as in the sketch below.
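
A minimal pose-based gait loop might look like this; the poses here are invented for illustration, and in practice they come from posing the robot by hand and recording the servo positions.

import time

def set_servo(channel, position):
    # Placeholder: replace with your servo controller's real interface.
    print("servo", channel, "->", position)

# Each pose is one servo position per joint, stepped through in order.
WALK_CYCLE = [
    (90, 90, 90, 90),     # pose 1: neutral stance
    (70, 95, 90, 90),     # pose 2: shift weight, lift a leg
    (70, 95, 110, 85),    # pose 3: swing the lifted leg forward
    (90, 90, 90, 90),     # pose 4: plant the leg and return to neutral
]

def walk(steps):
    for _ in range(steps):
        for pose in WALK_CYCLE:
            for channel, position in enumerate(pose):
                set_servo(channel, position)
            time.sleep(0.25)   # give the servos time to reach the pose

walk(2)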

The second and more difficult method is an inverse kinematic approach. As you may remember from chapter seven, inverse kinematics is the mathematics of determining the appropriate movements and joint positions needed to reach a desired pose. Using this approach, we give the robot the ability to move its legs in such a manner as to derive the desired motion as a result. In this case we must give the robot a set of mathematical rules to help it develop its various gaits. This second approach has the advantage of making the robot more capable: a robot that dynamically decides where it places its feet can avoid obstacles that a robot with fixed, pose-based gaits cannot. That ability comes at a high computational cost. Consider the complexity we encountered when we tried to program our robot's arm to move in a straight line; now consider trying to do the same for six legs that are capable of moving independently.
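
For a taste of the math, here is a sketch of inverse kinematics for a single two-joint leg moving in a plane, with hypothetical link lengths; a real hexapod repeats this for every leg, in three dimensions.

import math

L1, L2 = 3.0, 4.0   # upper and lower leg segment lengths in inches; hypothetical

def leg_ik(x, y):
    # Given a foot target (x, y) relative to the hip joint, solve the hip
    # and knee angles for a two-joint planar leg via the law of cosines.
    d_sq = x * x + y * y
    d = math.sqrt(d_sq)
    if d > L1 + L2 or d < abs(L1 - L2):
        raise ValueError("foot target out of reach")
    knee = math.acos((L1 * L1 + L2 * L2 - d_sq) / (2 * L1 * L2))
    hip = math.atan2(y, x) + math.acos((L1 * L1 + d_sq - L2 * L2) / (2 * L1 * d))
    return math.degrees(hip), math.degrees(knee)

print(leg_ik(2.0, -5.0))   # joint angles to put the foot 5 inches down, 2 forward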

A third method is a hybrid of the previous two. In the hybrid method we give the robot a series of poses as described in the first method, but allow inputs to adjust those poses for different situations. Consider how it feels to walk barefoot on gravel. You use a skill known as motor planning to predetermine the positions your muscles will attain at each point as you walk.7 As you shift your weight from one foot to the other, the muscles in your legs make micro adjustments to your center of gravity to avoid applying too much pressure to any one point on the bottom of your foot. While the pattern of muscle movements involved in walking is fairly consistent, your brain is continually adjusting the muscle positions to avoid injuring your feet. When our robot is using a static pose-based gait, we can have our servo controller adjust the servo positions based on sensor data collected as each pose is reached. With this method we choose which senses we give our robot and how it uses the data it collects; we can give it anything from simple pressure sensors in its feet to a full three-dimensional map of the environment created using LIDAR.
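
A sketch of that idea: play back the stored pose, but before settling into it, trim each grounded leg's lift servo based on a foot pressure reading. The read_foot_pressure() function, the target value, and the gain are hypothetical stand-ins for whatever sensors and tuning you choose.

TARGET_PRESSURE = 0.5  # normalized load we would like on each planted foot
GAIN = 10.0            # degrees of trim per unit of pressure error (assumed)

def read_foot_pressure(channel):
    return 0.5  # placeholder: return a normalized reading for this leg's foot

def adjusted_pose(pose, grounded_lift_channels):
    """Copy the stored pose, trimming each planted leg toward an even
    weight distribution before the pose is committed to the servos."""
    trimmed = dict(pose)
    for channel in grounded_lift_channels:
        error = TARGET_PRESSURE - read_foot_pressure(channel)
        trimmed[channel] += GAIN * error  # press down harder or ease off
    return trimmed

# usage with the earlier sketch: send adjusted_pose(POSES[i], [0, 2])
# instead of POSES[i] when legs 0 and 2 are carrying the weight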

One of the more stable platforms we can take from nature is the hexapod. Hexapedal structure is one of the defining characteristics of insects.8 There are distinct advantages to having six legs. Three points of contact are the fewest that can define a plane, so a six-legged Plooby can remain stable by maintaining three points of contact at all times. Think of a three-legged stool and a four-legged chair. A three-legged stool might not sit level, but it will not rock back and forth, because all three legs remain on the ground. With a four-legged chair, one of the legs can be held off the ground by the other three, causing the platform to tip from side to side. A six-legged platform gives Plooby the ability to use what is called a tripod gait: he moves three legs at a time while the other three remain planted, holding up the body with three points of contact and sharing the robot's weight between them. Hexapedal structure gives us the advantage of retaining a stable platform with half of the available legs lifted in the air, which eliminates the need for the robot to shift its weight around to keep its balance.
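
In code, the tripod gait reduces to alternating two groups of three legs. This sketch assumes legs numbered 0 through 5, evens on the left and odds on the right from front to back; the swing and stroke helpers are placeholders for the pose or kinematic machinery described above.

TRIPOD_A = [0, 3, 4]  # left-front, right-middle, left-rear
TRIPOD_B = [1, 2, 5]  # right-front, left-middle, right-rear

def swing(legs):
    pass  # placeholder: lift these legs and reach them forward

def stroke(legs):
    pass  # placeholder: keep these feet planted and sweep them backward

def tripod_cycle():
    """One full stride: each tripod takes a turn in the air while the
    other three feet carry the body. The two calls in each phase just
    command target positions, so the servos move together."""
    swing(TRIPOD_A); stroke(TRIPOD_B)  # phase 1: A in the air, B grounded
    swing(TRIPOD_B); stroke(TRIPOD_A)  # phase 2: roles reversed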

Because a hexapod does not need the ability to shift its weight around, each leg can have a limited number of degrees of freedom. Limiting the number of points at which a leg can bend adds stability. In robotics it is often important to limit the degrees of freedom in a leg design, because each additional degree of freedom requires more engineering, structural mass, and computational time. Some hexapod robots use as few as two degrees of freedom per leg and still achieve a very high level of mobility.

Another issue we should consider is the symmetry of our robot. In nature most animals fall into two groups: those with bilateral symmetry and those with radial symmetry. Life forms exhibiting bilateral symmetry look similar when viewed from the right and left sides. Animals that exhibit radial symmetry seem to do just that: radiate out from a central point. While true symmetry in the strict sense is rare, each design has its own advantages. An animal with bilateral symmetry has advantages when moving or sensing in particular directions, while a radially symmetrical animal shows no preference toward movement in any direction.

Bilateral symmetry is much more common in nature than radial symmetry.9 This is especially so when we exclude animals that are passive in nature such as sponges, starfish, and jellyfish.10 This leads us to believe that bilateral symmetry gives an animal some advantage when it comes to locomotion. Bilateral symmetry allows an animal to have directed movement: it makes it easier to move quickly in a few select directions. For example, it is easy for a horse to run forward, but much more difficult for a horse to sidestep. An animal with radial symmetry shows no preference as to direction of travel; the problem with this structure is that movement in any direction is more akin to sidestepping than to directed movement.

A primary example of nature inspiring robotics comes from the Center for Biologically Inspired Robotics Research at Case Western Reserve University.11 The earliest attempts from this department used oddly shaped wheel-legs to move, and culminated in a series of robots aptly named Robots 1-5, in which the team at Case Western attempted to model robots after the Blaberus cockroach.12

Whegs, continuously rotating wheel-legs, are one of the simplest adaptations of natural elements that can be implemented in robotics. In a traditional roving robot, wheels are used for locomotion; whegs take the place of wheels and behave in a similar manner. The idea was first implemented in the Prolero robot, developed by the European Space Agency for the exploration of Mars. In the Prolero, the wheels of a traditional rover were replaced by stalk-like legs with ball-shaped feet at the ends.13 The legs were turned the same way wheels would be, and propelled the robot forward in jolting bounds.14 Later implementations used three legs per wheel. The legs, or “spokes,” were separated by 120 degrees, and opposing wheels were offset by sixty degrees so that one spoke would always be on the ground to support the robot. Thus, as the whegs rotate, the robot operates using an offset tripod gait, much like a cockroach does.15 The whegs also feature a compliant spring design which allows the legs to come into and out of phase depending on the obstacles presented.16 This simple solution results in a very capable robot that can carry a heavy load at a high rate of speed.

Robots 1-5 modeled the structure of the Blaberus cockroach more and more closely in both form and function. Robot 1 is most closely aligned with the whegs design, while Robots 3, 4, and 5 most closely mimic the cockroach. At each step we can see the growth and development of this design in its metamorphosis.17 Most interestingly, each successive robot not only more closely mimics the cockroach in terms of form, but also in terms of function. For example, as the leg segments and their mobility changed to more closely mimic the cockroach, the inclusion of pressure sensors at the ends of each leg allowed the robots to feel their environment and adapt their gaits to it.18

The leg structure of Robots 3, 4, and 5 takes more and more from their natural cousins. Insect legs consist of four segments. The coxa is similar to the ball and socket joint in the human hip; it allows a great range of motion. Try getting down on your hands and knees, and shift your weight so that you can lift one leg. Begin by twisting your hip. This is one degree of freedom. Next, move your knee out to the side and back. This is the second. Finally, move your knee forward and backward in line with your body. This is the third. Attached to the coxa is a leg segment known as the femur.19 The thigh bone in your leg is also called a femur, and carries the same responsibility. The next segment is the tibia. The femur and tibia are joined by a joint with a single degree of freedom that behaves in a manner similar to your knee. The tibia is followed by the tarsus, which is much like the bones in your feet.

The robots presented by Case Western are clear examples of bilateral symmetry and the advantages it presents a walking robot. Each of these designs moves forward very well, but has a much more difficult time moving from side to side. In some ways this works to the robot's advantage by making it more stable during forward motion. One drawback is that in order to change direction the robot must turn, and the longer the robot, the more difficult this is to accomplish.20 Radial symmetry works to address this issue. In a robot with radial symmetry, each of the legs is equidistant from the axis of rotation. Think of a rectangular car wheel. As you can imagine, the ride would be intolerably bumpy, because the vehicle would lift up as the long side passed underneath it, and then slam back down on the narrow side. The vehicle would then have to overcome a resistive torque in order to get up over the long side again. With a normal wheel, because each successive point along the wheel's edge is equidistant from the axis of rotation, there is no resistive torque from moving between elevations. In a radially symmetrical robot, each of the legs applies the same effort to make the robot turn in place, which allows the robot to complete the turn more easily.

No discussion of nature-inspired hexapod robots would be complete without at least mentioning A-Pod by Kåre Halvorsen, known as Zenta.21 This ant-inspired hexapod is composed of three distinct, actuated body segments.22 One of the major achievements of this robot is its ability to move its body segments to offset various payloads and maintain its balance.23 Zenta’s kinematic algorithms allow the robot to isolate various body segments while the rest of the robot moves about.24 In one example video, A-Pod’s body sways from side to side and lifts up and down while its head remains virtually still.25

For the sake of this discussion we should probably refine the way we talk about the various formats for hexapod robots. Robots with radial symmetry could be said to be round, while a hexapod with legs that point out to the sides follows more of a walking stick format. Compromises between the two could be said to follow a cockroach, or oblong, design.

In a round robot, travel in any given direction means that the kinematics for each leg must be calculated independently of the kinematics for the other legs, because the legs will not all travel the same distance when the robot takes a step. These differences force the leg geometry to differ from leg to leg in robots that do not follow a strict bilateral stick design, and they make movement in any direction dependent on coordinating all six legs moving at different paces through the walk cycle. On the other hand, once the kinematics for walking in one direction have been worked out, we can move in any direction simply by assigning different legs to the various kinematic tables. With a walking stick design this is not the case.
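
To see why, consider this sketch for a hypothetical round hexapod with legs spaced every sixty degrees. Once the trajectories for one heading have been worked out, walking in a new direction is just a matter of rotating which leg plays which precomputed trajectory.

NUM_LEGS = 6  # legs spaced every 60 degrees around the body (assumed)

def leg_assignment(heading_degrees):
    """Map each leg to the precomputed trajectory slot it should play
    for the requested heading, snapped to the nearest 60 degrees."""
    shift = round(heading_degrees / 60.0) % NUM_LEGS
    return {leg: (leg + shift) % NUM_LEGS for leg in range(NUM_LEGS)}

# Walking 120 degrees to the left just re-labels the legs:
print(leg_assignment(120))  # {0: 2, 1: 3, 2: 4, 3: 5, 4: 0, 5: 1}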

A few hobby robotics examples of strict radial symmetry are the Lynxmotion AH3, BH3, and CH3 hexapod models.26 One place where these robots excel is in hexapod dancing.27 The nature of the platform lends itself well to the dynamic movements required for robotic dance. The robots are judged on precision of movement, variety of movement, timing, and stylization. In this sense the radial symmetry of the design makes the robots omnidirectional; throughout the short dance routines the robots gyrate about, showing off the full range of their capabilities.

A larger scale robot that takes advantage of radial design’s omnidirectional nature is NASA’s ATHLETE hexapod robot.28 Ranging between four and six meters in diameter, ATHLETE is designed to work as both a teleoperated robotic tool and a mobile platform capable of carrying a habitat across the lunar and Martian surfaces.29 Each of ATHLETE’s legs features six degrees of freedom, including a wheel at the end of each leg capable of continuous rotation.30 A power take-off is available at each foot, which allows an operator to attach various end effectors so the robot can complete tasks such as drilling holes and picking up rocks for further study.31 ATHLETE can travel at sustained speeds of up to ten kilometers per hour over rocky terrain, and carry 450 kilograms (nearly a thousand pounds) of payload.32

The ATHLETE robot features a ring structure as a major element of its design, which allows it to be nested with other ATHLETEs for storage and travel. Once the robots arrive at their destination and are deployed, multiple ATHLETEs can be coupled to carry larger loads and complete a variety of tasks. ATHLETE marks a huge leap forward in space robotics technology.

A hexapod with some remarkable abilities is the CNC hexapod by Micro Magic Systems.33 The CNC hexapod’s key feature is its ability to machine the very parts it is made of.34 It uses a computer chip called a P-brain to calculate the leg servo movements needed to position its cutting end effector.35 In this case the robot’s creator, Matt Denton, chose a bilateral, oblong design with three degrees of freedom per leg.36 This design maximizes the robot’s stability, which allows it to perform clean cuts.

Plooby’s family waited patiently for his new body to arrive from the factory. Kiera was most interested to see how her robot friend had been transformed. She wondered: would Plooby look like a big scary spider? Some of the robots in the catalogue had. Soon enough the van from the factory arrived. A hatch in the side opened, and a staircase descended instead of the usual ramp. It was dark inside the van, so Kiera could barely make out the glint of metal as Plooby waited for the staircase to lock into its fully extended position.

Plooby’s central processor was taking in data from the van. When he received the signal that the stairs had locked, the IR sensors in his legs and end effector stalk confirmed that the path in front of him was clear. The stereoscopic camera mounted at the end of his arm helped him fill in the pieces of his three-dimensional map as he swept it from side to side. As he took his first steps down the stairs, he used the data from the pressure sensors in each foot, the servo positions of each leg, and a six-axis inertial measurement unit to level out his body and ensure that his footing was stable. If he were truly capable of wondering, he would have marveled at the vast amounts of data that he now had to consider before taking each step. Plooby concluded his descent, stepping out into the sunlight to greet his new family.

Kiera paused for a second to take in what Plooby had become through his transformation at the factory. Plooby’s main body platform was suspended about a foot off the ground between his six legs. Each of his legs, if fully extended, would have been about three feet long, and curved in a gentle arc terminating in a small triangular foot. His abdomen was slightly elongated and oblong in shape, with a small sensor array mounted on the front. At its widest point his abdomen was nine inches across, and his body was a foot and a half long.

Plooby’s original arm had been remounted and now extended from his back. In its stowed position it looked a bit like the tail of a scorpion. Mounted over the arm’s end effector were a small LCD screen interface and a stereoscopic camera. Plooby looked up at Kiera, his face recognition software worked its magic, and his text-to-speech engine formulated his greeting. “Hello Kiera, should we go inside?”

Inside the house Plooby enjoyed a much greater level of mobility than he had ever experienced before. He found that by carefully placing his feet he could maneuver around obstacles that would have required remapping in the past. His new body was a few inches wider than his previous one, which forced him to move more slowly around sharp corners and through doorways.

A few weeks after Plooby returned from the factory, Kiera took him back over to her friend’s house. On the smooth sidewalks and streets Plooby was able to use his fully extended gait, which let him walk fast enough to keep up with Kiera. At curbs Plooby no longer had to find ramps or use the paddles on the front of his treads; he could simply step off into the road when Kiera signaled that the path was clear. When they reached her friend’s house, Plooby simply walked up the front steps and through the front door.37

dayton67
08-30-2009, 05:34 PM
Please don't forget the hardware, like cable and wire management

darkback2
08-30-2009, 06:10 PM
Ok, so a few people have brought some things up, and I figured I should let everyone in on how things are going to be organized for the purposes of this book.

So far I've really been focused on the conversational points of the book. That is, the parts of the book meant to spark classroom conversations. In addition I'm going to have build chapters, where the user actually builds and programs a robot. These will usually have something to do with the discussion chapter. This will also be the part that looks like more of a traditional robotics book. Finally I'm hoping to have a robust glossary of terms.

DB

sam
08-31-2009, 07:40 PM
Ok, so a few people have brought some things up, and I figured I should let everyone in on how things are going to be organized for the purposes of this book.

So far I've really been focused on the conversational points of the book. That is, the parts of the book meant to spark classroom conversations. In addition I'm going to have build chapters, where the user actually builds and programs a robot. These will usually have something to do with the discussion chapter. This will also be the part that looks like more of a traditional robotics book. Finally I'm hoping to have a robust glossary of terms.

DB

:eek: That's going to be a very complete and well rounded book! Can't wait to see how it will continue

Sam

webgeek
05-24-2010, 11:33 PM
I know this thread appears dead but I just had to post what I hoped would be helpful. I've been involved in a fair number of published technical books on Adobe Flash and video game programming and such and I've got some good contacts at three large publishing companies that specialize in technical books. This is assuming the editors I've worked with are all still employed at those places. If you are seriously interested in getting this book put together, I could put you in touch with the appropriate people to make this a reality. In the past, pitching an idea to one of the editors has been all it's taken to get a book moving.

In the pitch, they will want to see a full table of contents (TOC) for the book along with your sample chapters already written. They will also want to know how this book differs from other robot books and specifically, what makes it unique. They will ask about who would be writing it (credentials in some form or another), will there be other authors, etc. They will also press for details on who would be interested in this book and why.

Lastly, assuming they like the idea, they will try and get your book to fit within an existing series. This is for a few reasons but the main ones are that it gives them a potential existing consumer base and a pre-designed format/style for the book. An example series would be the "Demystified" books. They cover a ton of different technical topics but all have the same layout and format.

Assuming it's a go, they will negotiate a schedule for the book and an up-front advance. If you are doing the full book yourself the advance is a nice little bit of money (for computer books at least) but far less than you'd actually charge to write a book and less than you might expect. If the book does well you make back the advance and more. If the book does poorly you still keep the advance and the publisher eats the loss.

Even if you are the sole author you can have others help you write the book. You just pay them outright for their chapters so you own them. Giving them credit in the book is traditional, but it's up to you and your deal with them. This is how I've been involved in most of my book projects - I write a chapter or two for an existing author and they pay me up front for the work.

Anyways, I'd love to see another robot book out there so let me know if you want to get in touch with the publishers and see if they will fund it. Have fun!

-Mike

darkback2
05-24-2010, 11:51 PM
Hey Webgeek,

The thread isn't dead, it's just been dormant for a while. I'm a school teacher and writing during the school year is massively hard. Summer break starts in 17 days so things should pick up then. In actuality I've probably written about 2-3 more chapters that I haven't posted yet because they aren't quite ready.

As for publishing, I'm not really worried about it for now. The book isn't a technical book in the traditional sense, and perhaps that will be its downfall, or possibly will be what makes it work. I will be using it...or well...parts of it in my robotics class, which I will be teaching again starting in the fall.

What is left?

Chapter 1, where are all the robots
Chapter 5, Sensors (started about 6 pages in.)
Chapter 6, Actuators

Numbers have switched around a bit.

Also, Build Chapters 1-6. These are build chapters using 1, 2, or 3 specific platforms. I am hoping to show examples of Lego bots, Bioloid bots, and traditional hobby servo bots for the following. Hope to have the initial versions of these done by the end of the summer. The chapters on Rovers, Arms, and Hexapods are done, now I need to make the build chapters showing how to make them.

BC 1 rovers
BC 2 Pan Tilt Camera
BC 3 Robotic Arm
BC 4 Hexapod
BC 5 Quad
BC 6 Biped
BC 7 Robotic House

Thanks for the interest,
DB