
View Full Version : robots who learn. what happened to them..



openmindedjjj
06-30-2008, 06:55 PM
Remember when the Sony AIBO ERS-7 came out around 2000? He could learn from experiences and was fully autonomous. He could see in 3D and could learn without having to mess with the Motion Editor or behavior control. Well, what happened to that?
How come I don't see any robots that are autonomous? I figured by 2008 all the robots would be way better than the old AIBO, but I notice that he is still more advanced than most if not all the other robots I've seen, and he is still the only robot other than ASIMO who can learn by himself.
I thought all the robots would have this tech by now. Does anybody know of any robot that also learns by itself?

DresnerRobotics
06-30-2008, 07:05 PM
AIBO wasn't as advanced as Sony would have you believe :P

He certainly didn't see in 3D, and all of his motions were pre-programmed. I'd wager Pleo is probably smarter in that regard, and neither truly 'learns'.

Sienna
06-30-2008, 07:09 PM
AIBO was a hit!.... among research professors and their students. Commercially, it was a flop. And he didn't learn per se, I believe it was more a scripted set of behaviors. (I never played with Aibo Life, only the SDK.) And no, it wasn't 3D.

And you have to define "Learn". There are research bots that truly do 'learn' in the traditional, trial and error + feedback way that toddlers do. They are just not commercial products.

And there are many autonomous bots. For instance, mini sumo hosts many of them. iRobot makes a vacuum that's autonomous. DARPA's Grand Challenge saw many vehicles that were autonomous. So what exactly are you looking for?

LinuxGuy
06-30-2008, 07:32 PM
And there are many autonomous bots. For instance, mini sumo hosts many of them. iRobot makes a vacuum that's autonomous. DARPA's Grand Challenge saw many vehicles that were autonomous.
W.A.L.T.E.R. 1.0 was autonomous, and W.A.L.T.E.R. 2.0 will be also, and will be capable of much more. :) I only build autonomous robots.

8-Dale

openmindedjjj
06-30-2008, 07:43 PM
Sorry, what I meant was I don't see any fully autonomous humanoids or dogs or commercial products. I've seen people make their robot autonomous by putting a Pocket PC on the robot's back and running him through the Pocket PC. I'm just surprised that there are not more commercial autonomous robots, other than the sumo robots.
I wonder if I could get the program that made AIBO. Maybe someone could re-model the software to make it compatible with other robots.

Adrenalynn
06-30-2008, 08:25 PM
You can do it with less (or more) than a Pocket PC. My little junkbot under construction is capable of simple autonomy and carrying out his mission, running a little 8-bit processor with only 4K of RAM and eight pins total.

For AI/NN, he communicates over a telemetry link to the PC network with forward/backward error correction and a hybrid hidden Markov model neural network.

Really, I think the big question is "where are the service bots?" The robots that do tasks for us to lighten our load. Those are what will drive true learning behaviors.

Alex
06-30-2008, 10:27 PM
Really, I think the big question is "where are the service bots?" The robots that do tasks for us to lighten our load. Those are what will drive true learning behaviors.

Have you seen WALL-E yet? The robots that were in that movie were really, um... interesting... They were all definitely service bots, but in an iRobot sense, not a Jetsons sense. It was a much more believable, attainable, and foreseeable level for robotics. If anyone hasn't watched it yet, when you do, just pay attention to all of the robots (or more importantly, "types" of robots) that are in that movie.

Adrenalynn
07-01-2008, 12:22 AM
Alas, I haven't seen it yet. I got over-ruled and we went to see "Wanted" instead. The most aptly named movie of all time. It "Wanted" to be The Matrix - but wasn't. It "Wanted" to make a social commentary, but didn't. It "Wanted" to have ground-breaking special effects, but they put me to sleep. It "Wanted" not to suck, but couldn't make it there. [...]

I'd rather have seen Wall-E...

Although, I have to say, it might have been about artificial intelligence [Angelina Jolie], and that WAS the most robotic performance I've ever seen from Morgan Freeman... ;)

Alex
07-01-2008, 05:34 PM
Yeah, Wanted had like only one really great scene in it. Take a wild guess;)

jdolecki
07-01-2008, 06:05 PM
Want your robot to learn? Just program it like a human brain.

brianwhitworth.com/braincomputer.pdf

metaform3d
07-01-2008, 08:13 PM
Autonomy is hard -- the Grand Challenge demonstrated that. The first year everyone thought that building the machine was going to be the hard part, and they all crashed and burned. The second year the Stanford team approached it like a software problem and cleaned everyone's clock with an off-the-shelf 4WD. There are very few autonomous applications that survive outside of the lab.

As for commercial service robots, they are hindered by three things: power, power, and power. Computational power is limited by the kind of brains they can carry, and very few customers could afford a lawnmower with a couple of Pentiums in it. Effective power depends on the types of motors and gearing, and again, if you want to lift more than a few ounces you're talking about real money. Finally, limited battery power will kill you if you've solved the other two.

Hollywood (and therefore I assume the robot layman) typically gets all three of these exactly wrong. Robots in fiction are shown to be super-smart (although idiotically "logical" and emotionally retarded), super-strong relative to humans, and with nearly eternal power sources. Has any movie robot shown any of the limitations of real robots?

lnxfergy
07-01-2008, 10:11 PM
There is a difference between autonomous and smart. Roomba and Aibo don't need instructions from their owners - they are autonomous, but they are not smart. Neither knows where they are or what they are doing, they just use tricks and magic to look decently good at what they do.

The problem with intelligent, learning, or smart (or whatever you want to call them) robots boils down to one thing: funding. Intelligent robots are complex to program, will require great amounts of computing power (and thus batteries), and need good sensory. Current approaches to AI are computationally expensive: we either need faster, cheaper and lower-power chips - or we need a new approach.

However, with all of that said, the biggest problem I see is sensory. We have lots of great algorithms to solve problems intelligently. However, to even know what the problem is you have to know what is going on around you. Current sensors pretty much suck. Computer vision (hw & sw) is still unable to replicate the adaptability that animals and humans take for granted.

Funding is difficult to get for all of these projects because they are typically the type that will go on for a long time and the deliverables aren't going to be that flashy (in the case of a software solution, it won't even be that visible). Instead, funding has been poured into flashy projects like humanoids (which are almost all still RC for high-level control).

-Fergs

Adrenalynn
07-01-2008, 10:31 PM
I STRONGLY disagree with the points that have been made regarding computing horsepower.

Nearly every person on the planet has Cray-Crushing horsepower that is beyond underutilized, and nearly every person on the developed planet has massive amounts of always-on bandwidth.

We need to stop thinking "on the 'bot" and start thinking about how we get the data effectively off the 'bot and over to the real horsepower.

Broadband cellular is a good start... But it's not required by any stretch. For very little money you can get the data off the 'bot and within a few kilometers to a workstation or at least relay station.

Building systems that "plug themselves in" isn't all that tough, especially when you're leveraging PCs. In my neighborhood there are hundreds of houses with exposed outlets - and we roll our streets up and send 'em out to be dry-cleaned by 8pm. ;)

I can't help but think about "vampire bots". Sucking from unsecure WiFi and public WiFi, and feeding off of unattended AC outlets after dark...

How much processing and electrical power did you need again?

darkback2
07-02-2008, 02:53 AM
I know this isn't a big help, but I played around with this concept with Gepetto and Vivian. I will also implement this in Hailey, my new hexapod. I used a feedback system with a Wiimote to decrease or increase the likelihood of an event happening. The biggest problem that I have seen with a learning robot has been battery life. Granted, I'm a cheapskate; Vivian and Gepetto only got about 20 minutes to a charge.

It takes time to learn.

In any case, the service bots are all around us. There is a robot for mowing the lawn, one for vacuuming my floors... and even sex bots. The biggest problem is getting people to buy the robots in the first place, and then to let the robots do their thing. My sister has a Roomba, and she watches it vacuum her floors, and then moves it around to make sure it gets everything.

DB

openmindedjjj
07-02-2008, 04:21 AM
How would I program it like a human brain? What kind of software would I need?

JonHylands
07-02-2008, 08:39 AM
How would I program it like a human brain? What kind of software would I need?

If anyone here knew the answer to this question, we wouldn't be here - we'd be basking on the French Riviera, spending some of our billions of dollars...

Adrenalynn
07-02-2008, 10:51 AM
The mechanics of learning are thought to be well-known. A WIPO search on my name will reveal patents in at least 19 countries involving neural networks for optimizing video compression (a system that actually watches video and learns the best compression method for types of video, sub types of video, etc, in training mode, and is then able to apply that in a production mode - or continue learning as it's producing)

Building a human brain, assuming it's simply the sum of its transistors, is still technically infeasible due to simple parallelization. I can't come up with 7+ billion processors today. I could reproduce a fruit fly, however, if that interests you and you have the budget.

Instead, we use neural networking and genetic algorithms for select problems. Learning to optimize video/audio compression, for example. Or learning cryptanalysis. Or learning what someone's face looks like.

What you're wanting to investigate is neural networking and genetic algorithms. Brush up on your math... ;)
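For anyone wanting a concrete starting point, here is a minimal sketch of the weight-adjusting kind of learning a neural network does: a single perceptron trained on the AND function. This is a textbook toy in Python, not anything from the patents mentioned above:

```python
# A minimal perceptron, the simplest neural "learning" unit.
# The update rule (nudge each weight in proportion to the error)
# is the core idea that larger networks scale up.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), target) pairs with targets 0 or 1."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            # Adjust weights in the direction that reduces the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Learn the AND function from examples instead of hand-coding it.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)
print([predict(w, b, x1, x2) for (x1, x2), _ in and_samples])  # -> [0, 0, 0, 1]
```

AND is linearly separable, so the perceptron is guaranteed to converge on it; anything harder is where the math brushing-up starts.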

metaform3d
07-02-2008, 12:33 PM
We need to stop thinking "on the 'bot" and start thinking about how we get the data effectively off the 'bot and over to the real horsepower.

Agreed that moving computation off the bot will save on weight and batteries. However, unless you have a supercomputer in your lab, we still have relatively weak processing power available.

You're also going to have to distinguish between the different levels of cognition. For high-level, open-loop tasks like planning and analysis, queuing up the jobs someplace on the net will be fine. For real-time, closed-loop tasks like walking along a cliff or not running over pedestrians, network latency could be deadly.

metaform3d
07-02-2008, 12:42 PM
However, with all of that said, the biggest problem I see is sensory. We have lots of great algorithms to solve problems intelligently. However, to even know what the problem is you have to know what is going on around you. Current sensors pretty much suck. Computer vision (hw & sw) is still unable to replicate the adaptability that animals and humans take for granted.

Agreed. The only high-bandwidth sense we normally have available is vision, whereas most simple life forms that we might copy depend more on touch and smell. I've often dreamed of a high-bandwidth touch sensor. Imagine a flexible strip with an array of hundreds of tiny flexible "hairs" that could detect deflection. You could interface to it like a camera and get "images" showing the state of each touch sensor as a pixel. Rub it up against something and read out the texture of the surface as a 3D FFT. Wrap it around a wire and you'd have a real feeler, not like the simple binary feelers that most insect bots sport.
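The imagined hair-array really could be read exactly like a camera frame. A toy Python sketch, with the sensor hardware faked since nothing like it exists here; "contact patch" extraction then works just like blob detection in vision:

```python
# Sketch of a hypothetical touch-sensor strip: an array of tiny "hairs"
# read out like camera pixels. The hardware is imaginary, so we fake a
# frame of deflection values and show how a contact patch falls out.

def fake_frame(rows, cols, contacts):
    """Build a rows x cols deflection 'image'; contacts is a set of (r, c)."""
    return [[1.0 if (r, c) in contacts else 0.0 for c in range(cols)]
            for r in range(rows)]

def contact_patch(frame, threshold=0.5):
    """Return the cells whose hairs are deflected past the threshold."""
    return {(r, c)
            for r, row in enumerate(frame)
            for c, v in enumerate(row)
            if v > threshold}

# Something pressing on three neighboring hairs of a 4x8 strip:
frame = fake_frame(4, 8, {(1, 2), (1, 3), (2, 3)})
print(sorted(contact_patch(frame)))  # -> [(1, 2), (1, 3), (2, 3)]
```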

A-Bot
07-02-2008, 12:44 PM
The mechanics of learning are thought to be well-known. A WIPO search on my name will reveal patents in at least 19 countries involving neural networks for optimizing video compression (a system that actually watches video and learns the best compression method for types of video, sub types of video, etc, in training mode, and is then able to apply that in a production mode - or continue learning as it's producing)


I've used similar methods when integrating with 3rd-party systems that have an unadvertised governor, sometimes with unusual properties. The trick is coming up with an algorithm that maximizes producer-consumer throughput over time. I think of these as meta-algorithms... programs that generate optimized algorithms.

LinuxGuy
07-02-2008, 01:15 PM
The problem with intelligent, learning, or smart (or whatever you want to call them) robots boils down to one thing: funding. Intelligent robots are complex to program, will require great amounts of computing power (and thus batteries), and need good sensory. Current approaches to AI are computationally expensive: we either need faster, cheaper and lower-power chips - or we need a new approach.
I disagree here. I don't think you need massive computing power to create a robot that learns as it roams around. It may not happen fast, depending on what processor you are using, but there is no reason it can't happen. I think a more realistic limitation is the amount of memory and storage your robot has access to.
A robot can learn by creating a map of its environment, regardless of how fast its processor is. As it roams, it learns, and therefore can know where obstacles are, what clearance it has to get in and out of places, where it can and can't go, etc.
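A minimal Python sketch of this map-as-learning idea; the grid size and bump events are invented for illustration, and a real robot would also need odometry to know which cell it is in:

```python
# A bare-bones occupancy grid: the robot marks cells where its bump
# sensor fired, and afterwards it "knows" which cells to avoid.
# No SLAM, no probabilities -- just the map-building idea itself.

class OccupancyGrid:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.obstacles = set()          # cells the robot has bumped into

    def record_bump(self, x, y):
        """Called whenever the bump sensor fires at cell (x, y)."""
        self.obstacles.add((x, y))

    def is_clear(self, x, y):
        """True if the cell is in bounds and not a known obstacle."""
        in_bounds = 0 <= x < self.width and 0 <= y < self.height
        return in_bounds and (x, y) not in self.obstacles

grid = OccupancyGrid(10, 10)
for bump in [(3, 4), (3, 5), (7, 2)]:   # simulated sensor events
    grid.record_bump(*bump)

print(grid.is_clear(3, 4))   # -> False (a learned obstacle)
print(grid.is_clear(0, 0))   # -> True
```

Even on a slow processor this works; as 8-Dale says, the constraint is memory for the map, not clock speed.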

8-Dale

LinuxGuy
07-02-2008, 01:29 PM
As for commercial service robots, they are hindered three things: power, power and power. Computational power is limited by the kind of brains they can carry, and very few customers could afford a lawnmower with a couple of Pentiums in it. Effective power depends on the types of motors and gearing, and again if you want to lift more than a few ounces you're talking about real money. Finally, limited battery power will kill you if you've solved the other two.
I don't agree here. Why would a smart lawnmower need two Pentiums? It's all a matter of scale. Of course, if you want to lift more, you need better servos or motors. Smaller robots won't need as much power from motors and servos, but larger robots definitely would. I really think you are exaggerating quite a bit here and really need to define some limits before you make some of your statements. Of course, the bigger your project is, the more expensive it will likely be, the more computational power it might need, and the larger the batteries it would need.

When I have completed the current rebuild of W.A.L.T.E.R., he will be able to aid me in several ways, since he has a new flexible arm. Of course, W.A.L.T.E.R. will never get the coverage that the flashy robotics projects get because he just doesn't have that sort of appeal. There is nothing that W.A.L.T.E.R. can be compared to because he is completely unique, so the public can't relate to him, what he does, and how he does it.


Hollywood (and therefore I assume the robot layman) typically gets all three of these exactly wrong. Robots in fiction are shown to be super-smart (although idiotically "logical" and emotionally retarded), super-strong relative to humans, and with nearly eternal power sources. Has any movie robot shown any of the limitations of real robots?
I think one movie that came really close is Bicentennial Man. There were obvious limitations shown at each stage of Andrew's development, which he had to overcome to get to the next stage. That's called growth and development. Other than that one, I don't think any movie robots have been particularly realistic, although this might be changing slowly.

8-Dale

darkback2
07-02-2008, 02:03 PM
I STRONGLY disagree with the points that have been made regarding computing horsepower.

Nearly every person on the planet has Cray-Crushing horsepower that is beyond underutilized, and nearly every person on the developed planet has massive amounts of always-on bandwidth.

We need to stop thinking "on the 'bot" and start thinking about how we get the data effectively off the 'bot and over to the real horsepower.


I can't help but think about "vampire bots". Sucking from unsecure WiFi and public WiFi, and feeding off of unattended AC outlets after dark...

How much processing and electrical power did you need again?

I'm not necessarily sure you have to move the computing power off of the robot. Computers have gotten smaller and smaller, and at least lower level things can be done right on board.

Previously there was a thread about robots that self dock and charge themselves. It would be funny to make a humanoid that had an extension cord and could plug itself in...I could just imagine a little humanoid sneaking over to an outlet. Sort of like me and my laptop in an airport.

DB

Adrenalynn
07-02-2008, 02:08 PM
Sure, but can you compare the 1GHz PicoITX to an 8-core Xeon with an equivalent of 25+ GHz? And the power required to run it? In another five years we'll have that power on the 'bot - why wait?

Adrenalynn
07-02-2008, 02:10 PM
programs that generate optimized algorithms.
That's really still just algorithmic development. Learning systems are continuously adaptable - just like animals.

metaform3d
07-02-2008, 02:39 PM
Previously there was a thread about robots that self dock and charge themselves. It would be funny to make a humanoid that had an extension cord and could plug itself in... I could just imagine a little humanoid sneaking over to an outlet. Sort of like me and my laptop in an airport.

That's a main gameplay element of Chibi-Robo (http://en.wikipedia.org/wiki/Chibi-Robo%21).

metaform3d
07-02-2008, 02:48 PM
I don't agree here. Why would a smart lawnmower need two Pentiums?

I have a robot lawnmower, and although I love it, it's nowhere near autonomous. I have to help it around tricky corners when it's edging, and it often gets stuck and calls for assistance.

It might be that with clever software a microcontroller would be powerful enough to mow my lawn without help every time. The problem is we don't know how to write that software. The kinds of solutions we know something about which can add smarts and learning to a bot require a lot of number crunching.

LinuxGuy
07-02-2008, 03:07 PM
It might be that with clever software a microcontroller would be powerful enough to mow my lawn without help every time. The problem is we don't know how to write that software. The kinds of solutions we know something about which can add smarts and learning to a bot require a lot of number crunching.
We should try not to be limited by what we already know. Such limitations will stifle creativity and prevent us from making progress. It's not about what we already know. It's about doing what we don't already know how to do.

8-Dale

Adrenalynn
07-02-2008, 03:20 PM
The Virginia Tech Urban Challenge finisher was at Maker Faire - did you stop by and chat with 'em? The vehicle processing is done entirely in two little Mini-ITX machines. They have a temporary P4 that goes in for UI and programming, but comes out for competition.

And there's really the key to it - the redundant GPS systems, the LIDAR, compass, etc. are all preprocessed internally with their embedded microcontrollers. The PC is just ingesting, digesting, and regurgitating the predigested data coming in from the sensor arrays.

Your lawn mower is a good example. The "tough spots" don't change. That solution doesn't need a realtime OS behind it. Network latency is fine. The tough choices can be passed up to the PC and it can wait for the PC to get back to it with a decision. It can toss visual feedback up to the PC long before it gets to the tough stuff and the PC has lots of time to apply its infinitely greater processing capacity to solving the problem.

MiniPCs and microcontrollers just can't compare with the tremendous 32-64bit multi-gigahertz processing that you probably have in your house. And if you're like me, you probably have a network that can distribute processing over a multitude of machines - all having lots of spare cycles.

Let's face it - my machine I'm on right now (Quad Core 2.8) isn't working very hard while I'm typing this reply. According to Task Manager, all four cores are running 0%, peaking to 1%, and the biggest proc-sucker right now is Task Manager itself. Kind of a Schrödinger's Cat there, but it's still illustrative. What would you rather run a learning system on: a Quad 2.8, a 1GHz PicoITX, or a handful of 8-bit RISC chips clocking <20MHz?

Frankly, I think the handiest thing about the PicoITX is its ability to act as a USB host and to run industry-standard software and OSes.

My junkbot vacuum will be running over serial telemetry and a network camera for SLAM and adaptive learning. Its onboard processing is an 8515 and a couple of ATtiny45s. The real brains are on the Xeons upstairs: a Beowulf cluster with a hybrid hidden Markov model for AI/NN.
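The division of labor in this post - dumb reflexes on the 'bot, heavy thinking upstairs, latency tolerated - can be sketched with a pair of queues standing in for the telemetry link. Everything here (the names, the one-rule "planner") is invented for illustration:

```python
# Off-bot processing sketch: the bot posts hard problems to a request
# queue and a (simulated) remote brain answers asynchronously. In real
# life the queues would be a telemetry link and the brain a networked PC.

import queue
import threading

requests = queue.Queue()
answers = queue.Queue()

def pc_brain():
    """Heavyweight planner running 'upstairs' -- here, one trivial rule."""
    while True:
        obstacle = requests.get()
        if obstacle is None:            # shutdown sentinel
            break
        answers.put((obstacle, "turn 90deg, forward 6in, turn 90deg"))

brain = threading.Thread(target=pc_brain)
brain.start()

requests.put("chair")                   # bot hits a tough spot, asks for help
# Blocking here for simplicity; a real bot would poll the answer queue
# while carrying on with its simple reactive behavior.
obstacle, plan = answers.get()
print(obstacle, "->", plan)

requests.put(None)                      # tell the brain to shut down
brain.join()
```

Since the tough spots don't change fast, the round-trip delay doesn't matter: the bot just keeps doing its simple thing until the answer arrives.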

darkback2
07-02-2008, 03:38 PM
You remind me of an idea from a Neal Stephenson book in which people have a "librarian", basically a web browser embedded on a chip in their head. They can access information when they run into a problem by asking the librarian. So... I need directions... Librarian. It seems to me that a robot could be no different. It could go about its business using whatever chip you have on it, and when the robot gets stuck, or needs a change of task, it could pass that off to a central processor.

Also... how smart does a robot need to be? If it's a lawnmower, maybe it gets stuck in some tricky parts, but it seems like you could make some simple algorithm by which it can memorize a human's solutions to various problems in "training" mode, and then it would be able to solve the problem next time. Over time it would "learn" to mow the lawn. It could even be programmed to try your solution when it encounters a similar problem.
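That training-mode idea can be sketched as a plain lookup table of demonstrated fixes. The situation keys and maneuvers below are invented; a real mower would key on sensor signatures, not tidy strings:

```python
# "Training mode" as rote memorization: the mower records the human's
# fix for each tricky spot and replays it the next time that spot comes up.

class TrainableMower:
    def __init__(self):
        self.playbook = {}              # situation -> recorded maneuver

    def train(self, situation, maneuver):
        """Operator demonstrates the fix once; the mower remembers it."""
        self.playbook[situation] = maneuver

    def handle(self, situation):
        """Replay a remembered fix, or ask for help on a new situation."""
        return self.playbook.get(situation, "call for assistance")

mower = TrainableMower()
mower.train("stuck at rose bed corner", "reverse 1ft, arc left")

print(mower.handle("stuck at rose bed corner"))  # -> reverse 1ft, arc left
print(mower.handle("wedged under deck"))         # -> call for assistance
```

The hard part, as the later replies note, is matching a *similar but not identical* situation to a remembered one - that's where exact lookup stops being enough.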

DB

Alex
07-02-2008, 04:00 PM
You bring up a good point DB. Seriously, why do we even have to "help" our domestic robots (roomba, robot lawnmower, etc.) when they get stuck? Why can't they help themselves? I mean sure, they are just a bucket of bump switches and/or proximity sensors. But, what if these robots could "look up" information when they get stuck and figure out how to free themselves?

Hear me out. The Roomba has an open SDK, right? If not, still, just imagine that it did. How hard would it be for a community of developers to develop escape sequences/motions that are stored on a central database server for the robot to check when it gets stuck? I don't know how many countless times I have found myself staring at my Roomba while it consistently tries, unsuccessfully, to get out of the area under my dining room table and next to the chair.

Adrenalynn
07-02-2008, 04:01 PM
Snow Crash is one of my all-time favorite books, btw...

Yes, that's a simple learning example. Really a memorization of a situation - kinda the way most of your students "learn" - rote learning ;)

Getting it to apply that learning in a similar but not identical situation is what I'd define as "real machine learning". That's "critical reasoning" and is likely not the land of the microcontroller.

DresnerRobotics
07-02-2008, 04:06 PM
Snow Crash is one of my all-time favorite books, btw...

Yes, that's a simple learning example. Really a memorization of a situation - kinda the way most of your students "learn" - rote learning ;)

Getting it to apply that learning in a similar but not identical situation is what I'd define as "real machine learning". That's "critical reasoning" and is likely not the land of the microcontroller.

God I loved that book. Was there ever a sequel or any continuation of the story? It seemed like he had just an awesome basis to write a series on.

Adrenalynn
07-02-2008, 04:08 PM
But, what if these robots could "look up" information when they get stuck and figure out how to free themselves?
[clip!]
I find myself staring at my Roomba whilst it consistently tries unsuccessfully to get out of the area under my dining room table and next to the chair.

Now take it another step further. You free your Roomba and pat it on the head: "You'll do better next time." Your Roomba sulks a bit and then sends what it learned in that situation (and the situation it was caught in) to a central distribution point. Other Roombas check that before complaining that they're stuck: "I was caught between two vertical objects spaced about two feet apart, and when I turned 180deg, I got stuck again. My handler said, 'It's a chair, dummy. Turn 90deg, go forward 6 inches, then turn 90deg again. Welcome to the land of chairs.'"

The next step is a neural network. At the simplest: 85% of the time I was in this situation, this worked for me; 10% of the time, that worked for me. So I try this first, and if I'm still stuck, try that.

That's the basis of weighted pattern recognition. The more correct samples you are exposed to, the more likely you are to recognize the correct (but untrained) solution in the wild. The same way a baby human learns.
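The 85%/10% bookkeeping above can be sketched directly: keep success counts per escape maneuver and try them in order of observed success rate. The numbers and maneuver names are invented to match the example:

```python
# Weighted escape selection: remember how often each maneuver worked
# and try the most successful one first next time.

class EscapeMemory:
    def __init__(self):
        self.stats = {}                 # maneuver -> [successes, attempts]

    def record(self, maneuver, worked):
        s = self.stats.setdefault(maneuver, [0, 0])
        s[0] += 1 if worked else 0
        s[1] += 1

    def ranked(self):
        """Maneuvers ordered by observed success rate, best first."""
        return sorted(self.stats,
                      key=lambda m: self.stats[m][0] / self.stats[m][1],
                      reverse=True)

mem = EscapeMemory()
for _ in range(17):
    mem.record("turn 90deg", True)      # worked 17 of 20 times (85%)
for _ in range(3):
    mem.record("turn 90deg", False)
for _ in range(2):
    mem.record("turn 180deg", True)     # worked 2 of 20 times (10%)
for _ in range(18):
    mem.record("turn 180deg", False)

print(mem.ranked())  # -> ['turn 90deg', 'turn 180deg']
```

A neural network generalizes this from exact situation counts to weighted features, which is what lets it handle the untrained-but-similar case.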

Welcome to SkyNet/The Matrix...

metaform3d
07-02-2008, 04:24 PM
We should try not to be limited by what we already know. Such limitations will stifle creativity and prevent us from making progress. It's not about what we already know. It's about doing what we don't already know how to do.

Absolutely. That's part of what makes robotics so cool -- we've barely begun to explore the space of possible solutions.

All I was saying was that the solutions that have been discovered so far for learning in real-world situations require significant computing power to implement. Although we should definitely strive to find simpler approaches, there's no guarantee that any exist.

metaform3d
07-02-2008, 04:39 PM
MiniPCs and microcontrollers just can't compare with the tremendous 32-64bit multi-gigahertz processing that you probably have in your house. And if you're like me, you probably have a network that can distribute processing over a multitude of machines - all having lots of spare cycles.

Wow, I hate to be the pessimist here, but you guys all seem to have a very different experience of computer power than I do. Maybe it's my day job. In CGI we can bring any computer to its knees just by cranking up the image quality.

I was playing around with RoboRealm recently, trying to set up halfway decent object tracking in a noisy environment. By the time I had all the filters I needed in place I was getting about 1 frame per second. That was on a 2.2G dual core. Forget about playing soccer at 1 FPS.

This picture (http://www.transhumanist.com/volume1/power_075.jpg) is from Ray Kurzweil, so take it with truckloads of salt. If his back-of-the-envelope math is right (and if anything he's probably overestimating CPU power), $1000 of computer today could compare with the nervous system of an arthropod. A properly programmed spider could probably mow my lawn, but the brain in my lawnmower is probably closer to 1980s-level tech just because of the low cost. I'm not sure a nematode would do so well.

Adrenalynn
07-02-2008, 04:46 PM
I really don't mean to offend, but it's awfully easy to overcomplicate your processing in RoboRealm. I had my laser-chaser crushing my quad core until I went back and reoptimized. Now it's running at full framerate on a *pink* Eee PC...

Mowing your lawn is a tougher challenge than navigating a 3000 lb vehicle through city streets autonomously? As I mentioned, Virginia Tech was successful with less than 2GHz of crappy C3 processing power doing the "heavy lifting" [with pre-digested data from the sensor arrays]. No neural networking there that I'm aware of, either.