Trossen Robotics System versus the competition?



Nammo
11-15-2007, 11:42 PM
Hi guys,

Today's video blog post on the Trossen Robotics System was fascinating -- well worth watching end-to-end. Thank you for putting this online!

I strongly agree that the hobby community needs _basic_ standards for interoperability, not middleware designed to lock us in to particular vendors and/or languages. (To continue Matt's analogy to the PC era, CP/M (and later MS-DOS) provided just enough to keep pace with hardware innovation, while forcing few design decisions that would frustrate software innovation. Contemporary vendor or language-specific OSes were more comprehensive and thus easier to use, but did not succeed. UCSD p-System anyone?)

Anyway, I was disappointed during the Q&A that nobody asked about the Player Project (also known as Player/Stage). It is one of my favorite efforts similar to TRS. Like TRS, it is intended to be OS and language neutral, and tries to provide a kind of 'device API' that allows software to be easily ported from robot to robot.

Judging from the documentation, Player adheres more to the Unix philosophy of character messages exchanged via pipes, whereas TRS adheres more to the 'web' philosophy of XML files and object models.

Besides cosmetics, what are the differences between the two? Does TRS handle things that Player fundamentally doesn't? Will TRS someday support Player as a back-end for its API?

The OMG is also trying to define a robot object model. From my reading of their spec, they seem very focused on shoehorning robotics concepts into OMG's existing taxonomy (think CORBA and UML) and not as focused as Player and TRS on real working robots. There are some interesting ideas there, and a ton of conceptual overhead. I don't imagine it spreading to hobby robots very fast.

What other competition is there? What are some reasons that TRS is the one we should be throwing our energy into?

- NDM

Alex
11-22-2007, 08:37 PM
I'll have to look into the Player Project a bit more. I heard about it a while back, but don't have much info on it. Could you give us a rundown on what you know about it (also a few links)? We're also really interested in URBI from GOSTAI. They have built a pretty impressive system:

http://www.gostai.com/
http://www.urbiforge.com/

Droid Works
11-23-2007, 01:14 PM
I can't wait to see the beta of it. We are looking at the future of robotics, and I for one can't wait to get a copy of it.

Nammo
11-23-2007, 04:04 PM
I'll try to summarize what I know, but I'm only just getting started with Player so I'd love to have a real expert come in and correct some of this:

Player works at a lower level than URBI. Like URBI, Player provides standard interfaces to robots. But, it does not make any assumptions about the behavior control software. URBI can drive Player-based robots. Player has nice simulation features, so the URBI/Player combination is sometimes used to test URBI behaviors in simulation.

I like Player because it is minimalistic, not too constraining, but still adds a lot of value. It is popular in the research community, so the base of Player-compatible robots and free software is growing. Finally, it's pretty well documented. The RETF put the Player protocol forward as a standard. Although the RETF has run into organizational problems, Player is really striving to be 'standards quality'.

Player enables software reuse on different robot platforms via protocols. For example, Player defines standard protocols for sending motor commands and retrieving sensor data. Typically, this protocol is mapped to an internet/TCP port, a local pipe, or a serial/Bluetooth/USB port. Recently, Player has started defining protocols around common services, such as localization and path planning.
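
To make that concrete, here is a rough sketch of what a client can look like using the C++ bindings (libplayerc++). I'm assuming a Player server is already running on localhost:6665 and exposes a position2d device at index 0, so treat it as illustrative rather than tested:

#include <libplayerc++/playerc++.h>
#include <iostream>

int main()
{
    // Connect to the robot's Player port (host, port).
    PlayerCc::PlayerClient robot("localhost", 6665);

    // position2d is Player's standard drive/odometry interface.
    PlayerCc::Position2dProxy position(&robot, 0);

    for (int i = 0; i < 50; ++i)
    {
        robot.Read();                    // pull the latest data from the server
        std::cout << "x=" << position.GetXPos()
                  << " y=" << position.GetYPos() << std::endl;
        position.SetSpeed(0.2, 0.0);     // 0.2 m/s forward, no rotation
    }
    return 0;
}

Notice that nothing in there knows which motor controller or encoders the robot actually has; all of that stays behind the server, and the client only speaks the Player protocol.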

That's a key difference between RTS and Player: RTS is an API or object model, while Player is a protocol (like HTTP). Of course, APIs for various languages are available to speak the protocol.

Player's emphasis on protocol allows a robot system to be naturally spread across multiple computers, OSes, and programming languages. For example: A robot may run obstacle avoidance behavior, or a 'survival kit', on a Linux-based robot controller, while running mapping software on a more powerful Windows server via WiFi.

Both the survival kit and the mapping software can be reusable software components. They don't know about specific motor controllers or cameras. They don't need to be written in a specific language or OS. They don't even know where on the network they are running, as long as they can connect to the robot's Player Port.

The same robot can be simulated in a Player-compatible simulator. There are two big ones: Gazebo (a 3D simulator with physics) and Stage (a simpler 2D simulator). The robot's software stack connects to the simulator's Player port, and the simulator replicates the robot's motors and sensors. Gazebo can simulate many kinds of motors and sensors, including gyroscopes, pressure sensors, and stereo cameras.

I'm still trying to fall in love with URBI. I like some of the simple URBI code I've seen, but I'd really like to see some examples of how URBI scales up to complex behavioral systems. Does embracing parallelism really make complex behaviors more maintainable? I realize this is still an area of active research. One reason I like Player is that it doesn't try to answer those questions -- you can write your behavior control in BASIC if that's your thing.

Fluffy Intro: http://electronicdesign.com/Articles/Index.cfm?ArticleID=17531
Serious Intro: http://playerstage.sourceforge.net/doc/Player-2.0.0/player/
Who's Using Player: http://playerstage.sourceforge.net/wiki/PlayerUsers

A number of robot kits support Player as the 'Linux alternative' to Microsoft Robotics Studio, such as the ER1 and 914 PC-BOT.

Matt
11-27-2007, 03:35 PM
Hey Nammo :)

Many apologies for the REALLY long wait for my reply, I had the show, then more travel, then the holidays, then catch up!

Thanks for the post and opening up this discussion. These are exactly the kind of debates that need to bounce around the community in order to figure out this whole PC based robotics thing :)

"That's a key difference between RTS and Player: RTS is an API or object model, while Player is a protocol (like HTTP). Of course, APIs for various languages are available to speak the protocol."

(I'm assuming you meant TRS) This sounds pretty on the button. It sounds to me like we can make a distinction between the following layers:

1) Protocol between external hardware boards
2) Protocol on/across computer ports (external hardware connecting)
3) APIs / wrappers / drivers / etc. which expose/convert the hardware signals to various languages or environments
4) A Robot Object Model which can be used as a "home base" in your environment of choice.
5) The environment itself where programs are developed and/or executed

It sounds like you are saying Player is concerned more with levels 1 & 2. At the heart of the TRS is the Robot Object Model (4), which provides the most advantages when developers have good APIs, wrappers and drivers to work against. The TRS promotes the use of an object model to abstract layer 3 from layer 5. That is what is missing in many robotic environments. The environment jumps right over an object model and talks directly to the hardware APIs. This ties the code too closely to the hardware. Standardizing the protocols of the hardware isn't enough and doesn't solve the problem.

I believe it's too difficult to attempt to create standard protocols for talking to hardware. Let the hardware manufacturers figure out how to get the signals from their boards to the API on their own. Trying to corral all those competitors out there is like herding cats. Besides, someone making hardware for NASA will never line up with someone making hardware for a hobbyist. This is why in the talk I point out that the more important areas are the protocol for the daughter boards being able to talk to each other and also a robot object model (ROM). Different hardware will need different APIs anyhow, different functions, different properties, etc. So what's the point in standardizing communication protocols when the API above them won't be the same?

What happens between the board connecting to the computer and the API is really of no concern to a robot developer. The companies building the boards that connect should simply take advantage of the many standard communication protocols already available to computer hardware peripheral manufacturers. This is one of the wonderful benefits of adopting the computer as a standard platform: you get all the standards already there and can stop worrying about creating new ones in that area.

Now, when talking about the controllers and sensors, companies can benefit from releasing the protocols for boards to be able to talk to each other. I support the hell out of this area opening up to a standard :) A student should be able to make a sensor board that uses a standard I2C protocol, for example, and plug it right into various bridgeware boards available on the market. Think of it like different lens makers being able to make new lenses that click right onto the standard fittings of camera bodies, or rim makers making new rims that fit onto standard lug patterns.

Okay, whew, I'll take a breath... Having a standard robot object model helps further abstract proper layers and it is far more feasible. Defining a motor, joint, rod, or sensor is far easier and can cover 95% of users with the same set of properties. Only the really high-end players would probably need to break out of such a model, someone like NASA building Mars rovers for instance. The other 95% of players out there can get by just fine with half a dozen basic properties defining their DC motor.
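
Just as a rough illustration (this isn't the actual TRS definition, just the flavor of the idea), a basic DC motor entry in a ROM might look something like this:

#include <iostream>
#include <string>

// Illustrative only: the sort of half-dozen properties a ROM might use
// to describe a DC motor.
struct DCMotor
{
    std::string name;          // e.g. "left_drive"
    double      targetSpeed;   // commanded speed, -1.0 .. 1.0
    double      actualSpeed;   // last speed reported by the controller
    double      maxRPM;        // rated top speed
    bool        reversed;      // flip direction for mirrored mountings
    bool        enabled;       // soft on/off switch
};

int main()
{
    DCMotor left{"left_drive", 0.5, 0.0, 200.0, false, true};
    std::cout << left.name << " commanded at " << left.targetSpeed << std::endl;
    return 0;
}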

So here's the big message: when you have a standard robot object model like this, all the people building hardware with their wrappers and APIs can create the code necessary to translate state changes in the ROM into messages that need to be sent out to the hardware, or vice versa for sensors. Likewise, having a common ROM allows people to work in all kinds of different programming environments without worrying about the plumbing beyond the ROM. It acts as a neutral zone standing between the two worlds, allowing everyone to play nice together.

If you are a software developer, then you can develop a piece of software that will work in many different environments because you know how to talk to the ROM in each case; it's always the same. Voila! Transferability! Let the world rejoice :D

kdwyer
11-27-2007, 04:12 PM
Just a thought on 'standards'...
Standards work best with mature technologies. 'Computing' is still somewhat adolescent as technologies go, and robotics isn't even a toddler yet. I'm certain the last thing any of us would want is to get locked in to a 'solution', and miss out on some exciting new innovation that could revolutionize our way of thinking about the 'problem'.
That said, there is assuredly room to grow within flexible standards. The mechanical/computing interface does need some methodology applied, but we should be careful not to be too rigid about this. Innovation is our middle name (as hobbyists) and we need every shortcut we can think of to advance the science (art?) of robotics.
I only do what my robot tells me to do.

Matt
11-28-2007, 11:45 AM
I would have to disagree and say that I think computer programming is a very mature technology. 40+ years and now dirt cheap and available to the masses. Maybe I'm misreading your meaning.

The idea behind having a ROM is to not get locked in. That's the whole point. It's so that you can swap out software and hardware as they evolve and also choose your own environments & platforms. Can I ask where you see things as being locked in? I'm curious where you are seeing that.

cheers,

kdwyer
11-28-2007, 01:12 PM
Matt, I was speaking very generally. And I still say programming is not 'mature', though it is cheap and available. IMHO mature technologies are easily understood (at a basic level), virtually bug-free (look at plumbing, house-type electric wiring, basic auto tech), and pretty much nailed down and not changing very much.
By these benchmarks, software is buggy, ever-evolving, and increasingly difficult to master. Ditto computer hardware. I don't think these are BAD things, just an indication that rapid change/improvement is an inverse index of maturity.
It took plumbing, carpentry, masonry etc. quite a while to get to the point where they are today, and they are dramatically easier to master than computers/programming. (Note: Master) Lots of very smart people over the ages slowly improved usage. Now, there hasn't been much truly new in these areas in a loooong while. By contrast, computing is still changing almost monthly. I don't expect last year's chip designers to sit back and retire, I expect an ever-increasing pace of development. IMO this defines an immature technology.
BTW, wondrous things can be accomplished with even immature technologies. The pyramids, Stonehenge etc. are certainly not the be-all and end-all of construction engineering, but they are still marvels.
Let us not forget, when Microsoft began they saw no need to even consider code that dealt with more than what they thought was a ridiculously large amount of memory... 640k. And it wasn't that long ago.

Besides, that's what I love about standards, there's so many to choose from. (Tongue firmly in cheek.)

Nammo
11-28-2007, 02:13 PM
Matt, thank you for the detailed information! I'm getting a feel for why TRS is more like URBI and MSRS than like Player. However, I think Player might be evolving into a decent open-source competitor in this space.

I like your numbering system. But I wonder if, or where, high-level components fit into that system. For example, many mobile robots require basic path planning and localization. Is this just another layer in #3, or is it '#6'?

My understanding is that Player pretty much addresses #3 only. In Player's current implementations, sensors and motors still have their own funky hardware interfaces (#1/#2). The robot controller (usually an embedded PC of some kind), runs drivers to translate hardware interfaces to #3 (the Player protocol).

For example, in the 914 PC-BOT, the motor controller board (M3) is hooked up to the PC motherboard via USB (that's #2 -- right?). The USB protocol is proprietary, but the PC motherboard runs a Player Server that exposes a TCP/IP port -- connecting to that port gives you standardized access to the robot.

That makes the job of #4 easier -- the language-specific 'object models' (including URBI) can just talk to the Player Server without having to know anything about the hardware.

Player used to stop there. That's why some people use URBI and Player together. But, Player is growing new standard interfaces for localization and mapping, path planning, and other basic stuff robots need to do. Player seems to be wedging these things into layer #3 -- perhaps because of Player's existing architecture.

I am excited about Player because there is a lot of activity around it and more robots supporting it these days, but I share kdwyer's concern about too much standardization too soon. I love minimalism and I think overreaching standards like the OMG effort are too much for community development.

With that said, I am hoping for a day where I can reuse big chunks of community/open source code to make my robot smarter. For me personally, that's what a robot API is all about. So naturally I'm going to be attracted to the APIs that look 'busiest'.

Matt
11-28-2007, 03:39 PM
Matt, I was speaking very generally. And I still say programming is not 'mature', though it is cheap and available. IMHO mature technologies are easily understood (at a basic level), virtually bug-free (look at plumbing, house-type electric wiring, basic auto tech), and pretty much nailed down and not changing very much.

Okay, I can hear that argument I guess, although I don't know if comparing computers to plumbing is fair. We could look at cars and have the same debate: they are a mature technology to some and to others still buggy and evolving too fast. I guess it comes down to subjective opinion, but you've given me a new way to look at it :robotsurprised:

Matt
11-28-2007, 04:12 PM
Matt, thank you for the detailed information! I'm getting a feel for why TRS is more like URBI and MSRS than like Player. However, I think Player might be evolving into a decent open-source competitor in this space.

TRS can be defined this way: it allows people to turn any language into an URBI or MSRS, or to use the same philosophy and ROM within URBI/MSRS. The TRS isn't really trying to compete with either of them. We think of the ROM and the TRS more as a methodology (with a ROM standard) that would benefit the community as a whole.

I've often asked why we need a robotics specific environment at all :) What's wrong with regular languages? Of course, I'm a rebellious newcomer to the field generally speaking and I used to be a programmer. So I look at programming robots very differently than most people. I used to build robots in VB6 using Phidgets. Quite different than coming from a world of embedded microcontrollers! I say a language IS an environment. All the parts you need like vision analysis, rendering engines, and algorithm libraries should be 3rd party tools you buy and use. It's a shift in thinking.


I like your numbering system. But I wonder if, or where, high-level components fit into that system. For example, many mobile robots require basic path planning and localization. Is this just another layer in #3, or is it '#6'?

I suppose you would call those higher layers, or you could just say they are tools being used in layer #5.


For example, in the 914 PC-BOT, the motor controller board (M3) is hooked up to the PC motherboard via USB (that's #2 -- right?). The USB protocol is proprietary, but the PC motherboard runs a Player Server that exposes a TCP/IP port -- connecting to that port gives you standardized access to the robot.

Sounds correct to me. Player is helping out at the #3 layer which delivers the communications to the environment or language being used.


That makes the job of #4 easier -- the language-specific 'object models' (including URBI) can just talk to the Player Server without having to know anything about the hardware.

I left out talk of what could be considered another layer earlier because I didn't want to confuse things too badly. There needs to be some management code between the APIs and the ROM. That doesn't just magically happen and it's not really the job of the APIs or the object model. We call it the controller library and the sensor library in our model. You can read all about them here: http://www.trossenrobotics.com/tutorials/TRS4.aspx

How this is best done is probably still up for debate. To start, I imagine that these would just be public chunks of code that people give away and trade around. If the ROM is established well enough, then the manufacturers of hardware could support their own stuff as well. However this gets done, my message is always, "Don't leave out the ROM in your system!" :)
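
To make that a bit more concrete, here's a rough sketch of what one of those chunks might look like. The vendor API below is a made-up stand-in (no real board works exactly like this); the point is just the job of translating ROM state into hardware messages:

#include <string>

// Hypothetical vendor API, standing in for whatever a real board's
// driver/wrapper exposes.
struct VendorMotorBoard
{
    void setMotorSpeed(int channel, double speed)
    {
        // A real library would send a command over USB/serial here.
    }
};

// ROM-level description of a motor (same flavor as the DCMotor sketch above).
struct DCMotor
{
    std::string name;
    double      targetSpeed;   // -1.0 .. 1.0
    bool        enabled;
};

// The "controller library" piece: watch the ROM and push changes to the hardware.
class MotorController
{
public:
    MotorController(VendorMotorBoard& board, int channel)
        : board_(board), channel_(channel) {}

    void sync(const DCMotor& motor)
    {
        double speed = motor.enabled ? motor.targetSpeed : 0.0;
        board_.setMotorSpeed(channel_, speed);   // ROM state becomes a hardware message
    }

private:
    VendorMotorBoard& board_;
    int               channel_;
};

int main()
{
    VendorMotorBoard board;
    DCMotor left{"left_drive", 0.3, true};

    MotorController controller(board, 0);
    controller.sync(left);   // push the ROM's current state out to the hardware
    return 0;
}

Swap the vendor type out for a different board and the ROM side doesn't have to change; that separation is the whole point.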


Player used to stop there. That's why some people use URBI and Player together. But, Player is growing new standard interfaces for localization and mapping, path planning, and other basic stuff robots need to do. Player seems to be wedging these things into layer #3 -- perhaps because of Player's existing architecture.

And this is where I staunchly argue that it's the wrong way to go: a box is being created that locks you in once you've started tying layers together like that. Mapping and path planning have no business intermingling with the hardware or even the hardware APIs directly. That's a totally monolithic architecture; how can anything be changed or swapped out independently?

Keep in mind I'm not trash talking Player; it might be a great environment and technology. People use such things because they are there and there is a community, which is fine. People use what is available, and currently it's a matter of choosing a more complete environment from top to bottom. Those tend to be more closed in the current evolutionary stage of robotics. This makes sense, since limited resources mean tighter integration of technologies using a smaller pool of code objects and hardware. It's a natural occurrence. The philosophy of the TRS and a standard ROM would be the next evolution IMHO. It's not immediately intuitive until you think about it on a much grander scale, including all the hardware, languages, environments, uses and users out there. It's only then that the extra effort of building in extra layers and more code makes sense, because the masses benefit from it and from the ability to transfer knowledge and advancements.


With that said, I am hoping for a day where I can reuse big chunks of community/open source code to make my robot smarter.

Me too my friend. Me too! :)