A few questions?!?



El Espada
01-13-2008, 11:42 PM
Ok! I have been doing some research for my bot project, and it has raised some questions!
1. Can you use more than one phidget usb servo controller on a system?
2. Stereophonic vision allows detection of depth, but how accurate is it? Does it look at the colors in one camera, then try to match the image in the other camera, and then calculate the distance to the object based on the difference in how the pixels align?
3. My bot in the end will be for outside use mainly, and I was curious about the survivability of the sonar sensors. Also, will stereophonic vision take care of the issue of object avoidance and mapping?

If stereophonic vision will do this, then it should greatly decrease the need for extra sensors, hence making this a simpler project. Any feedback or experience will be greatly appreciated!!

Alex
01-14-2008, 01:24 PM
I can't answer your vision questions, but:


Can you use more than one phidget usb servo controller on a system?

You can use as many Phidget USB servo controllers as you'd like. :) When you create the object and then open it, you just need to specify the serial number as a parameter in the Open() method (i.e. servo.Open(XXXX);).

To get the serial number of your Phidget (assuming you have already downloaded the Phidget MSI (http://www.phidgets.com/downloads.php?os_id=1)):

With no Phidgets connected to your system, double-click the icon labeled "Ph" in your system tray next to your clock in the lower right corner (if it's not there, go to C:\Program Files\Phidgets\ and run Phidget21Manager). A dialog box should pop up. With the dialog box up, plug a single Phidget device into your machine, record the serial number that shows up, and repeat. :)
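
If you're curious what that looks like in code, here's a minimal C# sketch of driving two controllers side by side. The serial numbers (12345/67890) are placeholders for whatever the Phidget Manager shows you, and the exact method names and casing may differ between versions of the Phidget .NET library, so double-check the docs:

// Minimal sketch: two Phidget servo controllers on one system, each
// opened by its serial number so the objects bind to the right board.
using Phidgets;

class TwoControllers
{
    static void Main()
    {
        Servo left = new Servo();
        Servo right = new Servo();

        // Passing a serial number ties each object to one physical board.
        left.open(12345);
        right.open(67890);

        // Block until each board is actually attached (5 second timeout).
        left.waitForAttachment(5000);
        right.waitForAttachment(5000);

        // Command servo 0 on each board independently.
        left.servos[0].Position = 90.0;
        right.servos[0].Position = 45.0;

        left.close();
        right.close();
    }
}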

El Espada
01-14-2008, 01:37 PM
Cool, that's good to know. Reading this book, I have seen some similarities between PHP and C#. I haven't gotten too far yet, but some of it is starting to make sense. Plus, the more I learn, the more my project changes, so hopefully when this is all said and done I should have something pretty cool. Thanks, Alex, for the info on the Phidget. That will be the first piece of equipment I buy once I have learned enough C# to get started. BTW, I do have a BS2 and a Board of Education plus other things for that kit, so if you think of anything that may help teach me using that, let me know. Thanks. If anyone has any input as far as the vision goes, let me know. As always, any help is deeply appreciated. :happy:

El Espada
01-16-2008, 11:30 AM
I just realized I have been calling the vision the wrong thing. Anyone know about stereoscopic vision?

DresnerRobotics
01-16-2008, 12:04 PM
I believe Jon Hylands is doing some work with stereoscopic vision on his MicroRaptor project, so you might want to check out his work.

JonHylands
01-16-2008, 12:37 PM
I'm not actually doing anything with vision at the moment.

Here's some links from my research on the topic:

http://code.google.com/p/sentience/
http://www.ces.clemson.edu/~stb/
http://www.alexandria.nu/ai/machine_vision/introduction/

- Jon

Alex
01-16-2008, 01:12 PM
Thanks for the links, Jon! I was always interested in learning about stereoscopic vision as well, but I didn't even know where to start... till now! If you have any others, by all means, update this thread! We're also looking into adding a links directory in the TR Community in the next week or two, so all of us can share these links with each other in one centralized area. :D

motters
01-16-2008, 01:51 PM
2. Stereophonic vision allows detection of depth, but how accurate is it? Does it look at the colors in one camera, then try to match the image in the other camera, and then calculate the distance to the object based on the difference in how the pixels align?

For cameras spaced 10cm apart (similar to the spacing of human eyes), the effective range is approximately three or four metres. As the range increases, so does the position uncertainty of the feature observed. If the robot is moving through space, updating a probabilistic grid map as it goes, it is possible to achieve longer effective ranges and much higher spatial resolutions through repeated overlapping observations.

For more information about the nature of positional uncertainty within stereoscopic vision, see this page: http://code.google.com/p/sentience/wiki/StereoUncertainty
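
To make the fall-off concrete, here's a back-of-the-envelope C# sketch using the standard pinhole relation for parallel cameras (depth = focal length x baseline / disparity). The 10cm baseline matches the figure above; the 500-pixel focal length is an assumed value typical of a VGA webcam:

using System;

class StereoRange
{
    const double Baseline = 0.10;      // camera separation in metres, as above
    const double FocalPixels = 500.0;  // focal length in pixels (assumed)

    // Depth error caused by a one-pixel disparity error grows with the
    // square of the depth: dZ = Z^2 / (f * B)
    static double DepthErrorPerPixel(double depth)
    {
        return depth * depth / (FocalPixels * Baseline);
    }

    static void Main()
    {
        double[] depths = { 1.0, 2.0, 4.0, 8.0 };
        foreach (double z in depths)
        {
            // Pinhole relation: disparity = f * B / depth
            double disparity = FocalPixels * Baseline / z;
            Console.WriteLine("depth {0}m: disparity {1:F1}px, +/-{2:F2}m per pixel of match error",
                              z, disparity, DepthErrorPerPixel(z));
        }
        // Prints +/-0.02m at 1m but +/-1.28m at 8m: past a few metres, one
        // pixel of matching error swamps the measurement, which is why the
        // practical range with a 10cm baseline is roughly three or four metres.
    }
}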

Colour can be used as one indicator for matching a pair of images, but there are also many others. Stereo correspondence algorithms try to detect invariant features within both images and then match them together. It's a classic computer science problem which many researchers have attempted to solve, with greater or lesser degrees of success. I've been hacking on this problem for long enough now to have some idea of what's involved, and you can find my correspondence algorithm here: http://code.google.com/p/sentience/wiki/StereoCorrespondence. The dense version takes about 50 milliseconds to compute on a modern PC, without any hardware or GPU acceleration. I don't use colour for the correspondence algorithm, since the extra information just slows things down without significantly increasing the quality of the depth maps.
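
For a feel of what a correspondence algorithm actually does, here's a toy sum-of-absolute-differences (SAD) block matcher in C#. To be clear, this is the classroom baseline rather than the Sentience algorithm linked above; it assumes rectified, parallel cameras (so every match lies on the same scanline) and 8-bit grayscale images:

using System;

static class BlockMatcher
{
    // Toy dense stereo correspondence by SAD block matching over byte[,] images.
    public static int[,] DisparityMap(byte[,] left, byte[,] right,
                                      int maxDisparity, int window)
    {
        int h = left.GetLength(0);
        int w = left.GetLength(1);
        int[,] disparity = new int[h, w];

        for (int y = window; y < h - window; y++)
        {
            for (int x = window + maxDisparity; x < w - window; x++)
            {
                int bestDisp = 0;
                int bestCost = int.MaxValue;

                // Slide a candidate window along the same row of the
                // right image and keep the best-matching offset.
                for (int d = 0; d < maxDisparity; d++)
                {
                    int cost = 0;
                    for (int dy = -window; dy <= window; dy++)
                        for (int dx = -window; dx <= window; dx++)
                            cost += Math.Abs(left[y + dy, x + dx] -
                                             right[y + dy, x + dx - d]);

                    if (cost < bestCost)
                    {
                        bestCost = cost;
                        bestDisp = d;
                    }
                }
                disparity[y, x] = bestDisp; // larger disparity = closer object
            }
        }
        return disparity;
    }
}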



3. My bot in the end will be for outside use mainly, and I was curious about the survivability of the sonar sensors. Also, will stereophonic vision take care of the issue of object avoidance and mapping?

If stereophonic vision will do this, then it should greatly decrease the need for extra sensors, hence making this a simpler project. Any feedback or experience will be greatly appreciated!!

Yes, stereoscopic vision can be used for both object avoidance and mapping the environment. However, mapping involves complex simultaneous localisation and mapping (SLAM) algorithms, which take a fair amount of processing power but are not beyond the capabilities of any recent PC hardware. To get some idea of what mapping with stereo cameras looks like, there's an animation here: http://code.google.com/p/sentience/wiki/OccupancyGridCreation. This only shows an overhead 2D view of what is actually a 3D map. The other animation shows what is effectively a tree search algorithm applied to the problem of spatial mapping using distributed particle filters.
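
For a sense of the grid-map bookkeeping behind those animations, here's a minimal log-odds occupancy grid update in C#. This is the standard textbook scheme, not the Sentience particle-filter implementation, and the sensor probabilities below are illustrative assumptions:

using System;

class OccupancyGrid
{
    // One log-odds value per cell; 0 means "unknown". Working in log-odds
    // turns the Bayesian update into a simple addition per observation.
    private readonly double[,] logOdds;

    // Illustrative sensor model: an "occupied" reading is right 70% of the time.
    private static readonly double HitUpdate = Math.Log(0.7 / 0.3);
    private static readonly double MissUpdate = Math.Log(0.3 / 0.7);

    public OccupancyGrid(int width, int height)
    {
        logOdds = new double[width, height];
    }

    // Fold one observation into a cell. Repeated overlapping observations
    // accumulate, which is how the map sharpens as the robot moves.
    public void Observe(int cellX, int cellY, bool occupied)
    {
        logOdds[cellX, cellY] += occupied ? HitUpdate : MissUpdate;
    }

    // Convert back to a probability when the planner needs one.
    public double Probability(int cellX, int cellY)
    {
        return 1.0 - 1.0 / (1.0 + Math.Exp(logOdds[cellX, cellY]));
    }
}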

I'm currently developing a new mobile robot which will use the Sentience algorithms, and I hope to have some far more impressive 3D animations later in the year showing the true potential of stereo vision.

JonHylands
01-16-2008, 02:03 PM
Thanks for the excellent reply, motters. My personal research leans a little to one side of classic stereoscopic vision - see my vision (http://www.bioloid.info/tiki/tiki-index.php?page=MicroRaptor+Vision) page for details...

Basically I'm more interested in using stereo vision to look at specific objects that are close to the robot. Objects that are farther away than "right in front of you" aren't as important, especially if you aren't trying to build 3D models of your environment. I want to experiment with using visual landmarking and vector-based hierarchical connection maps for navigation (http://www.bioloid.info/tiki/tiki-index.php?page=MicroRaptor+Navigation).

- Jon

motters
01-16-2008, 02:45 PM
I also started off with stereo vision looking at objects which were close to the robot - typically less than one metre away. However, over the last few years I've come to realise that navigation and object recognition are fundamentally the same thing, just on different scales.

I also started off having the cameras able to verge together, but I found that this adds a lot of additional uncertainty which is a nightmare to model unless you're using very accurate and repeatable servoing. I've since gone back to a more conventional design, where the cameras are fixed in parallel. They're *far* easier to calibrate that way, and you don't get any nasties such as slop in the verge servo gearboxes messing up the stereo correspondences.

JonHylands
01-16-2008, 02:50 PM
I'm not planning on building a "model" of the area the robot is in, so I don't need perfect calibration. The convergence of the cameras is simply to get a rough estimate of the distance to the object, nothing more.

I agree that navigation and object recognition are fundamentally the same thing, since visual landmarking (one of the main techniques I plan to use for navigation) is all about object recognition.

- Jon

El Espada
01-18-2008, 04:03 PM
Cool, that is some very good info. As soon as I get closer to tackling this issue, I will be getting ahold of you for some further questions. Thanks for the awesome reply!!

El Espada
01-21-2008, 09:21 PM
Okay, so answer this. I have been reading through the Sentience page http://code.google.com/p/sentience/ and I have a few more questions. I was thinking about using Pico-ITX or Nano-ITX PCs as the heart of my system. From the site, stereoscopic vision for object avoidance, mapping, or tracking is very CPU-intensive. So in turn one would need a more powerful platform, hence the need for more power. That would essentially mean a larger chassis. So I guess my question is: would object avoidance or even object tracking be possible on these small, low-power options? BTW, is there such a thing as a weatherproof ping sensor?