
Thread: A few questions?!?

  1. A few questions?!?

    Ok! I have been doing some research for my bot project, and it has raised some questions!
    1. Can you use more than one Phidget USB servo controller on a system?
    2. Stereophonic vision allows detection of depth, but how accurate is it? Does it look at the colors in one camera, then try to match the image in the other camera and calculate the distance to the object based on the difference in how the pixels align?
    3. My bot will mainly be for outdoor use, and I was curious about the survivability of the sonar sensors. Also, will stereophonic vision take care of the issue of object avoidance and mapping?

    If stereophonic vision will do this, then it should greatly decrease the need for extra sensors, making this a simpler project. Any feedback or experience will be greatly appreciated!!

  2. Re: A few questions?!?

    I can't answer your vision questions, but:

    Can you use more than one Phidget USB servo controller on a system?
    You can use as many Phidget USB servo controllers as you'd like. When you create the object and then open it, you just need to specify the serial number as a parameter in the Open() method (i.e. servo.Open(XXXX)).

    To get the serial number of your Phidget (assuming you have already downloaded the Phidget MSI):

    With no Phidgets connected to your system, double-click on the icon labeled "Ph" in your system tray next to your clock in the lower right corner (if it's not there, go to C:\Program Files\Phidgets\ and run Phidget21Manager). A dialog box should pop up. With the dialog box up, plug a single Phidget device into your machine, record the serial number that shows up, and repeat for each controller.
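
    For anyone following along in C#, here's a rough sketch of what opening two controllers by serial number might look like. The class and member names (Servo, Open, WaitForAttachment, Servos, Position) and the serial numbers 12345 and 67890 are written from the description above rather than copied from working code, so double-check the Phidget21 .NET examples for the exact names and casing:

        using Phidgets;  // Phidget21 .NET library (assumed namespace)

        class MultiServoExample
        {
            static void Main()
            {
                // One object per physical controller, each opened by the serial
                // number recorded from the Phidget21Manager dialog.
                Servo armController = new Servo();
                Servo panTiltController = new Servo();

                armController.Open(12345);      // placeholder serial number
                panTiltController.Open(67890);  // placeholder serial number

                // Block until each board actually attaches (3 second timeout).
                armController.WaitForAttachment(3000);
                panTiltController.WaitForAttachment(3000);

                // Drive a servo on each board independently.
                armController.Servos[0].Position = 90;
                panTiltController.Servos[0].Position = 45;

                armController.Close();
                panTiltController.Close();
            }
        }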

    "In the long history of humankind (and animal kind, too) those who learned to collaborate and improvise most effectively have prevailed"
    - Charles Darwin

  3. Re: A few questions?!?

    Cool, that's good to know. Reading this book, I have seen some similarities between PHP and C#. I haven't gotten too far yet, but some of it is starting to make sense. Plus, the more I learn, the more my project changes, so hopefully when this is all said and done I should have something pretty cool. Thanks, Alex, for the info on the Phidget. That will be the first piece of equipment I buy once I have learned enough C# to get started. BTW, I do have a BSC2 and a Board of Education, plus other things for that kit. So if you think of anything that may help teach me using that, let me know, thanks. If anyone has any input as far as the vision goes, let me know. As always, any help is deeply appreciated.

  4. Re: A few questions?!?

    I just realized I have been calling the vision the wrong thing. Does anyone know about stereoscopic vision?

  5. Re: A few questions?!?

    I believe Jon Hylands is doing some work with stereoscopic vision on his Raptor project; you might want to check out his work.

  6. Re: A few questions?!?

    I'm not actually doing anything with vision at the moment.

    Here are some links from my research on the topic:

    http://code.google.com/p/sentience/
    http://www.ces.clemson.edu/~stb/
    http://www.alexandria.nu/ai/machine_.../introduction/

    - Jon

  7. Re: A few questions?!?

    Thanks for the links, Jon! I was always interested in learning about stereoscopic vision as well, but I didn't even know where to start... till now! If you have any others, by all means, update this thread! We're also looking into adding a links directory in the TR Community in the next week or two, so all of us can share these links with each other in one centralized area.

    "In the long history of humankind (and animal kind, too) those who learned to collaborate and improvise most effectively have prevailed"
    - Charles Darwin

  8. Re: A few questions?!?

    Quote Originally Posted by El Espada View Post
    2. Stereophonic vision allows detection of depth, but how accurate is it? Does it look at the colors in one camera, then try to match the image in the other camera and calculate the distance to the object based on the difference in how the pixels align?
    For cameras spaced 10 cm apart (a similar spacing to human eyes), the effective range is approximately three or four metres. As the range increases, so does the positional uncertainty of the observed feature. If the robot is moving through space, updating a probabilistic grid map as it goes, it is possible to achieve longer effective ranges and much higher spatial resolution through repeated overlapping observations.

    For more information about the nature of positional uncertainty in stereoscopic vision, see this page: http://code.google.com/p/sentience/w...reoUncertainty
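
    To get a feel for why the uncertainty grows with range, here's a back-of-the-envelope C# sketch of the usual parallel-rig depth formula, Z = f * B / d. The focal length (700 pixels) and the 10 cm baseline are illustrative assumptions, not values taken from the Sentience code:

        using System;

        class StereoUncertainty
        {
            const double FocalPixels = 700.0;  // assumed focal length, in pixels
            const double BaselineM = 0.10;     // 10 cm camera spacing

            // Depth from disparity for a parallel stereo rig: Z = f * B / d.
            static double Depth(double disparityPixels) =>
                FocalPixels * BaselineM / disparityPixels;

            static void Main()
            {
                // Show how a +/- 1 pixel matching error turns into depth error.
                foreach (double d in new[] { 70.0, 35.0, 17.5, 10.0 })
                {
                    double z = Depth(d);
                    double zFar = Depth(d - 1);    // disparity 1 pixel too small
                    double zNear = Depth(d + 1);   // disparity 1 pixel too large
                    Console.WriteLine($"disparity {d,5:F1} px  depth {z,5:F2} m  " +
                                      $"error band {zNear:F2} .. {zFar:F2} m");
                }
                // At a few metres the error band is already tens of centimetres
                // wide, which is why repeated overlapping observations help.
            }
        }

    The same formula also shows why a wider baseline or a longer focal length extends the usable range, at the cost of a larger minimum range and a harder matching problem.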

    Colour can be used as one indicator for matching a pair of images, but there are also many others. Stereo correspondence algorithms try to detect invariant features within both images and then match them together. It's a classic computer science problem which many researchers have attempted to solve, with greater or lesser degrees of success. I've been hacking on this problem for long enough now to have some idea of what's involved, and you can find my correspondence algorithm here: http://code.google.com/p/sentience/w...Correspondence . The dense version takes about 50 milliseconds to compute on a modern PC, without any hardware or GPU acceleration. I don't use colour for the correspondence algorithm, since the extra information just slows things down without significantly increasing the quality of the depth maps.
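
    Just to make the idea of correspondence concrete, here's a toy sum-of-absolute-differences block matcher in C# for rectified greyscale images (so matching pixels lie on the same row). This is not the Sentience algorithm, only the simplest possible illustration of searching along a row for the best-matching patch:

        using System;

        static class ToyStereo
        {
            // Returns the disparity (in pixels) that best matches the patch around
            // (x, y) in the left image against the right image, using SAD over a
            // square window. The caller must keep the window inside both images.
            public static int BestDisparity(byte[,] left, byte[,] right,
                                            int x, int y, int window, int maxDisparity)
            {
                int bestD = 0;
                long bestCost = long.MaxValue;

                for (int d = 0; d <= maxDisparity; d++)
                {
                    long cost = 0;
                    for (int dy = -window; dy <= window && cost < long.MaxValue; dy++)
                    {
                        for (int dx = -window; dx <= window; dx++)
                        {
                            int rx = x + dx - d;
                            if (rx < 0) { cost = long.MaxValue; break; }  // patch runs off the image
                            cost += Math.Abs(left[y + dy, x + dx] - right[y + dy, rx]);
                        }
                    }
                    if (cost < bestCost) { bestCost = cost; bestD = d; }
                }
                return bestD;  // larger disparity means a closer object (Z = f * B / d)
            }
        }

    A real correspondence algorithm adds things like left-right consistency checks, sub-pixel interpolation and smarter features on top of this, which is where most of the effort goes.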

    Quote Originally Posted by El Espada View Post
    3. My bot will mainly be for outdoor use, and I was curious about the survivability of the sonar sensors. Also, will stereophonic vision take care of the issue of object avoidance and mapping?

    If stereophonic vision will do this, then it should greatly decrease the need for extra sensors, making this a simpler project. Any feedback or experience will be greatly appreciated!!
    Yes, stereoscopic vision can be used for both object avoidance and mapping the environment. However, mapping involves complex simultaneous localisation and mapping (SLAM) algorithms, which take a fair amount of processing power but are not beyond the capabilities of any recent PC hardware. To get some idea of what mapping with stereo cameras looks like, there's an animation here: http://code.google.com/p/sentience/w...cyGridCreation . This only shows an overhead 2D view of what is actually a 3D map. The other animation shows what is effectively a tree search algorithm applied to the problem of spatial mapping using distributed particle filters.
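
    For a rough idea of what "updating a probabilistic grid map" means, here's a textbook log-odds occupancy grid in C#. It's deliberately much simpler than the Sentience particle-filter mapping; it just shows the kind of structure that stereo range measurements feed evidence into:

        using System;

        class OccupancyGrid
        {
            readonly double[,] logOdds;       // 0 = unknown, > 0 occupied, < 0 free
            const double HitUpdate = +0.85;   // evidence added to the struck cell
            const double MissUpdate = -0.40;  // evidence removed from cells passed through

            public OccupancyGrid(int width, int height)
            {
                logOdds = new double[width, height];
            }

            // Update the grid along a ray from the sensor at (x0, y0) to the
            // measured hit point (x1, y1), all in grid cell coordinates
            // (the caller keeps the coordinates inside the grid).
            public void InsertRay(int x0, int y0, int x1, int y1)
            {
                int steps = Math.Max(Math.Abs(x1 - x0), Math.Abs(y1 - y0));
                for (int i = 0; i <= steps; i++)
                {
                    int x = x0 + (x1 - x0) * i / Math.Max(steps, 1);
                    int y = y0 + (y1 - y0) * i / Math.Max(steps, 1);
                    logOdds[x, y] += (i == steps) ? HitUpdate : MissUpdate;
                }
            }

            // Convert log odds back to a probability of occupancy.
            public double OccupancyProbability(int x, int y) =>
                1.0 / (1.0 + Math.Exp(-logOdds[x, y]));
        }

    Repeated overlapping observations push each cell towards "occupied" or "free", which is the same idea behind the longer effective ranges mentioned above; full SLAM adds the localisation half of the problem on top of this.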

    I'm currently developing a new mobile robot which will use the Sentience algorithms, and I hope to have some far more impressive 3D animations later in the year showing the true potential of stereo vision.

  9. Re: A few questions?!?

    Thanks for the excellent reply, motters. My personal research leans a little to one side of classic stereoscopic vision - see my vision page for details...

    Basically I'm more interested in using stereo vision to look at specific objects that are close to the robot. Objects that are farther away than "right in front of you" aren't as important, especially if you aren't trying to build 3D models of your environment. I want to experiment with using visual landmarking and vector-based hierarchical connection maps for navigation.

    - Jon

  10. Re: A few questions?!?

    I also started off with stereo vision looking at objects that were close to the robot - typically less than one metre away. However, over the last few years I've come to realise that navigation and object recognition are fundamentally the same thing, just at different scales.

    I also started off having the cameras able to verge together, but I found that this adds a lot of additional uncertainty which is a nightmare to model unless you're using very accurate and repeatable servoing. I've since gone back to a more conventional design, where the cameras are fixed in parallel. They're *far* easier to calibrate that way, and you don't get any nasties such as slop in the verge servo gearboxes messing up the stereo correspondences.
