ROS, SLAM, VSLAM, stereo and PlayStation Eye



RobotAtlas
11-01-2010, 02:42 PM
It seems important for a robot to do its own mapping and localization rather than relying on human-generated maps.

After experimenting more with the sonar sensor on the Lego NXT over the weekend, I'm starting to agree with what Fergs was saying all along:
at 30 readings per second per IR sensor, it might be too hard (impossible?) to do SLAM with just IR sensor(s).

While there’s still potential in using multiple PML sensors, maybe it’s better to look into adding other sensors for SLAM.

With real LiDARs being north of one grand, what other choices do we have to do SLAM on our robots?

Well, there's VSLAM – a high-end ROS package that does SLAM based on a stereo camera pair.
One of the biggest problems there is the ability to synchronize two cameras so that images are taken at exactly the same time.
This is especially important on mobile robots, considering that as little as one pixel of movement between frames can
lead to a significant error in the depth estimate of any stereo algorithm.
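To get a feel for why one pixel matters, here's a quick back-of-the-envelope sketch (the focal length and baseline are made-up example values, not from any particular camera):

f_px = 600.0    # focal length in pixels (example value)
B = 0.07        # baseline in meters (example value)
Z = 2.0         # object distance in meters

d = f_px * B / Z                    # expected disparity: Z = f*B/d -> 21 px
dZ = (Z ** 2) / (f_px * B) * 1.0    # depth error for a 1-pixel disparity error
print("disparity %.1f px, depth error for 1 px: %.2f m" % (d, dZ))   # ~0.10 m

So at 2 m, one pixel of disparity error already costs you roughly 10 cm of depth.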

I was under the impression that consumer webcams just don't have any way of syncing two cameras.

That is, until 4 days ago, when Kurt Konolige posted this link as a reply to one of the questions on the ros-users mailing list.
The link talks about how to sync consumer webcams:
http://sszymczy.rootnode.net/index.php?menu=projects

Looking into it much more, it turns out that one of the webcams that has the ability to sync is the PlayStation Eye.
Would you like to know what’s so great about that camera?
From here: http://en.wikipedia.org/wiki/PlayStation_Eye
640×480 pixels @ 60 Hz
320×240 pixels @ 120 Hz
Sony states that the PlayStation Eye can produce "reasonable quality video" under the illumination provided by a television set.

Plus, you can get them on eBay for under $20 shipped (compare that to $600 for the cheapest synced camera).

I already bought one PlayStation Eye to see if the instructions here can be followed:
http://sszymczy.rootnode.net/index.php?menu=projects&submenu=webcamsync2&subsub=PS3Eye

Once I get it, I'll post my findings, then purchase a second camera and actually try using it with ROS.

In the meantime, I have a couple of questions for Fergs:
What was the motivation to write stereo_usb_cam? What is its advantage over uvc_stereo?
Is it to use usb_cam instead of uvc_cam?
Which driver do you use for the Microsoft LifeCam Cinema on Linux?
How do you determine a good distance between the cameras?

SK.
11-02-2010, 03:06 AM
I'm actually also very interested in that, just not sure I'll have the time to really work on PS Eye stereo. At least I'm looking for 2 cameras on ebay now.

I found a blog entry about a stereo setup using multiple Playstation Eyes, it's from June, unfortunately with no further updates:
http://blog.onthewings.net/2010/06/08/using-multiple-ps3eye-cameras-with-haxe-cpp/

The camera seems to be very popular in the multitouch community, this forum is basically full of posts about using it for these applications:
http://nuigroup.com/forums/viewforum/34/

RobotNV, I take it you'll also be using Linux, ROS and all that? From what I've gathered so far, there are Windows drivers which you can use at the full frame rates; for Linux one has to patch the available driver (http://bear24rw.blogspot.com/2009/11/ps3-eye-driver-patch.html) to get more control over the cam settings.

I'm wondering how 640x480 @ 60 fps (in color, as I understand) is even possible via USB 2.0 (or at least which compression/format is used), as commonly 30 fps is the maximum for YUV422 images at this resolution, taking up most of the available bandwidth.

cdop
11-02-2010, 04:47 AM
You might also have a look at
http://www.surveyor.com/stereo/stereo_info.html
I am working with the normal/"mono" version; pretty good and all open source.
Maybe you might even get some code from those guys:
http://cs.gmu.edu/~robotics/pmwiki.php/Main/RoboPatriots

SK.
11-02-2010, 06:37 AM
I found two clips of the dual PS3 Eye stereo setup from the "onthewings" blog I linked earlier, performing dense stereo estimation using OpenCV:
YouTube - Stereo Correspondence (OpenCV) Test 1
YouTube - Stereo Correspondence (OpenCV) Test 2


/edit:
Link to another (commercial) stereo system based on PS3 eye:
http://codelaboratories.com/forums/viewthread/66/

RobotAtlas
11-02-2010, 09:54 AM
RobotNV, I take it you'll also be using Linux, ROS and all that? From what I've gathered so far, there are Windows drivers which you can use at the full frame rates; for Linux one has to patch the available driver (http://bear24rw.blogspot.com/2009/11/ps3-eye-driver-patch.html) to get more control over the cam settings.

Thanks for the links SK.

I've been using ROS on Kubuntu for about 2 months now. Mostly doing tutorials and also connecting it to the Mindstorms NXT using the nxt_ros package. I also did a few stereo experiments in ROS with a pair of Microsoft LifeCam cameras. Results are pretty good so far. ROS has a great visualization tool, rviz, that lets you see the results of the stereo algorithm as a cloud of 3D points. That's really powerful.

As far as Windows is concerned - look at Pi Robot. Patrick was using Serializer with Windows/.NET for quite a while and then switched to Linux/ROS.
He just wrote a pretty nice blog post about ROS here: http://www.pirobot.org/blog/0014/
I'm staying away from Windows.


I'm wondering how 640x480 @ 60 fps (in color, as I understand) is even possible via USB 2.0 (or at least which compression/format is used), as commonly 30 fps is the maximum for YUV422 images at this resolution, taking up most of the available bandwidth.

That's a very good question. I don't think a robot really needs 60 (or 120) frames every second.
What 120 fps tells me is that the camera can take a picture in less than 1/120 s ≈ 8 ms. That's pretty fast. I think speed is important for a moving robot with stereo vision.

RobotAtlas
11-02-2010, 10:30 AM
You might also have a look at
http://www.surveyor.com/stereo/stereo_info.html (http://www.surveyor.com/stereo/stereo_info.html)

Thanks cdop for the link. It looks like they use a different (but very similar) version of the sensor. I was wondering if it can be synced for stereo. I do want to use a FitPC2 for getting the image data rather than the two Blackfins.

billyzelsnack
11-02-2010, 01:15 PM
I was using a PlayStation Eye with a FitPC2 for my RoboCup bot. I had hoped to do 640x480 at full frame rate, but my image processing code was just too slow, so I ended up needing to drop the resolution and frame rate to get decent results.

RobotAtlas
11-02-2010, 01:49 PM
I was using a PlayStation Eye with a FitPC2 for my RoboCup bot. I had hoped to do 640x480 at full frame rate, but my image processing code was just too slow, so I ended up needing to drop the resolution and frame rate to get decent results.

Billy, did you use OpenCV for ball recognition?

lnxfergy
11-02-2010, 02:23 PM
One warning I would have: the PR2 not only has frame-synchronized cameras for stereo, but the cameras themselves are global shutter -- that is, all pixels are exposed at exactly the same time. Most webcams are going to be rolling shutter -- which means even if you sync the cameras, there is a potential for skew within the image itself (especially under motion) which could cause the VSLAM not to work. In fact, it's certainly going to degrade VSLAM -- the question is by how much (I'm not sure that anyone can answer that right now).
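To put a very rough number on that skew (example values only, not measured from any particular cam):

readout = 0.010      # full-frame readout time in seconds (example)
px_per_s = 200.0     # apparent image motion in pixels per second (example)
skew_px = px_per_s * readout
print("top-to-bottom row skew: %.1f px" % skew_px)   # 2.0 px across the frame

Even a couple of pixels of skew is on the same order as the disparities you're trying to measure at longer ranges.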

As for the PML -- my comments were that "it wouldn't work with *gmapping*" -- that is, you may find that other SLAM methodologies can work with the PML, but the default AMCL/gmapping combo is probably not going to handle the low sensor density and short sensor range. On the scale of a typical house, I think the PML+decent odometry has at least a shot with a different mapping algorithm (gmapping's algorithm is designed for very confident and long range lidars -- something most of our bots won't have).

As for the more specific questions --
* Why stereo_usb_cam -- I wasn't aware of uvc_stereo when I started working on stereo_usb_cam. I haven't gotten back to playing with either yet; when I do, I may kill stereo_usb_cam if uvc_stereo meets my needs.
* Most human-size robots will be using a 6-9 cm baseline between the cameras. This works pretty well for table-top recognition. There are of course equations you can work out, based on pixel size and lens properties, that will give the optimal spacing for the range of the objects you wish to see and at what resolution.
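A back-of-the-envelope version of that calculation, as a sketch with hypothetical numbers:

# Depth resolution of a stereo pair: dZ ~= Z^2 / (f*B) per pixel of disparity.
# Pick the baseline B so dZ at your working range is acceptable.
def depth_resolution(Z, f_px, B, disp_step=1.0):
    """Approximate depth change per disp_step pixels of disparity at range Z."""
    return (Z ** 2) / (f_px * B) * disp_step

f_px = 600.0                       # focal length in pixels (depends on lens/resolution)
for B in (0.06, 0.09, 0.12):       # candidate baselines in meters
    print(B, depth_resolution(0.8, f_px, B))   # resolution at a 0.8 m table-top range

Bigger baselines buy you finer depth resolution, at the cost of a larger minimum range and less image overlap.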

-Fergs

SK.
11-02-2010, 02:43 PM
As far as Windows is concerned - look at Pi Robot. Patrick was using Serializer with Windows/.NET for quite a while and then switched to Linux/ROS.
He just wrote a pretty nice blog post about ROS here: http://www.pirobot.org/blog/0014/
I'm staying away from Windows.
Same for me, especially the last part (at least when it comes to running stuff on robots) ;)
I'm currently on Ubuntu 10.04; from what I've googled so far, the cam will work out of the box, albeit with limited control of camera settings compared to the specially patched driver.


That's a very good question. I don't think a robot really needs 60 (or 120) frames every second.
What 120 fps tells me is that the camera can take a picture in less than 1/120 s ≈ 8 ms. That's pretty fast. I think speed is important for a moving robot with stereo vision.
Of course, 60 fps doesn't make much sense for a stereo application; it's interesting nonetheless.
Especially with a rolling shutter cam like the PS3 Eye, as short an exposure time as possible is important. We had a commercial rolling shutter webcam on a quadrotor for a project I worked on, and you could basically measure the frequency of high-frequency airframe vibrations from the camera images, as the different rows of the image were taken at different times and showed the vibration pattern quite "nicely" (while making the image unusable for normal processing).

The rolling shutter effect can also be seen quite nicely here (Philips SPC1300 cam, 640x480 at 30 fps; I didn't write down the shutter speed for that logfile):
YouTube - Camera View of Humanoid Soccer Playing Robot 'Bruno'

RobotAtlas
11-02-2010, 03:37 PM
Most webcams are going to be rolling shutter -- which means even if you sync the camera, there is a potential for skew within the image itself (especially under motion) which could cause the VSLAM not to work. In fact, it's certainly going to degrade VSLAM -- the question is by how much (I'm not sure that anyone can answer that right now).


My hope is that with identical camera firmware, if both cameras are synced, then whatever movement happens in the image from the time the first scan line is taken to the time the last scan line is taken (which is < 8 ms) is going to be [almost] the same in both cameras. We will find out soon.


As for the PML -- my comments were that "it wouldn't work with *gmapping*" -- that is, you may find that other SLAM methodologies can work with the PML, but the default AMCL/gmapping combo is probably not going to handle the low sensor density and short sensor range. On the scale of a typical house, I think the PML+decent odometry has at least a shot with a different mapping algorithm.

Not all of us are smart enough (or have enough time) to swap mapping algorithms in and out of the navigation stack. We are hobbyists, not scientists. :)

But if you think it's possible, I'll be happy to help with beta-testing.

billyzelsnack
11-02-2010, 05:13 PM
Billy, did you use OpenCV for ball recognition?

OpenCV with custom blob detector code.

Pi Robot
11-03-2010, 08:44 AM
I also did a few stereo experiments in ROS with a pair of Microsoft LifeCam cameras. Results are pretty good so far. ROS has a great visualization tool, rviz, that lets you see the results of the stereo algorithm as a cloud of 3D points. That's really powerful.

Hey RobotNV,

Do you have any screen captures of point clouds made with the MS Lifecams? I'd love to see what can be done with a pair of webcams. I have a Videre STOC camera on order but I probably won't see it for a couple of months so I'm keen to see how you guys are making out with the webcams. Also, is the disparity data good enough to do basic obstacle avoidance similar to the PML?

--patrick

RobotAtlas
11-03-2010, 10:18 AM
Hey RobotNV,

Do you have any screen captures of point clouds made with the MS Lifecams? I'd love to see what can be done with a pair of webcams. I have a Videre STOC camera on order but I probably won't see it for a couple of months so I'm keen to see how you guys are making out with the webcams. Also, is the disparity data good enough to do basic obstacle avoidance similar to the PML?

--patrick

I can capture some. Camera calibration seems to be extremely important for disparity to work correctly. I have the cameras just sitting on top of the monitor, so they are not rigidly attached. When I carefully moved the monitor back to give me more depth variation, disparity completely broke and I had to calibrate them again. Lesson: cameras need to be rigidly attached. To see what will be possible with your Videre cameras, did you try the rotating_detergent.bag ROS tutorial?
That tutorial is mandatory to understand how those stereo matching parameters work. Also, reading the OpenCV book would not hurt, but I have not done it.
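For reference, the standard ROS stereo calibration command looks something like this (taken from the camera_calibration stereo tutorial; adjust the checkerboard size, square size and topic names to whatever your driver publishes):

rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.108 right:=/stereo/right/image_raw left:=/stereo/left/image_raw right_camera:=/stereo/right left_camera:=/stereo/left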

lnxfergy
11-03-2010, 10:42 AM
I can capture some. Camera calibration seems to be extremely important for disparity to work correctly. I have the cameras just sitting on top of the monitor, so they are not rigidly attached. When I carefully moved the monitor back to give me more depth variation, disparity completely broke and I had to calibrate them again. Lesson: cameras need to be rigidly attached. To see what will be possible with your Videre cameras, did you try the rotating_detergent.bag ROS tutorial?
That tutorial is mandatory to understand how those stereo matching parameters work. Also, reading the
OpenCV book would not hurt, but I have not done it.

My experience with the webcams is that stereo_proc works decently if nothing is moving -- as soon as movement comes into play, it goes to hell. On cheap webcams, I'm thinking you might get better overall accuracy on (and lower overhead) using something like Bob Mottram's libv4l2cam ROS bindings (http://code.google.com/p/libv4l2cam/). While it doesn't attempt to be as precise as stereo_proc's algorithms, it was designed and tested on relatively un-synchronized cameras (unlike stereo_proc, which grew up on global shutter cameras).

One other thing to keep in mind is that you're never going to get results like you're seeing on the PR2 without adding a texture projector of some kind -- traditional stereo has huge dropout zones.

Although I have to admit -- I recently purchased a Videre STOC as well (should be here next week), and so I've lost a bit of interest in webcam stereo..... although I am now looking into putting together a texture projector on the cheap.

-Fergs

Pi Robot
11-03-2010, 11:28 AM
To see what will be possible with your Videre cameras, did you try the rotating_detergent.bag ROS tutorial?
That tutorial is mandatory to understand how those stereo matching parameters work. Also, reading the OpenCV book would not hurt, but I have not done it.

Thanks for the pointer--looks like a very useful tutorial. I'm just now playing with the basics of OpenCV using the Python bindings. I'll start by trying to recreate the head tracking stuff I did in RoboRealm (under Windows). And it looks like I have access to an electronic version of the OpenCV book for reference.

--patrick

RobotAtlas
11-03-2010, 01:19 PM
Hey RobotNV,
is the disparity data good enough to do basic obstacle avoidance similar to the PML?

From my limited experience, it looks like the stereo matching algorithm has a chicken-and-egg problem:
it needs to know the approximate range of objects to effectively determine their range.
So I wonder if the PML can provide an initial range estimate and then stereo fills in the points in between.
This combined approach should also help with those pesky black shiny table legs Fergs was whining, I mean complaining, about. :)
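Something like this is what I have in mind, just as a sketch (API names are from the newer OpenCV Python bindings, and the PML range value is a made-up example):

import cv2

f_px, B = 600.0, 0.07              # focal length [px] and baseline [m], example values
pml_range = 1.5                    # coarse range from the PML, in meters (example)

d_expected = f_px * B / pml_range          # expected disparity, here about 28 px
min_disp = max(0, int(d_expected) - 16)    # search only around the expected value
num_disp = 48                              # width of the search window, multiple of 16

matcher = cv2.StereoSGBM_create(minDisparity=min_disp,
                                numDisparities=num_disp,
                                blockSize=9)
# disparity = matcher.compute(left_rect_gray, right_rect_gray)  # int16, scaled by 16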


My experience with the webcams is that stereo_proc works decently if nothing is moving -- as soon as movement comes into play, it goes to hell.
Were those cameras synched?


On cheap webcams, I'm thinking you might get better overall accuracy on (and lower overhead) using something like Bob Mottram's libv4l2cam ROS bindings (http://code.google.com/p/libv4l2cam/).
I think so too. He used PlayStation Eye-like sensors too (and Minoru) on a tank-like mobile platform.


although I am no looking into putting together a texture projector on the cheap.

Considering "cheap", Nelson, ROSalyn and the PML, my guess is that the typo is "now" rather than "not".
Do they use that projector all the time? I thought they only use it for super-precise operations like opening doors and plugging in. Interestingly, they don't use the LiDAR for that.

Pi Robot
11-03-2010, 04:12 PM
From my limited experience, it looks like the stereo matching algorithm has a chicken-and-egg problem: it needs to know the approximate range of objects to effectively determine their range. So I wonder if the PML can provide an initial range estimate and then stereo fills in the points in between. This combined approach should also help with those pesky black shiny table legs Fergs

Well I have zero stereo matching experience so far (although my eyes seem to be doing a good job) so you're way ahead of me. But I would have thought that a single calibration of the disparities against a known distance would be good enough to get the right scale. Is that not the case?

--patrick

lnxfergy
11-03-2010, 04:13 PM
Were those cameras synched?
No, standard webcams. Fully synched (as in global shutter) should have no problem with movement; vertically synced will probably have a bit of trouble.


Considering cheap, Nelson, ROSalyn and PML, my guess is that typo is now rather than not.
Do they use that projector all the time? I thought they only use it for super-precise operations like opening doors and plugging in. Interestingly, they don't use the LiDAR for that.
Yes, that should say "now" -- I've had a couple of designs I was playing around with for a $20 projector. The projected stereo is actually getting a lot of usage these days -- I've even seen discussion about using it as the default 3D perception input to the costmap during navigation rather than the tilting lidar.

The problem with the tilting lidar is that it's slow -- it's something like 2 seconds to create a full vertical scan, and that's with a 40 Hz laser -- as opposed to 10-15 full frames per second. If you need very high quality, the tilting laser is actually the way to go -- I think the door recognition algorithms all used the tilting laser cloud.

And yes, one of my research interests is "cheap tricks in robotics"...

-Fergs

RobotAtlas
11-03-2010, 07:12 PM
The projected stereo is actually getting a lot of usage these days -- I've even seen discussion about using it as the default 3D perception input to the costmap during navigation rather than the tilting lidar.

That's great news for us hobbyists. To be honest, I was surprised to see you keep working on the PML _after_ you got your Hokuyo.


The problem with the tilting lidar is that it's slow
Yeah, that's what I've heard those RoboCup guys say. They (all?) use cameras now.


And yes, one of my research interests is "cheap tricks in robotics"
You mean affordable, right? That's what I love about you.

lnxfergy
11-03-2010, 08:42 PM
That's great news for us hobbyists. To be honest, I was surprised to see you keep working on the PML _after_ you got your Hokuyo.
I should note -- that's just for the costmap generation (obstacle avoidance), they still use the base laser for localization and a majority of costmap generation....


Yeah, that's what I've heard those RoboCup guys say. They (all?) use cameras now.
Well, in the humanoid league, you HAVE to use a camera, in the head. You can't use any active sensors such as lidar or IR rangers.


You mean affordable, right? That's what I love about you.
Affordable, yes. But also when it comes to computation, I'm not a huge fan of overly complicated algorithms -- I like simple and elegant.

-Fergs

airuno2l
11-04-2010, 06:14 PM
I was going to begin work on my SLAM via IR sensor this weekend. I've been thinking about the problem of not having enough data. I think I'm going to try anyways, I can always add sensors or use different ones. I just want to get started.

RobotAtlas
11-04-2010, 06:42 PM
I was going to begin work on my SLAM via IR sensor this weekend. I've been thinking about the problem of not having enough data. I think I'm going to try anyways, I can always add sensors or use different ones. I just want to get started.

Since you want to try SLAM, you probably have a mobile robot. Do you want to share with us what type it is?

SK.
11-05-2010, 02:43 AM
My experience with the webcams is that stereo_proc works decently if nothing is moving -- as soon as movement comes into play, it goes to hell. On cheap webcams, I'm thinking you might get better overall accuracy on (and lower overhead) using something like Bob Mottram's libv4l2cam ROS bindings (http://code.google.com/p/libv4l2cam/). While it doesn't attempt to be as precise as stereo_proc's algorithms, it was designed and tested on relatively un-synchronized cameras (unlike stereo_proc, which grew up on global shutter cameras).
I found a video that shows a two PS3 eye setup on a (relatively slow) mobile robot using OpenCV for stereo:
YouTube - Robot Navigation by Stereoscopic Camera
Definitely looks usable for a lot of applications IMHO.


Although I have to admit -- I recently purchased a Videre STOC as well (should be here next week), and so I've lost a bit of interest in webcam stereo..... although I am now looking into putting together a texture projector on the cheap.
What's the approximate price for a Videre STOC? I asked both the US and European distributors for a price quote by mail, but so far they have failed to answer.

SK.
11-05-2010, 02:50 AM
Yeah, that's what I've heard those RoboCup guys say. They (all?) use cameras now.
What Fergs said. There are quite a few different leagues in RoboCup. In some leagues (like the humanoid league) active sensors are forbidden.
On the other hand, in the @Home and Rescue leagues, LIDARs are still the sensor used by basically everyone for SLAM, as they are proven and give pretty robust results.
There are a lot of very interesting papers, great results and videos on visual SLAM, but at least at the moment it's not robust enough to be used in those competition environments instead of LIDARs (though that will hopefully change in the next few years).

airuno2l
11-05-2010, 06:55 AM
Since you want to try SLAM, you probably have a mobile robot. Do you want to share with us what type it is?

Thanks for the interest. I don't actually have a platform yet. As of now my setup consists of an IR sensor, a servo, and one of these massive contraptions:

http://www.evbplus.com/hcs12_9s12/dragon12p_2_10.jpg

It's a Dragon12 development board that uses a Freescale controller. My plan is to first synchronize the servo and IR sensor, get that data to my laptop over serial, and make a map using MATLAB. At first it will just be a static map, since I don't have a platform yet, but that will change once I get this up and running. One piece at a time!

I have formal education in robotics (mainly kinematics, localization stuff), but this is the first robot I've built by myself with my own stuff. The biggest challenge is going to be keeping everything cheap… a lidar would be awesome, but I want to see what I can do with a couple of IR sensors and maybe a sonar or two.

My ultimate goal is to have the robot talk to my laptop via Bluetooth and receive high-level commands (i.e. come here, go home). I do plan to let the robot know the blueprint of my house so it can compare sensor data to it; perhaps I can do this on my laptop and only use Bluetooth to send kinematic info from the robot to the computer and motor commands from the computer to the robot.

SK.
11-15-2010, 05:17 PM
My Playstation Eye cams have arrived. They work as advertised with framerates up to 60 fps (640x480) and 125 fps (320x240). Trying both cams at the same time with such high framerates leads to both barely working, but for a stereo setup such high frame rates are not practical anyways.
I patched the driver as described here: http://wiki.tekkotsu.org/index.php/Sony_PlayStation_Eye_driver_install_instructions

Now, whether I get around to doing a stereo setup is another question, with all the Kinect goodness competing ;).

RobotAtlas
11-15-2010, 06:20 PM
SK, have you tried synchronizing those 2 cameras?
I still have not received my second camera, but I took apart the first one and verified that the electronics are still the same as in the link I posted at the beginning of this thread.

The Kinect _is_ great, but the minimum range is a limitation and the weight is a big problem for smaller bipeds. I don't think stereo is dead. ;)

lnxfergy
11-15-2010, 07:09 PM
The Kinect _is_ great, but the minimum range is a limitation and the weight is a big problem for smaller bipeds. I don't think stereo is dead. ;)

I feel like power consumption could be an even bigger problem -- at 12 W, it's 1.5x the power consumption of a FitPC (and you'll need that FitPC to do anything with it).

-Fergs

RobotAtlas
11-15-2010, 07:53 PM
More power = more battery = more weight.
The closed implementation is another big problem: is that 0.5-0.8 m minimum range a hard limit,
or is it just what Microsoft/PrimeSense decided is good enough?
They do offload some of the processing from the CPU.
I read somewhere they achieved single-digit Xbox CPU utilization.

Fergs, do you think that PointCloud from Kinect can be fed directly to navigation stack?

SK.
11-16-2010, 01:12 AM
SK, have you tried synchronizing those 2 cameras?
I still have not received my second camera, but I took apart the first one and verified that the electronics are still the same as in the link I posted at the beginning of this thread.
No, not yet, I might try that later. I'm pretty busy right now, so I don't know exactly when later is ;)


The Kinect _is_ great, but the minimum range is a limitation and the weight is a big problem for smaller bipeds. I don't think stereo is dead. ;)
Of course, mounting on small bipeds isn't really what I have in mind for either the Kinect or PS Eye stereo, as both have their drawbacks in that scenario. The Kinect weighs quite a lot and draws quite some power, but OTOH you basically get PointCloud data directly. PS Eye stereo OTOH might be lighter, but you need all that processing, which in the end is probably at least as heavy and power hungry as the Kinect. Of course, this argument is invalid once you offload processing to an off-board PC (which I don't want to do for a variety of reasons).

dexfur
11-17-2010, 02:41 AM
Hi, I'm a newbie to stereo vision.
Synced cameras are what I'm looking for too.

According to CogVille (http://sszymczy.rootnode.net/index.php?menu=projects),
there are three models which can be synced.
Why did you guys choose the PS3 Eye?

And after looking at the how-to-modify-the-PS3-Eye page,
I'm a little bit confused about it.
Do I have to connect VSYNC of camera 1 to FSIN of camera 2 and vice versa, or just one way?
(In the images, both cameras are wired both ways.)

I plan to try the real thing very soon.
Thank you for any additional information.

RobotAtlas
11-17-2010, 10:54 AM
dexfur, I chose the PlayStation Eye out of the 3 models based on the following:
1. Impressive frame rates (up to 125 fps), which means the sensor is fast, which is important for moving objects.
2. Its low-light sensitivity is a big plus.
3. The fact that it's made by Sony, and their engineers usually make good hardware decisions.
4. High availability – there are _tons_ of PlayStation Eye cameras out there.
5. Price – I got 2 cameras for $21 + $17 total shipped on eBay.

As far as the 2 wires are concerned, I had that question too.
My guesses as to why he did that:
1. He wanted to daisy-chain more than two cameras (unlikely)
2. He wanted to be able to sync from either camera (more likely)
3. He used a scope to see the signals coming out of them (most likely)

I didn't really like the method of using an LCD to measure sync.
I would think a rotating target should produce better results.
But it's either synced or not synced, and his method showed it's better with the connection,
so my guess would be that the cameras are synced.

dexfur
11-18-2010, 03:20 AM
RobotNV,
Thanks very much for your very clear reply.
So, I understand now why you chose the PS3 Eye among those three.
However, in my area (Thailand) it costs about $40 per piece,
double the price of the Trust WB-5400 on eBay, which is, unfortunately, not available in Thailand.

Overall, I have now decided to try the PS3 Eyes because of the performance aspects you suggested.
Thanks again. :veryhappy:

I'll post the results soon.

RobotAtlas
11-26-2010, 08:12 PM
Since my PhantomX kit has not arrived yet, I decided to finish this project.
First I wanted to see how increased frame rates help with motion blur.
I'm using Ubuntu 10.10 with kernel 2.6.35. guvcview was used to change frame rates (no module recompilation or any other tweaking had to be done).
I took pictures of a moving fan at 30, 60 and 125 Hz.
You can see that the shape of the fan blades is a lot better at 125 Hz.
At 30 Hz a blade appears half its actual size. :(
At 125 Hz you can still see the effect of the rolling shutter, but not nearly as bad as at 30 fps.

Next steps would be to record stereo images with the 2 unsynchronized cameras, then solder the sync wires, do the same recording, and compare results.
I still don't know which software to use for this. Should I try stereo_image_proc from ROS?
http://forums.trossenrobotics.com/attachment.php?attachmentid=2211&stc=1&d=1290823522
http://forums.trossenrobotics.com/gallery/showimage.php?i=3534&original=1&c=newimages

lnxfergy
11-26-2010, 09:46 PM
Next steps would be to record stereo images with the 2 unsynchronized cameras, then solder the sync wires, do the same recording, and compare results.
I still don't know which software to use for this. Should I try stereo_image_proc from ROS?
http://forums.trossenrobotics.com/gallery/showimage.php?i=3534&original=1&c=newimages

I suppose you could -- but I'm not sure how well it would work. Most of the docs assume you have at least calibrated cameras -- and I feel like good results are generally only going to be achieved using an active texture projector in many environments.
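If you do give it a shot, the usual pattern (per the stereo_image_proc docs, assuming your driver publishes left/ and right/ image_raw plus camera_info under one namespace) is something like:

ROS_NAMESPACE=stereo rosrun stereo_image_proc stereo_image_proc
rosrun image_view stereo_view stereo:=stereo image:=image_rect_color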

I was just going to look up a link for you to Bob Mottram's libv4l2cam project -- when I found this YouTube video showing his new stereo work:

http://www.youtube.com/watch?v=Mx_JqLWVwgM

That's with a Minoru camera, I believe -- I'm currently shoving my Minoru cam back onto a robot here to play with...

-Fergs

RobotAtlas
11-26-2010, 10:42 PM
Thanks Fergy for the video. It looks pretty cool for as cheap a camera as the Minoru. I had a couple of chances to buy one for about $20, but I'm sticking with the Sony PS Eyes for now.


I suppose you could -- but I'm not sure how well it would work. Most of the docs assume you have at least calibrated cameras

Talking about calibration, I'm having a hard time running stereo_image_proc.

When I calibrate the stereo camera, everything goes fine. I do get a somewhat high epipolar error of about 2.

When pressing COMMIT, my calibration tool exits and I see the following messages in the uvc_stereo process:
[ INFO] [1290827734.215168732]: writing calibration data to /tmp/calibration_left_camera.yaml
[ INFO] [1290827734.219761874]: writing calibration data to /tmp/calibration_right_camera.yaml

which I think means that the cameras were calibrated.

What I don't understand is how to reuse those two .yaml files the next time the driver is run.

Anyway, the problem happens when I do
rosrun stereo_image_proc stereo_image_proc

I see this error:

terminate called after throwing an instance of 'std::runtime_error'
what(): Cannot call rectifyImage when distortion is unknown.
Aborted

I tried searching the ros-users list, but have not found anything yet...

lnxfergy
11-26-2010, 10:44 PM
What I don't understand is how to reuse those two .yaml files the next time the driver is run.
[...]
what(): Cannot call rectifyImage when distortion is unknown.

You need to somehow give the calibration yamls to the uvc driver so that it can broadcast correct camera info topics (I would hope this is documented for uvc_stereo, but last time I looked it had very little documentation).
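If the driver won't load them itself, one workaround is to publish the CameraInfo yourself from the saved yaml. A rough rospy sketch (the key names below are what I recall the calibrator writing out, so double-check them against your actual files):

import yaml
import rospy
from sensor_msgs.msg import CameraInfo

def load_camera_info(path):
    """Read a calibration yaml written by the ROS calibrator into a CameraInfo msg."""
    with open(path) as f:
        calib = yaml.safe_load(f)
    info = CameraInfo()
    info.width = calib['image_width']
    info.height = calib['image_height']
    info.K = calib['camera_matrix']['data']
    info.D = calib['distortion_coefficients']['data']
    info.R = calib['rectification_matrix']['data']
    info.P = calib['projection_matrix']['data']
    return info

# left_info = load_camera_info('/tmp/calibration_left_camera.yaml')
# pub = rospy.Publisher('stereo/left/camera_info', CameraInfo)
# ...stamp each message to match the corresponding image and publish it alongside image_raw.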

-Fergs

RobotAtlas
11-28-2010, 10:25 PM
Well, I have results to report. I did find a way to make ROS' uvc_stereo take the two calibration files; it was just a matter of specifying two parameters. My problem had to do with the unstable vs. cturtle release: it looks like stereo calibration in unstable is in flux and has some problems at the moment.

Back to PlayStation Eye syncing. From my experiment with the fan, I think motion blur and rolling shutter are bigger problems than time syncing -- at least on my new beefy i7 PC.

It sure is nice to have 125 fps to reduce motion blur.
In the pictures below you can see that the effect of motion blur is about the same as the effect of the time difference between frames.
http://forums.trossenrobotics.com/attachment.php?attachmentid=2215&d=1291002886

http://forums.trossenrobotics.com/attachment.php?attachmentid=2216&d=1291002886

Lastly, I have a piece of advice for those who want to attempt soldering wires onto the PlayStation Eye board:
use the thinnest wires possible. Those traces are very thin and come off easily. :(
You will also need a small soldering iron. A magnifying glass would be handy too.

SK.
11-29-2010, 01:31 AM
For getting low motion blur, you don't have to run the camera at 125 fps; reducing the exposure time should have the same effect. That is, at least if everything is implemented correctly, which is not the case for all the cameras I've played with. The Sony SPC1330, for example, IIRC ignores the UVC exposure controls and always uses the maximum exposure time possible at the selected frame rate. Using guvcview, setting the exposure time seemed to work as expected for my PS Eyes.
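If you want to script it instead of clicking through guvcview, v4l2-ctl can poke the same controls; the exact control names depend on the driver, so list them first:

v4l2-ctl -d /dev/video1 --list-ctrls
v4l2-ctl -d /dev/video1 --set-ctrl=<name-from-the-list>=<value>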

I also played around with my PS Eyes a little, but will likely only find time to do that again in 2 weeks or so. My plan so far is to start with Bob Mottram's v4l2stereo driver and modify it so it streams out the 2 Image and CameraInfo topics, which could then be used with the ROS stereo framework. I looked into v4l2stereo because it already waits for 2 images and then puts them out with synced timestamps, so one avoids all that synchronization stuff.
My plan is to have them running at 640x480, 15 fps on a static scene (to test before soldering) and then go from there.
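Roughly what I have in mind, as a minimal sketch (cv2.VideoCapture stands in for the v4l2stereo grab, the newer cv_bridge API is assumed, and device indices/topic names are placeholders):

import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

# Grab a left and a right frame, stamp both with the *same* time, and publish
# them under one namespace so the ROS stereo pipeline can pick them up.
rospy.init_node('ps_eye_stereo')
bridge = CvBridge()
caps = [cv2.VideoCapture(0), cv2.VideoCapture(1)]              # left, right (placeholder indices)
pubs = [rospy.Publisher('stereo/left/image_raw', Image),
        rospy.Publisher('stereo/right/image_raw', Image)]

rate = rospy.Rate(15)
while not rospy.is_shutdown():
    frames = [cap.read()[1] for cap in caps]
    stamp = rospy.Time.now()                  # one shared timestamp for the pair
    for frame, pub in zip(frames, pubs):
        msg = bridge.cv2_to_imgmsg(frame, encoding='bgr8')
        msg.header.stamp = stamp
        pub.publish(msg)
    rate.sleep()
# The matching CameraInfo messages would get the same stamp, loaded from the calibration yamls.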