
View Full Version : Ar_kinect



Peter_heim
05-03-2011, 04:30 AM
Hi Fergs
I just tried the latest version of ar_kinect (r250), but as soon as I connect to pointclouds or color_image, ar_kinect crashes with this error:


[ INFO] [1304412830.475260035]: *** Camera Parameter ***
--------------------------------------
SIZE = 320, 240
Distortion factor = 0.000000 0.000000 -0.000000 1.000000
262.50000 0.00000 159.50000 0.00000
0.00000 262.50000 119.50000 0.00000
0.00000 0.00000 1.00000 0.00000
--------------------------------------
[ INFO] [1304412830.475593186]: Opening Data File /home/peter/ros/albany-ros-pkg/albany_vision/ar_kinect/data/objects_kinect
[ INFO] [1304412830.478873618]: About to load 1 Models
[ INFO] [1304412830.479044538]: Read in No.1
[ INFO] [1304412830.481147218]: Subscribing to image and cloud topics
header:
seq: 0
stamp: 0.000000000
frame_id:
points[]: 0
width: 0
height: 0
sensor_origin_: 0 0 0
sensor_orientation_: 0 0 0 1
is_dense: 1

-1.070328 -0.801345 2.942000
[ar_kinect-1] process has died [pid 11239, exit code -11].
log files: /home/peter/.ros/log/3f40260a-7562-11e0-b1ff-0023cdfebf39/ar_kinect-1*.log
all processes on machine have died, roslaunch will exit
shutting down processing monitor...
... shutting down processing monitor complete


I'm running Diamondback on Ubuntu 10.04 and using the openni_camera launch files.
After ar_pose crashes I can see the marker.

peter

lnxfergy
05-03-2011, 08:12 AM
Can we get the process-specific log file (
/home/peter/.ros/log/3f40260a-7562-11e0-b1ff-0023cdfebf39/ar_kinect-1*.log)?

Also, are you running fully up-to-date Kinect drivers? The update the other day was to fix the fact that /camera/depth/points2 no longer exists -- and that /camera/rgb/points is the right replacement.

-Fergs

Peter_heim
05-04-2011, 03:53 AM
Hi Fergs

Also, are you running fully up-to-date Kinect drivers? The update the other day was to fix the fact that /camera/depth/points2 no longer exists -- and that /camera/rgb/points is the right replacement.

I'm using the latest Kinect drivers, and I can see /camera/rgb/points in rostopic list.
Below is the ar_kinect log file:



[roscpp_internal] [2011-05-03 19:07:37,593] [thread 0xb52fd7d0]: [DEBUG] UDPROS server listening on port [53613]
[roscpp_internal] [2011-05-03 19:07:37,596] [thread 0xb52fd7d0]: [DEBUG] Started node [/ar_kinect], pid [13050], bound on [10.1.1.8], xmlrpc port [50181], tcpros port [58412], logging to [/home/peter/.ros/log/3f40260a-7562-11e0-b1ff-0023cdfebf39/ar_kinect-1.log], using [real] time
[roscpp_internal] [2011-05-03 19:07:37,598] [thread 0xb52fd7d0]: [DEBUG] XML-RPC call [searchParam] returned an error (-1): [Cannot find parameter [tf_prefix] in an upwards search]
[roscpp_internal] [2011-05-03 19:07:37,879] [thread 0xb52fcb70]: [DEBUG] Accepted connection on socket [7], new socket [12]
[roscpp_internal] [2011-05-03 19:07:37,880] [thread 0xb52fcb70]: [DEBUG] TCPROS received a connection from [10.1.1.8:56017]
[roscpp_internal] [2011-05-03 19:07:37,880] [thread 0xb52fcb70]: [DEBUG] Connection: Creating TransportSubscriberLink for topic [/rosout] connected to [callerid=[/rosout] address=[TCPROS connection to [10.1.1.8:56017 on socket 12]]]
[roscpp_internal] [2011-05-03 19:07:37,987] [thread 0xb52fd7d0]: [DEBUG] XML-RPC call [getParam] returned an error (-1): [Parameter [/ar_kinect/publish_tf] is not set]
[ros.ar_kinect] [2011-05-03 19:07:37,987] [thread 0xb52fd7d0]: [INFO] Publish transforms: 1
[roscpp_internal] [2011-05-03 19:07:37,988] [thread 0xb52fd7d0]: [DEBUG] XML-RPC call [getParam] returned an error (-1): [Parameter [/ar_kinect/publish_visual_markers] is not set]
[ros.ar_kinect] [2011-05-03 19:07:37,988] [thread 0xb52fd7d0]: [INFO] Publish visual markers: 1
[ros.ar_kinect] [2011-05-03 19:07:37,989] [thread 0xb52fd7d0]: [INFO] Threshold: 100
[ros.ar_kinect] [2011-05-03 19:07:37,990] [thread 0xb52fd7d0]: [INFO] Marker Pattern Filename: /home/peter/ros/albany-ros-pkg/albany_vision/ar_kinect/data/objects_kinect
[ros.ar_kinect] [2011-05-03 19:07:37,991] [thread 0xb52fd7d0]: [INFO] Marker Data Directory: /home/peter/ros/ccny-ros-pkg/ccny_vision/ar_pose
[ros.ar_kinect] [2011-05-03 19:07:37,991] [thread 0xb52fd7d0]: [INFO] Subscribing to info topic
[roscpp_internal] [2011-05-03 19:07:37,991] [thread 0xb52fd7d0]: [DEBUG] Publisher update for [/camera/rgb/camera_info]: http://10.1.1.8:33398/, already have these connections:
[roscpp_internal] [2011-05-03 19:07:37,992] [thread 0xb52fd7d0]: [DEBUG] Began asynchronous xmlrpc connection to [10.1.1.8:33398]
[roscpp_internal] [2011-05-03 19:07:38,181] [thread 0xb4afbb70]: [DEBUG] Connecting via tcpros to topic [/camera/rgb/camera_info] at host [10.1.1.8:53304]
[roscpp_internal] [2011-05-03 19:07:38,181] [thread 0xb4afbb70]: [DEBUG] Async connect() in progress to [10.1.1.8:53304] on socket [11]
[roscpp_internal] [2011-05-03 19:07:38,181] [thread 0xb4afbb70]: [DEBUG] Connected to publisher of topic [/camera/rgb/camera_info] at [10.1.1.8:53304]
[ros.ar_kinect] [2011-05-03 19:07:38,203] [thread 0xb52fd7d0]: [INFO] *** Camera Parameter ***
[ros.ar_kinect] [2011-05-03 19:07:38,203] [thread 0xb52fd7d0]: [INFO] Opening Data File /home/peter/ros/albany-ros-pkg/albany_vision/ar_kinect/data/objects_kinect
[ros.ar_kinect] [2011-05-03 19:07:38,204] [thread 0xb52fd7d0]: [INFO] About to load 1 Models
[ros.ar_kinect] [2011-05-03 19:07:38,204] [thread 0xb52fd7d0]: [INFO] Read in No.1
[ros.ar_kinect] [2011-05-03 19:07:38,205] [thread 0xb52fd7d0]: [INFO] Subscribing to image and cloud topics
[roscpp_internal] [2011-05-03 19:07:38,212] [thread 0xb52fd7d0]: [DEBUG] Publisher update for [/camera/rgb/image_color]: http://10.1.1.8:33398/, already have these connections:
[roscpp_internal] [2011-05-03 19:07:38,212] [thread 0xb52fd7d0]: [DEBUG] Began asynchronous xmlrpc connection to [10.1.1.8:33398]
[roscpp_internal] [2011-05-03 19:07:38,214] [thread 0xb52fd7d0]: [DEBUG] Publisher update for [/camera/rgb/points]: http://10.1.1.8:33398/, already have these connections:
[roscpp_internal] [2011-05-03 19:07:38,214] [thread 0xb52fd7d0]: [DEBUG] Began asynchronous xmlrpc connection to [10.1.1.8:33398]
[roscpp_internal] [2011-05-03 19:07:38,382] [thread 0xb4afbb70]: [DEBUG] Connecting via tcpros to topic [/camera/rgb/points] at host [10.1.1.8:53304]
[roscpp_internal] [2011-05-03 19:07:38,383] [thread 0xb4afbb70]: [DEBUG] Async connect() in progress to [10.1.1.8:53304] on socket [14]
[roscpp_internal] [2011-05-03 19:07:38,383] [thread 0xb4afbb70]: [DEBUG] Connected to publisher of topic [/camera/rgb/points] at [10.1.1.8:53304]
[roscpp_internal] [2011-05-03 19:07:38,383] [thread 0xb4afbb70]: [DEBUG] Connecting via tcpros to topic [/camera/rgb/image_color] at host [10.1.1.8:53304]
[roscpp_internal] [2011-05-03 19:07:38,383] [thread 0xb4afbb70]: [DEBUG] Async connect() in progress to [10.1.1.8:53304] on socket [15]
[roscpp_internal] [2011-05-03 19:07:38,383] [thread 0xb4afbb70]: [DEBUG] Connected to publisher of topic [/camera/rgb/image_color] at [10.1.1.8:53304]

Regards Peter

gonzo
05-05-2011, 05:55 PM
Dear Fergs..

I'm new to probabilistic robotics, SLAM, and everything related to this field.
I thought robots like the Phoenix hexapod and the kinematics involved were already cool enough. But that was until
I saw your project. And I must say, this is a lot cooler.

Now, I've started to read some basics about SLAM (Kalman filter, particle filter, etc.) using many references.
But I think that the most effective way to learn is also to implement it on a real robot.

Do you have any suggestions about what platform I should choose? I was thinking a Kinect coupled with an iRobot Create would make a great pair.. What do you think?

And one more question to ask, since I'm new to this: do you think there's any chance of your SLAM being implemented in a fire-fighting robot to compete in Trinity or RoboGames, maybe??
That would be SICK!! :veryhappy:
In Indonesia, roboticists don't know much about this, so I see it as an advantage to compete in the local fire-fighting competition..

Thanks Fergs.. respect!! ;)

lnxfergy
05-06-2011, 02:51 PM
Peter, upon further analysis I just noticed:



header:
seq: 0
stamp: 0.000000000
frame_id:
points[]: 0
width: 0
height: 0
sensor_origin_: 0 0 0
sensor_orientation_: 0 0 0 1
is_dense: 1


In your log file... this means that the first cloud your Kinect output had a width/height of 0. This is very bad, as it probably caused a divide-by-zero error later on when we try to automatically deal with different-size clouds/images.

To me, that says there is an issue with the Kinect driver...
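
In case it helps anyone hitting the same crash: a cheap guard against degenerate clouds avoids the divide-by-zero entirely. This is only an illustrative Python sketch -- ar_kinect itself is C++, and the function names and arithmetic here are assumptions, not the actual code:

```python
# Illustrative sketch only: the names and the arithmetic are assumptions.

def cloud_is_usable(width, height):
    """Reject degenerate clouds (like the width=0, height=0 one in the log)
    before doing any size-based arithmetic on them."""
    return width > 0 and height > 0

def scaled_threshold(base_threshold, cloud_width, image_width=640):
    """Example of the kind of division that a zero-width cloud would break."""
    if not cloud_is_usable(cloud_width, 1):
        raise ValueError("empty cloud: refusing to divide by width 0")
    return base_threshold * image_width // cloud_width
```

A callback guarded this way would simply drop the bad first frame instead of segfaulting.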
-Fergs

lnxfergy
05-06-2011, 03:45 PM
Dear Fergs..

I'm new to probabilistic robotics, SLAM, and everything related to this field.
I thought robots like the Phoenix hexapod and the kinematics involved were already cool enough. But that was until
I saw your project. And I must say, this is a lot cooler.

Now, I've started to read some basics about SLAM (Kalman filter, particle filter, etc.) using many references.
But I think that the most effective way to learn is also to implement it on a real robot.

Do you have any suggestions about what platform I should choose? I was thinking a Kinect coupled with an iRobot Create would make a great pair.. What do you think?

And one more question to ask, since I'm new to this: do you think there's any chance of your SLAM being implemented in a fire-fighting robot to compete in Trinity or RoboGames, maybe??
That would be SICK!! :veryhappy:
In Indonesia, roboticists don't know much about this, so I see it as an advantage to compete in the local fire-fighting competition..

Thanks Fergs.. respect!! ;)

There are a number of interesting questions here, I'm going to address them each individually below.

1) Kinect for SLAM
The Kinect really has revolutionized not only hobby robotics, but robot perception in general. This means there is a lot of ongoing work, but there aren't a lot of finished and polished software packages. The Kinect has a few major limitations for SLAM --


57-degree field of view -- this is a major one. Many SLAM algorithms work in 2D and depend on finding good features in a 2D depth scan. With only a 57-degree-wide scan, you often don't have many features. Good odometry can help here, but most low-cost platforms have terrible odometry.
5.0 m max range -- another major one. Many optimizations SLAM algorithms have leveraged over the past few years depend on the 30 m range of the higher-end lasers.
0.5 m minimum range -- this is dicey for navigation; certainly give the robot a bumper as a backup.
Doesn't detect dark surfaces -- this one really causes me some headaches, as our doors at Albany are all painted black. The robot can't see that a door is closed until it is less than 1.0 m away.
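
To put some numbers on the field-of-view limitation: the width of the strip the sensor can actually see grows with range, and at Kinect ranges it stays small. A quick sketch (the 57-degree and 5 m figures are from the list above; everything else is illustrative):

```python
import math

def visible_swath(range_m, fov_deg=57.0):
    """Width (in meters) of the region a sensor with the given horizontal
    field of view can see at the given range."""
    return 2.0 * range_m * math.tan(math.radians(fov_deg / 2.0))

# At the Kinect's 5 m max range the visible strip is only ~5.4 m wide;
# a 30 m laser with a 180-degree scan sees vastly more of the environment,
# which is why so many 2D SLAM features simply never enter the Kinect's view.
```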

That said, I think we'll see some amazing new 3D or 2.5D map building and navigation algorithms start to arrive on the scene as people get more time with the Kinect. The other possibility is a base with the Kinect mounted higher. Remember, it's only been out since last November...

2) A Platform
The Kinect+Create combo has a lot going for it: a large user community, etc. The TurtleBot (http://www.willowgarage.com/turtlebot) has some additional features you would certainly want to look into, such as the added gyro to improve the odometry output from the Create (which is quite poor).

3) SLAM for Trinity Fire Fighting
Because the map is already known, there is no reason to build it with SLAM -- you can create it ahead of time, in my opinion. As for navigation and localization, there are certainly ways to improve your reliability by adding these.

As for using the Kinect at Trinity -- I think that is a no-go, since it's quite big, and the 0.5 m deadzone is really far too large for such a small maze. Even if you mounted the Kinect on the back edge of the robot, by the time you've proceeded down a hallway far enough to turn towards another hallway, you've already gone past the point where the Kinect loses the wall in front of you...
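
The deadzone argument can be sanity-checked with simple arithmetic. A sketch (the 0.5 m minimum range is from the discussion above; the robot dimensions are made-up assumptions for illustration):

```python
MIN_RANGE_M = 0.5  # Kinect minimum usable range (from the discussion above)

def front_wall_visible(dist_to_wall_m, sensor_setback_m=0.0):
    """True if the wall ahead is outside the Kinect's minimum-range deadzone.
    sensor_setback_m: how far behind the robot's front edge the sensor sits
    (e.g. when mounted on the back edge of the robot)."""
    return dist_to_wall_m + sensor_setback_m >= MIN_RANGE_M

# Even rear-mounting the Kinect on a ~0.3 m robot only buys 0.3 m of margin,
# so walls closer than ~0.2 m to the robot's front edge are still invisible --
# and in a small maze you are routinely that close to a wall.
```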

-Fergs

lnxfergy
05-18-2011, 12:16 AM
Peter, upon further analysis I just noticed:



header:
seq: 0
stamp: 0.000000000
frame_id:
points[]: 0
width: 0
height: 0
sensor_origin_: 0 0 0
sensor_orientation_: 0 0 0 1
is_dense: 1
In your log file... this means that the first cloud your Kinect output had a width/height of 0. This is very bad, as it probably caused a divide-by-zero error later on when we try to automatically deal with different-size clouds/images.

To me, that says there is an issue with the Kinect driver...
-Fergs

Peter, I just committed some patches to ar_kinect after this popped up on ros answers as well.

-Fergs

wmccafferty
05-18-2011, 07:34 AM
For anyone in the future looking for the rosanswers thread: http://answers.ros.org/question/951/ar_kinect-not-finding-published-point-cloud-data

Billy McCafferty
http://www.sharprobotica.com/

lnxfergy
05-23-2011, 05:25 PM
OK, this should now be fixed. In addition, I implemented a bunch of upgrades which should result in MAJOR speed improvements. The newly committed version (r263) now does the following:


subscribes only to the cloud -- it uses PCL functions to create the image, which fixes issues with synchronization
no longer computes normals (which was slow) -- we now use PCL functions to estimate the transformation based on the corner points (which also appears to be more accurate than just computing the normal of the center)
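
For anyone curious how the image can come from the cloud alone: an organized cloud stores its points in row-major image order, so the RGB channel can be reshaped into an image directly. ar_kinect does this through PCL's C++ API; the numpy snippet below is only a rough analogue, and the names are assumptions:

```python
import numpy as np

def rgb_image_from_organized_cloud(cloud_rgb, width, height):
    """Reshape a flat (width*height, 3) per-point RGB array, laid out in
    row-major order like an organized PCL cloud, into an (H, W, 3) image."""
    if width == 0 or height == 0:
        raise ValueError("cloud is not organized; cannot rebuild an image")
    return np.asarray(cloud_rgb, dtype=np.uint8).reshape(height, width, 3)
```

Because the image is derived from the same message as the cloud, there is no separate image subscription left to keep synchronized.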

Please let me know if you encounter any issues.

-Fergs