View Full Version : Line Extraction with IR sensor/servo scanner

01-09-2011, 08:45 PM
Feature extraction is commonly used in SLAM applications. It is useful because it reduces the amount of data the mapping functions have to process. Some applications extract lines and curves from the scan data and send them to a global map; then, during later scans, if the same feature is recognized, the robot's movement can be calculated. Feature extraction is usually only used on robots that will operate in structured environments (indoors, office spaces, homes, etc.) as opposed to unstructured environments (with trees and bushes around), because homes and offices have walls that are lines. I have even seen applications that take advantage of the Cartesian worlds we construct around us even farther by correcting the extracted lines under the assumption that all corners are 90 degrees, and it seems to work great in office settings.

Unfortunately, this process requires very dense scan data, so LIDAR sensors are usually used. I'm trying to find out if it is feasible to do line extraction on scan data from an IR sensor mounted on a servo. I have the sensor and servo hooked up to a project board (Freescale HCS12 MCU), which sends the data to the PC where Matlab takes over.
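For anyone trying a similar setup: each sweep sample is a (servo angle, range) pair, which has to be converted to Cartesian points before any line extraction. Here's a minimal Python sketch of that conversion (my actual processing is in Matlab; the angle limits and step size below are made-up example values, not the real rig's):

```python
import math

def sweep_to_points(readings, start_deg=-90.0, step_deg=2.0):
    """Convert a servo sweep of range readings (e.g. cm) into (x, y) points.

    readings[i] is the range measured at angle start_deg + i * step_deg,
    with the sensor at the origin facing along +x.
    """
    points = []
    for i, r in enumerate(readings):
        theta = math.radians(start_deg + i * step_deg)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Three near-equal readings around 0 degrees suggest a wall facing the sensor.
print(sweep_to_points([30.0, 30.0, 30.0], start_deg=-2.0, step_deg=2.0))
```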

I messed around with a few different line extraction methods, and Split and Merge seemed to do the best (as opposed to the Hough transform, which takes significantly more processing time). I learned about the different algorithms from a paper by Nguyen et al. (including Roland Siegwart) titled "A Comparison of Line Extraction Algorithms using 2D Laser Rangefinder for Indoor Mobile Robotics" (http://infoscience.epfl.ch/record/97571/files/nguyen_2005_a_comparison_of.pdf)
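The split step of Split and Merge is simple enough to sketch in a few lines. Here's a minimal Python illustration (not my Matlab code): draw a chord between the first and last points of a segment, find the point farthest from it, and if that distance exceeds a threshold, split there and recurse. The merge pass and the threshold value are left out / made up:

```python
import math

def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x0, y0), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
    den = math.hypot(x2 - x1, y2 - y1)
    return num / den if den > 0 else math.hypot(x0 - x1, y0 - y1)

def split(points, threshold):
    """Recursively split an ordered scan into line segments.

    Each segment is represented by its two endpoint coordinates.
    """
    a, b = points[0], points[-1]
    # Find the point farthest from the chord joining the endpoints.
    d_max, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = point_line_dist(points[i], a, b)
        if d > d_max:
            d_max, idx = d, i
    if d_max > threshold and len(points) > 2:
        # Too far from a straight line: split at the farthest point, recurse.
        return split(points[:idx + 1], threshold) + split(points[idx:], threshold)
    return [[a, b]]

# A scan sweeping across a corner: the wall bends at (5, 0).
scan = [(0, 5), (1, 4), (2, 3), (3, 2), (4, 1), (5, 0), (5, 1), (5, 2), (5, 3)]
print(split(scan, threshold=0.5))  # -> [[(0, 5), (5, 0)], [(5, 0), (5, 3)]]
```

A real implementation would follow this with a merge pass that rejoins adjacent segments whose fitted lines are nearly collinear.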

I did some desktop tests and I don't like the results very much. The problem is that the scan data is just too sparse to do anything with. I posted a video that shows it working when objects are within a foot of the sensor, but once they are farther away (>2 ft), the data is too messy and there is no way to get valuable information out of it.


My goal is to utilize SLAM on a cheap robot, hopefully using this sensor setup, encoders, and maybe another one or two rangers (IR or sonar). Next on my list, I'm going to try scan matching. This process takes two consecutive scans and figures out how to translate and rotate one scan onto the other; by doing this you know how much the robot has moved. I was hoping to do this by matching lines from one scan to the next, but the line extraction didn't work out too well, so I will be using raw scan data. The method I'm going to use is called metric-based Iterative Closest Point (MbICP); it's described in a paper titled "Metric-based scan matching algorithms for mobile robot displacement estimation" by Javier Minguez ( http://webdiis.unizar.es/~jminguez/MbICP_ICRA.pdf )
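For reference, here is what the basic point-to-point ICP loop looks like, which MbICP builds on (MbICP's contribution is a distance metric that accounts for rotation; this plain version is just the classic algorithm). The test scans, iteration count, and brute-force matching are all illustrative assumptions:

```python
import math

def estimate_rigid_2d(src, dst):
    """Least-squares 2D rotation + translation mapping src[i] onto dst[i]."""
    n = len(src)
    sx = sum(p[0] for p in src) / n; sy = sum(p[1] for p in src) / n
    dx = sum(q[0] for q in dst) / n; dy = sum(q[1] for q in dst) / n
    num = den = 0.0
    for (px, py), (qx, qy) in zip(src, dst):
        px, py, qx, qy = px - sx, py - sy, qx - dx, qy - dy
        num += px * qy - py * qx   # cross terms give the rotation angle
        den += px * qx + py * qy
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    return theta, dx - (c * sx - s * sy), dy - (s * sx + c * sy)

def icp(scan, ref, iters=20):
    """Align `scan` onto `ref`; return the accumulated (theta, tx, ty)."""
    cur = list(scan)
    t_theta = t_tx = t_ty = 0.0
    for _ in range(iters):
        # Nearest-neighbour correspondences (brute force, fine for small scans).
        matched = [min(ref, key=lambda q, p=p: (q[0]-p[0])**2 + (q[1]-p[1])**2)
                   for p in cur]
        theta, tx, ty = estimate_rigid_2d(cur, matched)
        c, s = math.cos(theta), math.sin(theta)
        cur = [(c*x - s*y + tx, s*x + c*y + ty) for x, y in cur]
        # Compose the new increment with the running total transform.
        t_tx, t_ty = c*t_tx - s*t_ty + tx, s*t_tx + c*t_ty + ty
        t_theta += theta
    return t_theta, t_tx, t_ty

# Fake data: an L-shaped wall, and the same scan shifted by (0.05, -0.03).
ref = [(0.2 * i, 0.0) for i in range(20)] + [(4.0, 0.2 * i) for i in range(1, 20)]
scan = [(x + 0.05, y - 0.03) for x, y in ref]
theta, tx, ty = icp(scan, ref)
print(round(theta, 3), round(tx, 3), round(ty, 3))  # approx. 0.0 -0.05 0.03
```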

03-25-2012, 10:14 PM
Hi there, any progress on this subject? I am also interested in rudimentary map orientation that does not require a laser sensor. I don't intend to do SLAM, but I would like to take it as far as possible. Thank you for sharing your efforts!