[Project] "Free" "LIDAR"



Adrenalynn
07-20-2008, 02:48 AM
Since LinuxGuy and I first started talking about the lasers that were being carried in the catalog here, I've been working on designing an effective but super cheap LIDAR. I've put a ton of time and thought into it, and I'm finally getting close to something to show for the effort.

The basic theory is that we calibrate a camera to correct any aberration. Then we subtract a reference (laser-off) image from each laser-on image, essentially creating a flat image that contains only the laser line, and scan the laser line down the area that the camera views. The laser needs to be above (works better than below) the camera by a pretty good bit, but as it scans down the object(s), the line will appear offset by a number of pixels depending upon how close each surface is to the focal plane of the camera. We're exploiting the parallax between the camera and the light source.

Using trig, we can then figure out the distance of any given point in that three dimensional scene of objects. This is a point-cloud. We can assemble that point-cloud into a three dimensional image - but only as much of the image as our laser and camera can see.
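
For anyone who wants to play along at home, here's roughly what that trig looks like in code. This is just a minimal sketch, not my actual code; the constants are placeholders you'd get from calibration. It assumes the laser is mounted a known distance above the camera axis, and it converts the pixel offset of the laser line from the image center into an angle, then into a range:

import math

H = 0.10                 # baseline between laser and camera axis, meters (placeholder)
RAD_PER_PIXEL = 0.0011   # angular size of one pixel, from calibration (placeholder)
RAD_OFFSET = 0.0005      # residual alignment error, from calibration (placeholder)

def range_from_offset(pixels_from_center):
    """Distance to the surface the laser line landed on. The line appears
    offset from the image center by an amount that shrinks as the target
    gets farther away - that's the parallax we're measuring."""
    theta = pixels_from_center * RAD_PER_PIXEL + RAD_OFFSET
    return H / math.tan(theta)

print(round(range_from_offset(52), 3), "m")   # a segment 52 px from center is ~1.7 m out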

My latest experiments [finally] show some good progress, but with still some substantial limitations.

- I need to use a better camera. I've been cheaping out on this project from the start: a free/repaired lipstick camera at only 180 lines. It has horrible light sensitivity and is terribly noisy.

- I need a more focused laser. I'm using a $0.50 pointer shining through a glass bubble-level capsule. The thinner the line, the more resolution and faster the scan can be run.

- There are still a lot of limitations in my software. It takes a full second to scan 60 degrees, but only a couple of milliseconds to process. Part of that is the laser, as above.

- The resolution is pretty good, sub-millimeter (as you'll see), but with this camera it's really only accurate out to six feet or so before it loses too much depth of field.

- Another software limitation: I have to stop moving to scan. Since I subtract my images before running a 1 sec scan, I can't keep moving or the subtracted image and the natural image will no longer match. (A sketch of this subtraction step follows the list.)

- Calibration takes a good ten minutes, but is pretty stable over the entire session.
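
To make the subtraction step above concrete, here's a minimal sketch of the idea (NumPy only, with my own placeholder names, not the production code): grab a laser-off reference frame, subtract it from each laser-on frame, and take the brightest row in each column as the line position.

import numpy as np

def laser_line_rows(reference, frame, threshold=40):
    """Find the laser line given a laser-off reference frame.

    reference, frame: grayscale uint8 arrays of the same shape.
    Returns, for each column, the row index of the strongest laser
    response, or -1 where nothing exceeds the threshold."""
    # Subtract the background; what's left should be mostly laser.
    diff = frame.astype(np.int16) - reference.astype(np.int16)
    diff = np.clip(diff, 0, 255).astype(np.uint8)

    rows = np.argmax(diff, axis=0)                    # brightest row per column
    peaks = diff[rows, np.arange(diff.shape[1])]
    rows[peaks < threshold] = -1                      # reject columns with no line
    return rows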

Here's an image of my progress.

What we see here is the depth map that is created, from the camera's viewpoint. I assembled the point cloud in MATLAB so that we could visualize what the depth looks like. As you can see, I've rotated it somewhat counter-clockwise to show the depth field.

So the upshot is: I'm now confident that this CAN be made to work for next to nothing. There's still at least another month or two of coding on it, but this will at some point fly... It may not be as fast as a commercial SICK/LIDAR system, but it's going to work darn it!

4mem8
07-20-2008, 03:37 AM
M'mmm, interesting work Adrenalynn. I wish I had your talents in this area. Mine is modeling and mechanical, yours is programming. Oh, to have both.

LinuxGuy
07-20-2008, 10:54 AM
M'mmm, interesting work Adrenalynn. I wish I had your talents in this area. Mine is modeling and mechanical, yours is programming. Oh, to have both.
Actually, Adrenalynn does have both of these skills. I've never seen any of her code, so I don't know what her programming ability is, but it's clear she has great mechanical and electronics skills. If she were developing things Open Source, she'd be a great contributor in both hardware and software.

I'd rather skip the mechanical and PCB design stuff and go right to the software. I also do 3D modeling of my designs, and would like to learn more about doing simulations and CAM. My 3D CAD software has modules for simulation and CAM.

8-Dale

Adrenalynn
07-20-2008, 11:33 AM
Although I've conservatively written more than 6,000,000 lines of code in my life, and if you're running Windows post-1996 you're likely running my code and architecture, I don't really identify as a programmer. Although I do modeling, electronics design, mechanical systems design, board layout, ... I don't really identify in there either.

On the technical side, my specialty is a lot narrower. If someone needs my area of expertise, they _really_ need it. I'm a mathematician specializing in numerical methods. In other words, I solve problems by beating them to death with math.

Robotics is an interesting avenue because I can play with my first love AND my second, problem solving in general, which is where my broadest strength lies.

Job-title-wise, that makes me a CTO (Chief Technology Officer) or a minimum position of Lead Architect/Principal Architect.

I write code to solve problems. I do mechanical design to solve problems. I do electrical engineering to solve problems. You get the idea...

My doctorate is actually in business administration, where I... solve problems. Primarily operations issues, although I'm pretty solid on the finance/GAAP side too.

I really don't have the patience or attention span to sit down and bash out code every day. Nor do I have the patience to make things look really really pretty, like your Wall-E. I can turn out some pretty cool code in short sprints, like this laser ranger, but traditionally this is about the time I'm turning it over to a team - all the problems have been solved, now it's code-monkey work...

I admire the tenacity and attention to detail that goes into something like Wall-E, but try as I might I know that I don't have that gift.

robot maker
07-20-2008, 01:23 PM
I know about that type of design.
That's really great you started on it; I haven't had the time to try with my lasers.
I have about 7 lasers and 2 line lasers, and I even got a chance to get an optical bench,
but with all my robot projects, plus home projects and being the cook at my house, it's kinda hard.
The good news is I'm going to retire over 20 years early
and spend full time building robots. Mostly I build and design high-end in-house test equipment for a big company, where I'm in charge of the test and calibration lab. I've been there 19 years, so I guess now you know my age. I also have a photographic memory for electronics: if I see a design for a few minutes, I can later remember the circuit and draw it.
When I started building robots I started to learn programming. Besides robots and cooking, I travel a lot, mostly once a month or two.


milw
07-20-2008, 01:36 PM
Are you guys aware of the David laser scanner software? It has been under development for a while; it was freeware but they may be charging now. It uses a line laser just as you describe, Adrenalynn (I use a Home Depot laser level), and a webcam. The software builds up a 3D mesh model of what you are scanning. So far the applications are fairly small scale, and you need to do a calibration with a right-angled pattern so it knows how to calculate the mesh. But it works pretty slick, and they recently added some features. Check it out at http://www.david-laserscanner.com/
(disclaimer- I'm in no way associated with the David project)

@Linuxguy, what 3d software do you use? I'm heavily into Cinema4D and can use Maya, trying to learn SolidWorks too. C4d is not really intended for CAD tho, more for Pixar type character animation, but I may use it to simulate my mech!

Adrenalynn
07-20-2008, 01:38 PM
Back to laser ranging -

I'm beginning to think this is the wrong direction. I'm getting crazy insanely good resolution, certainly it wouldn't miss a chair leg, but except in an instance of object identification - I suspect this isn't the way to go about navigation. Do I really need sub-millimeter with all the overhead that involves? If I were trying to shoot down projectiles in mid-flight maybe...

I'm thinking about going back to my original point-design that I tossed together when LinuxGuy and I were chatting about it in another thread, and just modifying that for line detection. At first blush, I suspect all I really need is to fix the camera and laser a known distance apart, and then mount that assembly to a tilt head. Just build a map of the distances to everything in view; even ±0.5 cm is fine...

I went back and completely rewrote my calibration algorithm - calibration is down from 10mins to <1min now. Using the same camera with better lighting as well, but it becomes apparent that focus is going to be the killer. That line needs to be razor thin over the entire visible area.

Naw, I'm really starting to think that building a hi-res 3D map is for chumps. I was looking at the TOF (time of flight) systems like the SICK, and they are really very low resolution devices...

Adrenalynn
07-20-2008, 01:41 PM
Are you guys aware of the David laser scanner software? It has been under development for a while; it was freeware but they may be charging now. It uses a line laser just as you describe, Adrenalynn (I use a Home Depot laser level), and a webcam. The software builds...

Thanks for the pointer, I just checked 'em out. Surprised I missed 'em first search a couple months ago.

Alas, totally inappropriate for robotics. It requires a known 90 degree plane be in place to wrap the line around. Great idea for 3D scanning objects, but that methodology won't work in the wild, in much the same way that the interesting 3D scanner in Make Magazine this last time won't work (requires the target to be rotated around a plane)

robot maker
07-20-2008, 01:46 PM
It seems that most everyone has some of each skill, and one or two skills they do really well.
Mostly we need to put our resources together and build a really good design.
I spend a very long time testing each part, electronic or mechanical, before I build the next part.
Also, I don't like the full Johnny Five kit; I like to buy each part separately as needed, since some parts might need to be milled or special-made. The same goes for any robot project.
But I do like Adrenalynn's work on the laser; that's the main design I'm looking at for mine. Even Servo Magazine has info on using lasers with webcams, and I've saved many links to different types of laser tracking designs from the internet to try. One project that's getting near done is a sonar ring; I ordered 25 sonars for a 16-sonar ring design. Mostly the code will be the problem.


milw
07-20-2008, 01:51 PM
Actually, not quite as you described: David has you adjusting brightness and contrast to get only the laser light visible. Issues I've had are with objects that are not evenly colored; you get better reflectance/visibility off of white, and very little to none off of transparent tinted plastic (I tried to scan the head of my white/black Robosapien V2).

robot maker
07-20-2008, 01:51 PM
Yes, I have looked at their website, and they even send me a newsletter every so often.
I haven't had a chance to play with it yet.



Electricity
07-20-2008, 01:55 PM
Wow, I am so lost. Would you mind breaking down the basics of what you're attempting to do to about a 5th grade level for me?
From what I gather, you're trying to create some sort of laser scanner to be used as an optics system for a bot?

Have you considered infrared lasers? I dunno what the benefit would be, if any, but at least now I can pretend I contributed.. :p

robot maker
07-20-2008, 01:55 PM
Also, I guess everyone knows there is a very bad side to lasers: if you have a robot with one mounted and small children in the house, it will blind a child's eyes.

Adrenalynn
07-20-2008, 02:19 PM
Actually, not quite as you described...

Is it accurate to say that it requires a 90 degree intersection of planes in order to perform its ranging? That's the really big sticking point that I see. Automagically puttering with brightness and contrast, saturation, etc is pretty trivial. The intersection isn't (for ranging, great for scanning though)

Thanks, Milw!

Adrenalynn
07-20-2008, 02:24 PM
Also, I guess everyone knows there is a very bad side to lasers: if you have a robot with one mounted and small children in the house, it will blind a child's eyes.


Largely false. The warning for the tiny little lasers we're using is a CYA against the idiot that decides to stare into it for hours on end. There's just not enough coherence with enough power behind it to be blinding people in a flash.


Electricity -

Well, the basic notion is that if one scans a laser line and looks at the reflection coming back from it, and measures the angle between the breaks in the line and the camera's focal plane, then one can employ basic trigonometry and derive the exact distance that those points in space reside.

Effectively I'm building a three dimensional map of everything the camera can see, in much the same way our stereoscopic vision works, only infinitely more accurately.

Once you have that very accurate spatial map, you can get your robot to do path planning through that space so that it doesn't run into anything that it shouldn't.

Make sense?

metaform3d
07-20-2008, 04:15 PM
Really impressive result for something so simple. Great job!

Agreed, the high precision and the need for stability are not ideal for a mobile platform. It would be great for a pick & place system though. Throw a jumble of objects on a table and your system would be ideal for identifying them and determining their orientation.

For more dynamic situations perhaps it could be used to find the range of selected objects. Put a spot laser on a pan & tilt rig and aim for a specific landmark. When you hit it you can triangulate its position. Combined with full FOV stereo you could build a pretty accurate map.

4mem8
07-21-2008, 12:21 AM
This is all great stuff, people. A little beyond me (well, not the application, but the mathematics involved), but I read with interest and look forward to more on this subject.

milw
07-21-2008, 09:34 AM
Adrenalynn, are you using 2 cameras, or only one with the two images? Do you have to take a 'blank' image for every scan line?

robot maker
07-21-2008, 09:44 AM
It's not really that false: a child would stare at it because it's something new and doesn't know it will hurt them.
And a flash is only for a second or so.




Adrenalynn
07-21-2008, 11:18 AM
A scanning laser doesn't stay in place for more than a flash. A SICK completes scans 10x per second; I'm at 60 degrees in 0.19 sec. Not quite as fast, but blazing fast if you're trying to follow that laser.

Milw,

A single camera today. It takes one non-laser photo per scan, 1/30th of a second. The remaining 29 frames are distance estimating frames. Each step is about 2 degrees.

Adrenalynn
07-21-2008, 12:07 PM
Really impressive result for something so simple. Great job!

Thanks, Meta'! Knew I could count on you! :)

I agree, it'd probably work really well in a pick-and-place. It's actually fast enough even on modest hardware, and being able to control the environment would really help.

I found that a large issue was interlacing. I tried some less destructive deinterlacing methods, but really the only viable method I've found is throwing away half my scanlines. The next step, if I wanted to take it there today, would probably be a pure digital non-NTSC camera that was full-frame, non-interlaced.
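
For reference, the "throw away half the scanlines" deinterlace is about one line of NumPy (a sketch, assuming frames come in as 2-D arrays):

import numpy as np

def drop_field(frame):
    """Crude deinterlace: keep only the even scanlines of an NTSC frame so
    the two fields (captured 1/60 s apart) can't smear a moving laser line
    across each other. Halves the vertical resolution."""
    return np.asarray(frame)[::2, :]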

I originally did a point laser ranger in about 20mins and uploaded some screenshots in the OMG Lasers! Pew Pew thread.

I think I over-complicated this project. A single laser line with simple break detection is probably the happy medium. Look at the line between each line break and measure the distance in exactly the same way I measured a single-point distance. It doesn't matter what the image is, or how deep the scene is, just measure everything you can see and build a very simple map rather than a full topographic map that I was building before.
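
Roughly, in Python (a sketch with made-up names and thresholds, building on the column-wise line positions from the earlier extraction sketch):

import math

def segments_from_line(rows, jump=8):
    """Split the per-column line positions into contiguous segments.
    A gap (-1) or a large jump in the row means we crossed the edge
    of an object."""
    segments, start = [], None
    for col in range(len(rows)):
        if rows[col] < 0:
            if start is not None:
                segments.append((start, col - 1))
                start = None
        elif start is None:
            start = col
        elif abs(int(rows[col]) - int(rows[col - 1])) > jump:
            segments.append((start, col - 1))
            start = col
    if start is not None:
        segments.append((start, len(rows) - 1))
    return segments

def simple_map(rows, center_row, rad_per_pixel, h):
    """One (bearing-column, range) pair per detected segment - a crude
    2-D obstacle list instead of a full topographic point cloud."""
    out = []
    for a, b in segments_from_line(rows):
        mid = (a + b) // 2
        theta = (rows[mid] - center_row) * rad_per_pixel
        if theta > 0:
            out.append((mid, h / math.tan(theta)))
    return out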

I've also been thinking about the circuit design for a time-of-flight (TOF) system, but even DIY it's going to be crazy expensive. The whole circuit is multi-GHz of bandwidth. Basically a fiber optic network without the fiber. ;)

I've been avoiding stereoscopic vision. Seems like such a waste of processing power... I just can't see a single sensor as being trustworthy enough for true autonomy. At this point I'm thinking Laser, IR, Sonar, and Force as being the bare minimum.

Thanks for your insightful comments!

ooops
07-21-2008, 01:11 PM
I've also been thinking about the circuit design for a time-of-flight (TOF) system, but even DIY it's going to be crazy expensive. The whole circuit is multi-GHz of bandwidth. Basically a fiber optic network without the fiber. ;)



I took a quick glance at the SICK but didn't sort out what they are using for the receiver. Can you shed light on that?

Adrenalynn
07-21-2008, 01:32 PM
The way that I've seen them work is that they scan a narrow beam over some fraction of a degree, then look for the bounce-back time (time of flight, TOF). They're like little sonars.

The bounce return is picked up by a specialized photosensor that is hyper-tuned to just the wavelength of the laser.

Due to the incredible bandwidth that would be required to deal with this, most of them appear to play tricks with pulsing and Doppler/phase changes of the light.
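
To put rough numbers on why the bandwidth gets silly (back-of-the-envelope, not SICK's actual design): light covers about 30 cm per nanosecond, so getting centimeter-level range resolution out of raw time of flight means resolving tens of picoseconds, which is exactly why the phase/modulation tricks are so attractive.

C = 299792458.0     # speed of light, m/s

def tof_range(round_trip_seconds):
    """Direct time of flight: half the round trip, times c."""
    return C * round_trip_seconds / 2.0

def timing_needed(range_resolution_m):
    """Round-trip timing resolution needed for a given range resolution."""
    return 2.0 * range_resolution_m / C

print(tof_range(20e-9))        # a 20 ns echo puts the target ~3 m away
print(timing_needed(0.01))     # 1 cm resolution needs ~67 picosecond timing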

robot maker
07-21-2008, 01:56 PM
adrenalynn
a great source for lasers is cd players and scanners and also they have much more parts for robots,like optics and motors besides all the laser modules i collected from broken test equipment at work we make and samples,i have also taken apart many scanner,printers and cd machines

robot maker
07-21-2008, 02:04 PM
didnt know you was using as a scanning laser ,but still,needs to be safe guards,like in all electronics can go bad
sick laser does come with a warning sticker,
but mostly i like time of flight design for my robot ,but looking to test both

ooops
07-21-2008, 05:33 PM
Due to the incredible bandwidth that would be required to deal with this, most of them appear to play tricks with pulsing and Doppler/phase changes of the light.
Oddly enough, the Doppler effect was what I was wondering about, but I thought no, that's for moving objects. Anyway, I guess I should read more before asking questions that may already be answered. But you have my attention on this one ;)

ooops
07-21-2008, 05:59 PM
Due to the incredible bandwidth that would be required


OK, I am still lost ... not that anyone should be surprised.
Where is the issue with the bandwidth? Is it in the number of calculations or in the transmission?
Just when I thought I knew something about light, you make me question everything :)

Electricity
07-21-2008, 06:21 PM
So I'm very confused. But I'll keep reading; maybe I'll start to understand.
The basic idea makes sense to me; it's the crazy stuff after that.

robot maker
07-21-2008, 06:23 PM
OK, I am still lost ... not that anyone should be surprised.
Where is the issue with the bandwidth? Is it in the number of calculations or in the transmission?
Just when I thought I knew something about light, you make me question everything :)
For a low-cost laser ranging system, a webcam-based LIDAR like Adrenalynn is working on is the best.
A time-of-flight system will cost a lot more and will need a faster processor and special wide-bandwidth circuits. In the big robotics club on Yahoo, the Seattle Robotics club, there is a lot of talk on that subject.

Doctor Robotnik
07-27-2008, 04:01 AM
Even with a short distance it still is way cool, and it's definitely worth more than its cost. Maybe if you add another one 90 degrees apart you could give it full local spatial awareness? (Is that even a phrase?) That would be good for avoiding obstacles in a 3D world.

Adrenalynn
07-27-2008, 05:42 AM
Hi Dr Robotnik,

First and foremost - welcome to the TRC!

I really appreciate your thoughts - let me describe what I'm observing from this nearly-failed experiment...

An advantage to a laser system such as this is that it doesn't need a second imager. It is inherently stereoscopic and massively more accurate. It can tell you to a fraction of a millimeter where any visible point in space is, and it can take those samples in a fraction of a second - but that's also its failing for a navigational system. Imagine that when you looked at a tree your brain computed the exact dimensions of every leaf and its exact distance from you and from every other leaf. Your brain would end up leaving a very messy misty grayish pink coating over everything surrounding you. Too much data. POP!

Every time this system samples its environment it has to deal with a point cloud representing more than a million pieces of information. As implemented, it has the exact distance of about 1.3 million points in space and their relation to each other. Data overload. Deciding how to optimize that data in a lossy fashion is a non-trivial problem. How do you really know what can be safely thrown away or ignored without processing all that data first? And every fraction of a second you need to make those decisions again. In the short space between those, you need to decide how to act upon that datastream.
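
(Just to illustrate the kind of lossy decision I mean, and not what I actually implemented: one common reduction is to snap the cloud to a coarse grid and keep a single point per cell.)

import numpy as np

def grid_downsample(points, cell=0.02):
    """Keep roughly one point per 2 cm cube. points is an (N, 3) array of
    x, y, z in meters; returns a much smaller (M, 3) array."""
    keys = np.floor(points / cell).astype(np.int64)
    _, keep = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(keep)]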

I believe a worthwhile avenue for exploration in navigation is to simplify this system by sampling less data. If we just simply scan a laser line across the scene, look for breaks in the line (where objects start and stop) and then measure the simple angle to each obvious obstacle, we have a system with millimeter accuracy [or better] that doesn't get swamped with data overload.

I "over-solved" navigation, but as Metaform3D points out, it probably has application in problems such as object recogniton with variable orientation. That's a great problem too, but not what I was trying to solve. ;)

Again, thanks and Welcome!

LinuxGuy
07-28-2008, 04:58 AM
@Linuxguy, what 3d software do you use? I'm heavily into Cinema4D and can use Maya, trying to learn SolidWorks too. C4d is not really intended for CAD tho, more for Pixar type character animation, but I may use it to simulate my mech!
I'm using Alibre Design (http://www.alibre.com).

8-Dale

rudi
07-29-2008, 10:01 PM
Here are some links that may be of interest:

http://www.pages.drexel.edu/~twd25/webcam_laser_ranger.html
http://www.seattlerobotics.org/encoder/200110/vision.htm
http://www.ugcs.caltech.edu/~dstaff/distance.html
http://ashishrd.blogspot.com/2006/11/obstacle-detector-using-webcam-and.html
http://ashishrd.blogspot.com/2007/03/autonomous-rc-car-ii-with-wireless.html

Adrenalynn
07-29-2008, 11:40 PM
Hi Rudi, welcome to the TRC!

Addressing your links:

- Dot-based. Same idea as my first. Too low of fidelity and speed for my use, or to really be thought of as a "lidar"
- I saw that page - that gentleman's a crazy hacker with some cooool stuff. Alas that bill of materials starts to look like a real SICK price-wise. :)
- Hadn't seen that page, thanks! Looks similar to #1 - low fidelity dot based.
- Same
- Same

The project I describe here is very different, but alas suffers from the problem of being too high fidelity. What I want to work on at this point is something like #2 above, only PC-webcam based rather than embedded hardware based.

Thanks for digging these up! And again, welcome to the TRC!

robot maker
07-30-2008, 04:03 PM
The Seattle Robotics Encoder has a lot of good info on the site. I saw the same link and wanted to make something close to that design using mostly a webcam and CPU.

Adrenalynn, have you tried different lasers, like different colors, straight-line, and crosshair types?
My job is so great I can get samples of what I need, or the company can buy them.
I also have a project counting gold rings in a special sensor tube for detecting freon, so I will need a laser for it.




Adrenalynn
07-30-2008, 04:25 PM
Yes. It's not so much the laser color as requiring a line. An inexpensive line level from Harbor Freight ($20), one of the larger survey-style ones, is ideal.

The larger project to get the "perfect" amount of detail is *very* quickly looking at the line breaks and the parallax in between each break to determine distance. This ideally should be done at least ten times a second.

robot maker
07-30-2008, 08:11 PM
I thought you were using a dot laser; or are you using a line laser?
We use a single dot laser in an infrared thermometer and have a lot of them, plus a few line lasers.



rudi
07-30-2008, 11:13 PM
Thank you for the warm welcome and response.

I got some line lasers for < $4 each here: http://search.ebay.com.au/_W0QQsassZff77hh

Adrenalynn
07-30-2008, 11:45 PM
Without many many hours of scanning, it would be all but impossible for a point laser to make a 3D image like the ones I was showing. The first was a line generated through homemade optics (bubble level). Second was a line from a cheap line level.

I'm sure those IR thermometers are great, but once you convert them to a line, how wide is it, and what does it look like through a cheap camera? Because if the line is not very bright, very thin, and very visible in the camera, it's utterly worthless.

Hi Rudi, those are pretty cool, but the shipping will eatcha alive (at least to the US here).

robot maker
07-31-2008, 12:42 AM
I will have to try them.
So the one from Harbor Freight is the one you are using?
I will look into getting one to compare to the ones I have. What class is the one you are using, and what power?

My favorite place for optics is Edmund Scientific, and for an optical bench, Scientifics Online (same company, just lower-cost optics and a lot more). I'm looking into buying a fiber optic system for my robot hand, something like a borescope, for the beer-bot hand to find a beer can or bottle using RoboRealm.



Adrenalynn
07-31-2008, 02:19 AM
Class and power is:

Fell off the boat on the way over from China, bobbed to the surface, and ended-up floating into the Port of Sacramento where someone scraped it up and tossed it on a store shelf. ;)

metaform3d
08-08-2008, 04:52 PM
Had another question about the LIDAR setup and thought I should post here rather than the group project thread.

As I understand it, the horizontal laser line is projected parallel to the ground but higher (or lower) than the camera. If the camera is also looking parallel to the ground then the ranging math is trivial.

I was wondering if you had tried projecting a vertical line. That could be very useful for finding inclines, bumps and drop-offs in the robot's path. I could imagine there might be a problem with beam intensity, as it might not reflect back from the nearly parallel ground as well as from vertical obstacles. If it worked you could also get a measure of the roughness of the ground.

Adrenalynn
08-08-2008, 04:58 PM
Thanks for your thoughts! Actually vertical is what I've been playing with lately. It's a lot more challenging because the line breaks that I use to detect the edges of an object can be obscured by the object itself. If you imagine scanning your face, for example, and the camera is offset from the laser, your nose can obscure half your face.

I've been thinking about a "traveling laser" that arcs out, and maybe the camera tracking with it, so it could do a, say, 60-90deg arc out a foot in each direction and overcome some of that limitation, at least in the size of the 'bots we're talking about.

Hephaistos
08-19-2008, 12:29 PM
I believe a worthwhile avenue for exploration in navigation is to simplify this system by sampling less data. If we just simply scan a laser line across the scene, look for breaks in the line (where objects start and stop) and then measure the simple angle to each obvious obstacle, we have a system with millimeter accuracy [or better] that doesn't get swamped with data overload.

This really does seem like a great idea for object detection and avoidance. In your experience, how does it measure up to more traditional sensors such as infrared and ultrasonic? After reading through this thread I was thinking of trying to add this type of a setup to the base of my bot to detect and avoid obstacles.

By "this type of setup" I mean a webcam with a laser above it doing a simple laser line across the scene in front of it. Then using the data to detect obstacles and navigate through doorways.

Since the devices are so easy and cheap to build, maybe even put one on all four sides of the bot's base.

Adrenalynn
08-19-2008, 12:55 PM
Well, for a line detector the software is a little more complicated. I'm not quite ready to release source for that yet - but if you're comfortable with video and graphics programming it's not terribly tough. I have source code for using a point-source, however.

How does it measure up? I don't think I'd use any single sensor exclusively. It's certainly better than IR in all instances I've explored. It fails horribly on specular surfaces and really deep blacks; for those, ultrasound works better. It succeeds in many instances where ultrasound fails, though. At a minimum, I'd have both, since they're both inexpensive and ultrasound is a dirt-simple "gimmie".

jes1510
08-19-2008, 04:23 PM
I know next to nothing about vision systems so this seems like a great project. How does the performance fare against using two cameras and using the parallax between the two to detect obstacles?

Adrenalynn
08-19-2008, 04:29 PM
So far, I've seen very little success out there in stereoscopic ranging, and a lot of effort. With a fair bit of effort I can get sub-millimeter accuracy; with moderate effort, a couple mm accuracy.
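
For context on why webcam stereo struggles (standard pinhole-stereo math, not a dig at any particular rig): depth is Z = f*B/d, so the error from even a half-pixel disparity mismatch grows with the square of the distance.

def stereo_depth_error(Z, focal_px, baseline_m, disparity_err_px=0.5):
    """Approximate depth uncertainty of a stereo pair: Z = f*B/d implies
    dZ ~ Z**2 / (f*B) * dd for a disparity error dd (in pixels)."""
    return Z * Z / (focal_px * baseline_m) * disparity_err_px

print(stereo_depth_error(2.0, 600, 0.06))   # roughly 5.6 cm of error at 2 m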

Hephaistos
08-19-2008, 05:20 PM
I know next to nothing about vision systems so this seems like a great project. How does the performance fare against using two cameras and using the parallax between the two to detect obstacles?
In my experience, stereo vision has been useful in determining what is in close proximity to the robot but not necessarily exactly how close. I use it to limit the field of interest for the robot to make some of the vision processing routines faster and more accurate (filtering out background "noise"). It would not be extremely useful in doing something like mapping a room, where this laser level approach looks to be very promising.

jes1510
08-20-2008, 11:35 AM
So far, I've seen very little success out there in stereoscopic ranging, and a lot of effort. With a fair bit of effort I can get sub-millimeter accuracy; with moderate effort, a couple mm accuracy.

That's good information. This is a new area for me and I'm really interested. I picked up a power wheels car at a yard sale for $27 and it is just screaming to have a full computer running Linux and a LIDAR system.

Adrenalynn
08-20-2008, 12:51 PM
If you're comfortable with writing the software, I can go into the design and algorithm details.

ScuD
08-20-2008, 01:52 PM
A few years ago I read about some guy who used a laser line, a b&w camera with a filter in front of it, and a CPLD as a laser ranging device.

The filter was basically a very narrow bandpass filter, tuned for the red of the laser, so he only had a white line on black background.

Next he scanned each pixel, line by line, and as soon as he found a level over the "white" threshold he'd have an indication of how far a certain object was from the camera.

He had explained it loads better, very easy to read and very graphically intensive site, but I can't seem to retrieve it. If anyone perhaps knows this link, it'd be much appreciated..

jes1510
08-20-2008, 02:43 PM
If you're comfortable with writing the software, I can go into the design and algorithm details.

My problem is I don't know enough to know what I need to know. Python has some good video processing libraries and I'm pretty comfortable with python. I have never done video stuff before though.

I'm sure the design and algorithm details would be appreciated by all though.

Adrenalynn
08-20-2008, 02:51 PM
ScuD, the downside was, from my perspective, that it was very hardware intensive with an over-grown BOM. He was hacking the info out of the scanlines. If you're not going to put a computer on your 'bot, then it's a good idea. I think this is the one you were referring to: http://www.seattlerobotics.org/encoder/200110/vision.htm

jes, I'm really not all that fond of working with video on Linux. I started writing device drivers for security cameras on it when v4l was first released, and I've just gotten less fond since. ;)

ScuD
08-20-2008, 03:08 PM
Lynn, you're the best!! Thanks a bunch! I've been looking all over for that site!

Adrenalynn
08-20-2008, 03:27 PM
:) You're welcome!

I actually had to buy a bookmark management application - I have a gihugic collection of stuff I've stumbled upon. ;)

Hephaistos
08-21-2008, 08:15 AM
My problem is I don't know enough to know what I need to know. Python has some good video processing libraries and I'm pretty comfortable with python. I have never done video stuff before though.

I'm sure the design and algorithm details would be appreciated by all though.

You might want to check out Microsoft's IronPython. It's a full implementation of Python 2.5, that runs under what's called the DLR (Dynamic Language Runtime), which in turn runs on the CLR, which is the base of C# and VB.NET.

Using IronPython would allow you to leverage your knowledge of Python on a Windows platform that also would give you access to any .NET code (and access to the full .NET Framework). This means that if someone were to write some vision processing routines in C#, you could use them from Python.
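
For example, calling into a hypothetical C# vision assembly from IronPython would look roughly like this (the DLL, class, and method names here are invented purely for illustration):

import clr

# Load a custom .NET assembly (hypothetical name).
clr.AddReferenceToFile("VisionRoutines.dll")
from VisionRoutines import LaserLineFinder

# Anything in the .NET Framework is available the same way.
clr.AddReference("System.Drawing")
from System.Drawing import Bitmap

frame = Bitmap("capture.png")
finder = LaserLineFinder()
print(finder.FindLine(frame))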

Here are some IronPython references to get you started:

The wikipedia entry (http://en.wikipedia.org/wiki/IronPython) has some interesting history as well as the current state of the project.

The main project page (http://www.codeplex.com/IronPython) for IronPython that links you to the downloadables and documentation.

IronPython in Action (http://www.manning.com/foord/) is an upcoming book from Manning. The first chapter is available as a free download and is well worth printing out and reading (if you're interested that is). It's an early release and so has not yet been edited. So it's got a little extra entertainment value to boot!

It should be noted that this is also one of Microsoft's first projects released on their new open source initiative.

Hephaistos
08-21-2008, 08:17 AM
If you're comfortable with writing the software, I can go into the design and algorithm details.

I am interested in any algorithms and theory you would be willing to share relating to the use of a laser level. I've read several of the links already posted using the "dot" approach, but the laser level approach, using breaks in the line for object detection, has some fascinating potential.

jes1510
08-21-2008, 11:07 AM
You might want to check out Microsoft's IronPython. It's a full implementation of Python 2.5, that runs under what's called the DLR (Dynamic Language Runtime), which in turn runs on the CLR, which is the base of C# and VB.NET.


Very cool stuff. I'll check into it!

robot maker
08-21-2008, 02:09 PM
Adrenalynn,
what software did you get for bookmarks? I must have over 1000 and it's hard to locate the one I need;
mostly robot related, but some cookbook recipes.



Adrenalynn
08-21-2008, 02:29 PM
NetVisualize

robot maker
08-22-2008, 10:07 AM
Thanks a lot!


LinuxGuy
08-22-2008, 02:05 PM
My problem is I don't know enough to know what I need to know. Python has some good video processing libraries and I'm pretty comfortable with python. I have never done video stuff before though.
Oh yes! Linux + Python will be controlling W.A.L.T.E.R. I already pretty much have the software setup I want (Python + Phenny [IRC bot] + Django [web framework] + SQLite). Phenny and Django are both written in Python. :veryhappy: I just need to connect Django to SQLite (pysqlite). Oh, and I will be using the Cherokee or lighttpd web server.

8-Dale

jes1510
08-22-2008, 11:11 PM
If you don't want to do a full web server then Python has great support for UDP and TCP/IP sockets
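
A bare-bones UDP listener with the standard library is only a few lines (the port number here is arbitrary):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9000))      # listen for small telemetry packets

while True:
    data, addr = sock.recvfrom(1024)
    print(addr, data.decode(errors="replace"))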

robot maker
08-28-2008, 10:33 AM
Adrenalynn,
how is the LIDAR project coming along? Anywhere near finished?

Adrenalynn
08-28-2008, 10:54 AM
My main projects right now are: resumes and the ROM second draft.

jes1510
10-21-2008, 06:34 AM
www.Woot.com has laser levels at 2 for $10 today with shipping.



jes1510
10-30-2008, 09:36 AM
Hey Adrenalynn I have a couple of questions. I received my Laser levels yesterday and I think I am going to play with this some tonight.

Are you using Roborealm for capturing and analyzing the video? I noticed that there is a "Laser Line" extension but it doesn't seem to be very sensitive. I admittedly only got to play with it for about 5 minutes last night.

If you are using Roborealm then were you doing any other filtering on the image?

robot maker
11-05-2008, 07:58 PM
Adrenalynn, any new changes or updates on the LIDAR project?

No0bert
12-06-2008, 09:26 PM
I just finished the schematics for a PICAXE-based laser rangefinder. I will post a topic on this as soon as the board is shipped :D.

ahab
12-07-2008, 01:19 AM
I have a Hokuyo UBG (http://www.acroname.com/robotics/parts/R313-HOKUYO-LASER3.html) that I've been playing with lately... very impressive stuff. If there is any insight I can provide I'd be more than happy to. I'd like to see more things like this advance so we've got more choices for range finders available.

(I have drivers written for it in Ada, which are easily translated to Python.)

Adrenalynn
12-07-2008, 02:10 AM
No0bert -

Can't wait to see it! Are you doing single-point like my post in another thread for long yonder, or a point-cloud representation as I did here, or something else entirely? If you're doing a point-cloud - how are you trimming it down to something your embedded CPU can handle?

robot maker
12-09-2008, 10:18 PM
I see RoboRealm has 2 plug-ins, laser line and laser spot.
I hope to play with them this weekend.
Adrenalynn, have you tried this idea yet?
It would be great to compare it with other designs; it doesn't cost anything but the lasers needed.
The only problem might be quality and frame rate, and frame rate can maybe be fixed with a better camera.
A good thing is I like RoboRealm: it has so many filters and is easy to interface to almost any robot microcontroller.



jes1510
12-09-2008, 10:25 PM
Robot maker, I have been trying this exact thing. The problem is that the image is incredibly noisy, and the laser light is really crappy from 2 different levels and 3 different pointers.

I simply have not been able to get the filters right when looking for the line. It always picks up ambient noise so I end up with a falsely broken line. There may be a way to do it but I haven't had much success with my limited amount of playing.

edit: Also the range had to be really close.

robot maker
12-10-2008, 12:55 AM
Here is an idea. I've been so busy I haven't had a chance to play with it, but ask the question on the RoboRealm forum; Steven is a great help, since he designed most if not all of the RoboRealm software.
Another problem might be the camera you are using.
I'm looking to buy a 4-channel 60 fps DVR card; most webcams are 30 fps.
What camera are you using?


Adrenalynn
12-10-2008, 02:03 AM
I didn't use roborealm - I wrote my own code to do it.

I did two different experiments. One was using a single point source, sampling the brightest spot, and taking the angle to it.

The results you see in this thread were very much more complicated - far more complicated than one could really get in Roborealm without a lot of add-on DLL work - it was easier by far to do it from scratch. This is scanning a line and building a 3D point-cloud. Very impractical because of the sheer volume of data. I didn't spend any time coming up with a good culling methodology to try to trim it down. It just wasn't looking like a practical direction of exploration. I have some newer ideas that I've been playing with - not quite ready to detail them, but they're still looking pretty promising.

robot maker
12-10-2008, 11:38 PM
when you do it your seems better then roborealm
cant wait for the design to test and check it out
i am looking to buy SURVEYOR SRV-1 camera it looks very good with built-in 500 mhz processor
instead of using cpu power on a motherboard and has laser drivers
seems like a perfect camera for LIDAR PROJECT but also face detecting and more
mostly getting it for face detecting and object detecting



rckrchrdsn
12-24-2010, 11:01 PM
Does anyone know what happened with this project? It seems to end here right about the time there was some progress. :confused:

artifice
02-10-2011, 08:44 AM
Good afternoon!

I'm contacting you because I've come across a problem I think you can help me with. I've been trying to build a laser rangefinder, but I really don't know very much about the subject.
I'm working on a final-year project in advanced electronics. I have the liberty to search for schematics wherever I can, but I can't find any and I don't know how to start with them. So if you could please send me whatever information you have, I would be eternally grateful: any kind of schematics, mounting techniques, etc.

I will leave you the components that I'm using, so that could be a guide for your help.

laser diode SPL PL90, 905 nm
laser receiver, 905 nm
TDC-GP1
beam splitters (don't know how to mount them)
IR filter
The laser has to measure between 30 and 100 meters and be eye safe.

Many thanks for your attention. I will wait for your response.

Sincerely,

Ishmael Garcia

P.S. These are the components that I would like to use, but if you have anything else, please send it to me too.
Oh, and sorry about the bad English, but it's not my mother language :)

lnxfergy
02-10-2011, 09:01 AM
Ishmael,

Just a heads up, but Adrenalynn hasn't been active on this forum for quite some time.

-Fergs

jwatte
11-16-2012, 12:45 AM
How about you use an actual useful reference on LIDAR (http://en.wikipedia.org/wiki/Lidar)?

rckrchrdsn
11-16-2012, 12:27 PM
How does LIDAR work? LIDAR works by firing a laser beam and either analyzing the time the beam took to come back to a receiver next to the laser (TOF, time of flight), or measuring where the return lands on an image sensor (parallax). Now add scanning and you start collecting measured points around the receiver. You can make it a 2D set of points to sense whether there are reflective (probably hard) surfaces and avoid them. Assemble these points into a "map" as you move and you have Simultaneous Localization and Mapping (SLAM).

In 2D you are rotating the laser/sensor (L/S) in a circle, or some portion of a circle, but the laser always stays in the plane of the L/S as it spins. This means that if your platform changes angles, you have to add in data from an IMU to keep track of where those dots should fall in real space, so just jumping around on a bumpy area could give you a bit of a 3D view. However, if you want to go 3D, you have to have a way of changing the angle of the beam to a known angle so each dot can be added to the growing cloud of data. Companies do this several ways, for example by angling a mirror that is already spinning (which means the actual area covered is limited in x and y, remembering that you are always moving the L/S's plane to get the Z, kinda).
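
As a rough illustration of the 2D case (a sketch, not tied to any particular unit): each reading is just an angle plus a range, and the map is those readings dropped into the robot's coordinate frame as it moves.

import math

def scan_to_points(readings, robot_x, robot_y, robot_heading):
    """Convert one 2-D scan into world-frame points.

    readings: list of (angle_rad, range_m) pairs, angles measured relative
    to the robot's heading. The returned (x, y) points can be accumulated
    into an occupancy map as the robot moves."""
    points = []
    for angle, r in readings:
        a = robot_heading + angle
        points.append((robot_x + r * math.cos(a),
                       robot_y + r * math.sin(a)))
    return points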

Does this help?