
[Question(s)] Robust Autonomous Robot Localization Using Interval Analysis



aadhavan
12-13-2009, 12:21 AM
Hi,

I'm going to do my project on an autonomous robot, and I found an IEEE abstract.

Here it is:

Abstract:

This paper deals with the determination of the position and orientation of a mobile robot from distance measurements provided by a belt of onboard ultrasonic sensors. The environment is assumed to be two-dimensional, and a map of its landmarks is available to the robot. In this context, classical localization methods have three main limitations. First, each data point provided by a sensor must be associated with a given landmark. This data-association step turns out to be extremely complex and time-consuming, and its results can usually not be guaranteed. The second limitation is that these methods are based on linearization, which makes them inherently local. The third limitation is their lack of robustness to outliers due, e.g., to sensor malfunctions or outdated maps. By contrast, the method proposed here, based on interval analysis, bypasses the data-association step, handles the problem as nonlinear and in a global way and is (extraordinarily) robust to outliers.


I need help with the interval analysis concepts and the algorithm.

Please give me some ideas. Thanks in advance.

I have the article this project is based on; if you need it, I'll mail it. Please help me understand the calculations and the equations. Thanks.

MikeG
12-13-2009, 10:02 PM
Interval analysis takes into account the inherent error in sensors like ultrasonic and IR. Using ranges and set operations (intersections and unions), we can put reliable bounds on the distance to a landmark, thereby localizing the robot within its current map and translating that position to a global coordinate frame.
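As a rough illustration of the idea (the readings and tolerances below are made up, not taken from the paper), a reading r with error bound e becomes the interval [r - e, r + e], and consistent readings are combined by intersection rather than by averaging:

def interval(reading, tolerance):
    # Represent a sensor reading as a closed interval (lo, hi)
    return (reading - tolerance, reading + tolerance)

def intersect(a, b):
    # Intersection of two intervals; None means the readings are inconsistent
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

# Two ultrasonic readings of the same landmark distance (metres), each +/- 5 cm
d1 = interval(2.43, 0.05)   # (2.38, 2.48)
d2 = interval(2.47, 0.05)   # (2.42, 2.52)
print(intersect(d1, d2))    # (2.42, 2.48): the true distance is guaranteed to lie
                            # in here as long as neither reading is an outlier

The robustness to outliers mentioned in the abstract presumably comes from relaxing that intersection (keeping solutions consistent with all but a few measurements), but check the paper for the exact scheme.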



You need a background in transformation geometry and vector analysis to understand the equations in your attached PDF.

aadhavan
12-13-2009, 11:29 PM
OK, thanks MikeG for your guidance. I'm starting to study transformation theory. If I have doubts, can you clarify them for me? I have very little time for this project.

aadhavan
12-14-2009, 12:47 AM
Thanks. Can you explain equations [4] and [6] in that attachment?

Adrenalynn
12-14-2009, 01:12 AM
Have you spoken to your professor/advisor about it?

aadhavan
12-14-2009, 01:15 AM
They don't know anything about this concept... worst professors in the world.

I just need some clarification on the "conditions": how are they using those "conditions"?

For example, in equations [4] and [6].

Adrenalynn
12-14-2009, 01:51 AM
I think a lot of people are hesitant to do homework/school projects for new users. We see it every semester. Someone comes in, the collective minds of the forum do their project and/or homework for them, they get their grade and then vanish, never contributing anything to the community. It's a give-and-take thing, but many of us have been bitten by the "take-and-run".

aadhavan
12-14-2009, 02:02 AM
If I had enough knowledge, I could reply to posts, but I'm a newbie, so it's the experts' turn to teach newbies like me. Once I have the knowledge, I'll be the expert replying to posts in this forum/community.

MikeG
12-14-2009, 08:18 AM
Equations 4 and 6 describe how a sensor (cone) reading in the robot's frame relates to the world frame, using standard vector (matrix) notation.
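Not the paper's exact equations, but as a generic sketch of that robot-frame-to-world-frame step (the function and variable names below are mine), a range/bearing echo is rotated by the robot's heading and translated by its position:

import math

def sensor_point_to_world(robot_x, robot_y, robot_theta, sensor_range, sensor_bearing):
    # Echo location in the robot frame (bearing measured from the robot's heading)
    px = sensor_range * math.cos(sensor_bearing)
    py = sensor_range * math.sin(sensor_bearing)
    # Rotate by the robot's heading, then translate by its position
    wx = robot_x + px * math.cos(robot_theta) - py * math.sin(robot_theta)
    wy = robot_y + px * math.sin(robot_theta) + py * math.cos(robot_theta)
    return wx, wy

# Robot at (3, 1) facing 180 degrees, echo 0.5 m dead ahead -> roughly (2.5, 1.0)
print(sensor_point_to_world(3.0, 1.0, math.pi, 0.5, 0.0))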

I agree with Adrenalynn, we're NOT going to do your math homework.

The reality is, this subject is not something you can learn overnight. If you do not have a professor who can help, then I suggest you pick a project that is more in tune with your skill level.

lnxfergy
12-14-2009, 08:31 AM
Why this particular approach? This doesn't seem to be exactly a mainstream approach (in other words, finding someone who has actually implemented such a thing might be a bit difficult). Do you have a test platform (robot)?

-Fergs

MikeG
12-14-2009, 09:09 AM
lnxfergy; the approach is standard localization using vectors from a known point of origin. The difference here is the use of interval analysis to handle sensor deltas from the perspective of the robot. So you end up with sets of points (begin and end) rather than single points. These points are then translated to a global frame.

I would not do it this way either.

lnxfergy
12-14-2009, 09:44 AM
lnxfergy; the approach is standard localization using vectors from a known point of origin. The difference here is the use of interval analysis to handle sensor deltas from the perspective of the robot. So you end up with sets of points (begin and end) rather than single points. These points are then translated to a global frame.

I would not do it this way either.

I get that it's a vector-based map, and that was specifically the part I was referring to as not mainstream. Vector-based approaches have mostly lost out to occupancy grids, or variations thereof. Hence my question: why this approach, and what's the end application?

-Fergs

MikeG
12-14-2009, 10:45 AM
Correct me if I'm mistaken, but an occupancy grid relies on a known pose. A pose is generally represented by a homogeneous frame transform, which contains the orientation and position: P = [Xa Ya Za Pa], the rotation-axis columns plus the position column. The transform is used to relate a local position to the global frame.
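If it helps, here is a small sketch of that column notation (the axis directions and position below are invented for illustration): the transform's columns are the robot frame's axes and position expressed in the world frame, and multiplying it by a homogeneous local point gives the world point:

import numpy as np

Xa = [0.0, 1.0, 0.0]    # robot's x-axis in world coordinates (robot rotated 90 deg about z)
Ya = [-1.0, 0.0, 0.0]   # robot's y-axis in world coordinates
Za = [0.0, 0.0, 1.0]    # robot's z-axis in world coordinates
Pa = [1.0, 2.0, 0.0]    # robot's position in the world

T = np.column_stack([Xa, Ya, Za, Pa])        # 3x4: [Xa Ya Za Pa]
T = np.vstack([T, [0.0, 0.0, 0.0, 1.0]])     # 4x4 homogeneous frame transform

p_local = np.array([0.5, 0.0, 0.0, 1.0])     # a point 0.5 m ahead of the robot
print(T @ p_local)                           # [1.0, 2.5, 0.0, 1.0] in the world frame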

lnxfergy
12-14-2009, 11:11 AM
Correct me if I'm mistaken, but an occupancy grid relies on a known pose. A pose is generally represented by a homogeneous frame transform, which contains the orientation and position: P = [Xa Ya Za Pa], the rotation-axis columns plus the position column. The transform is used to relate a local position to the global frame.

Yeah, but? I don't see where you're going with this...

-Fergs

MikeG
12-14-2009, 11:26 AM
Vector-based approaches have mostly lost out to occupancy grids

I'm not arguing. Sometimes I take things very literally, which gets me into trouble... a lot of trouble. Your statement blew my mind; I took it as saying the two are mutually exclusive.

lnxfergy
12-14-2009, 11:42 AM
I guess I should have clarified that a bit: I meant vector-based MAPS have lost out to occupancy grid approaches.

Sure, pose is still a vector, and the map is a matrix, but we don't represent items in the map as vectors (a wall becomes a collection of points of high density, rather than a vector). The example "maps" shown in that paper are idealized environments; when you start adding clutter, the number of vectors required to describe the environment goes up, a lot. At some point, when you move into a real-world scenario, the noise/clutter of the environment is such that vectors just don't cut it. Occupancy grid approaches also throw out the data-association problem almost entirely (they do of course have their own issues: high data/processing overhead is typically the biggest).
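For a sense of how that differs in code (grid size, resolution, and the log-odds increment below are arbitrary illustration values, not from any particular implementation), an occupancy grid just accumulates evidence in whatever cell an echo lands in, with no landmark association step at all:

import numpy as np

GRID_SIZE = 100     # 100 x 100 cells
CELL_SIZE = 0.05    # 5 cm per cell
L_OCC = 0.85        # log-odds increment for a cell hit by a sensor return

grid = np.zeros((GRID_SIZE, GRID_SIZE))   # 0 = unknown

def mark_hit(grid, x, y):
    # Update the cell containing world point (x, y) after a sensor hit; cells the
    # beam passed through would get a corresponding "free" decrement (not shown)
    i, j = int(x / CELL_SIZE), int(y / CELL_SIZE)
    if 0 <= i < GRID_SIZE and 0 <= j < GRID_SIZE:
        grid[i, j] += L_OCC

# A cluttered wall just becomes a band of high-log-odds cells; there is no need
# to decide which "landmark" each echo belongs to
for x in np.linspace(1.0, 2.0, 50):
    mark_hit(grid, x, 2.0)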

However, most of the reason for new techniques is the availability of new tools, namely LIDAR. Hence my question to the OP about what his platform is; that really does determine which approach is best.

-Fergs

badcommandorfilename
12-15-2009, 08:41 PM
I've recently finished my undergraduate thesis on landmark-based localisation, and (although I've never encountered the term before) "Interval Analysis" seems to be similar to Simultaneous Localisation and Mapping.

I skimmed the paper, and it seems to be focused on simply detecting the intersections of lines (i.e., corners) and using them as navigational aids. You seem worried that the sensors will let you down, so you might want to investigate confidence filters such as the Kalman Filter.
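For what it's worth, the simplest version of that idea is a one-dimensional Kalman update that fuses a predicted distance with a noisy reading (the variances below are made-up illustration values):

def kalman_update(x_pred, p_pred, z, r):
    # Fuse a prediction (mean x_pred, variance p_pred) with a measurement z (variance r)
    k = p_pred / (p_pred + r)          # Kalman gain: how much to trust the measurement
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Predicted distance 2.40 m (variance 0.04), sensor reads 2.50 m (variance 0.01)
print(kalman_update(2.40, 0.04, 2.50, 0.01))   # (2.48, 0.008): pulled toward the
                                               # lower-variance measurement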

Here (http://www.cas.kth.se/SLAM/slam-papers.html) is a collection of publications on the subject, which you might find useful. Good luck!

lnxfergy
12-15-2009, 08:57 PM
Having been published in 1999, the paper the OP posted mostly predates the major advent of SLAM (note the use of sonar rings rather than LIDAR).

-Fergs