
Mech (Robot) Localization



GhengisDhon
04-24-2016, 10:35 AM
A bunch of us at RTeam dream of the day when we will have our Mechs fully autonomous. I've also had it on my list of things to do to figure out particle filters (PF). So as a bit of a side diversion, I've made a simulation of a Mech in an arena using PF localization (determining its location on a map). The simulation is written in the Matlab language.

The video on the left is the "Truth" model. It's a simulation of a Mech walking around the arena. It is assumed that the Mech has an angular rate sensor (gyro) in the yaw axis, two accelerometers in the x- and y-axes that are integrated to determine rates, a magnetometer to determine heading, and four sonar sensors to locate the buildings/arena walls.

The video on the right is the estimate of the Mech's pose (location and orientation). It uses the sensor data (with noise and bias sources) and a PF to determine the pose. Papers on PFs can seem quite daunting, but if you cut through it all they aren't actually that bad.

The way a particle filter works is that instead of running a single estimator for pose, it runs multiple estimators using Monte Carlo (randomization) methods. Each of these estimators is called a "particle". At each time step, the filter uses the sonar sensor data and compares each estimator against a locally stored map of the arena to determine how well each is performing. For the next time step, it randomly resamples (selects) estimators based on how well they are performing. Each individual estimator can be resampled zero, one, or multiple times; higher-performing estimators are more likely to be reselected. A bit of randomization is then added to each resampled estimator and the process continues. At any time step, the "output" estimate is just the average of all the estimators.
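
To make that concrete, here's a toy, self-contained version of that weight/resample/jitter loop in Matlab: a point robot localizing itself on a 1D line from a single noisy range-to-wall measurement. (All the numbers are made up for illustration; this is not the arena sim itself.)

    N      = 100;                          % number of particles
    truth  = 3.2;                          % true position (m), unknown to the filter
    wallAt = 10;                           % wall location (m), known from the map
    x      = 10*rand(N,1);                 % particles spread over the whole map
    for t = 1:50
        z = (wallAt - truth) + 0.1*randn;            % noisy sonar range to the wall
        w = exp(-0.5*((wallAt - x) - z).^2/0.1^2);   % likelihood of each particle
        w = w/sum(w);                                % normalize the weights
        c = cumsum(w);                               % resample from the weights
        x = x(arrayfun(@(r) find(c >= r, 1), rand(N,1)));
        x = x + 0.05*randn(N,1);           % jitter so duplicate particles spread out
    end
    estimate = mean(x)                     % the "output": average of the particles

After a few dozen steps the particles cluster around the true position and the mean lands close to 3.2.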

It's really quite a clever way of estimating pose, but it is very computationally intensive. It may be a challenge implementing this on a small Mech microprocessor. I'll be working on this over the next year and (hopefully) will have something running by the 2017 Robogames. I'll also post the code once I get it cleaned up a bit, if anyone wants to see it (and has access to Matlab).


https://youtu.be/MuFD6OSfNtU

tician
04-24-2016, 11:28 AM
Matlab and Octave are nice for prototyping, but often shite for practical use. Part of the reason I failed to finish most of the projects in Computational Intelligence in spring 2013 was initially trying to use Matlab instead of C/C++ for all the GA and PSO code at the heart of the course. It took the better part of a day to finish runs that later C++ code could finish in minutes or maybe an hour. Fall behind and get left behind...


Also reminds me that I still have not finished the environmental-text mixture Monte Carlo localization (MMCL) ROS node I started making in the other CSCI course that semester. MMCL really is surprisingly simple once you get past the terminology. MMCL is a mix of forward MCL (FMCL) and the 'dual' of MCL (DMCL): use DMCL with some small probability phi (~0.1), and use FMCL the rest of the time (1-phi). A rough sketch of the mixing loop follows the two step lists below.
FMCL: 1) Generate a new particle by sampling over the cumulative distribution of the last particle set's weights. 2) Apply the odometry transformation to the particle to determine the new particle pose within the map. 3) Calculate the weight of the particle as a function of the distance sensor scan and the rays traced from the new pose to the map walls.
DMCL: 1) Generate a new particle by randomly creating a particle consistent with the distance sensor scan (the more information in each distance scan, the better the particle generated). 2) Apply the odometry transformation in reverse to the particle to determine the pose it would have had in the last time step (set of particles). 3) Calculate the weight of the particle as a function of the distance sensor scan and the rays traced from the 'previous' pose to the map walls.
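
In Matlab-style pseudocode the mixing loop is just this (every function name below is a placeholder for the step it's named after, not a real API):

    phi = 0.1;                                 % probability of the DMCL branch
    N = numel(prevWeights);
    particles = zeros(N, 3);  w = zeros(N, 1); % [x y theta] per particle
    for i = 1:N
        if rand() < phi                        % DMCL
            p     = particleFromScan(scan, map);              % step 1
            pPrev = applyOdometry(p, invertOdom(odom));       % step 2
            w(i)  = rayTraceWeight(scan, pPrev, map);         % step 3
        else                                   % FMCL
            j     = sampleFromCDF(prevWeights);               % step 1
            p     = applyOdometry(prevParticles(j,:), odom);  % step 2
            w(i)  = rayTraceWeight(scan, p, map);             % step 3
        end
        particles(i,:) = p;
    end

The DMCL branch is what lets the filter recover when all the forward particles have drifted away from the true pose.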

GhengisDhon
04-24-2016, 12:21 PM
Matlab and Octave are nice for prototyping, but often shite for practical use.

I generally prototype all algorithms in Matlab/Simulink first (I use both quite extensively for work), convert them to C, and embed the C in Simulink S-functions to debug it before moving on to the actual implementation in hardware. It saves a lot of time compared to trying to debug algorithms on the hardware itself. I used the same method for the inverse kinematics and gait code that many of the RTeam Mechs are running.

The sim I showed uses 100 particles and goes through 448 time steps. It takes about 25 minutes to complete the simulation, so you're correct: you couldn't implement it anywhere near real time in Matlab/Simulink. When debugging the algorithms I generally use only a few time steps and fewer particles, just to prove it's doing what I want, then occasionally run the whole sim to verify. It has taken me about a week of evenings after work to teach myself MCL (PF) localization and code it up in Matlab. If I had tried to code this up in C, I wouldn't be anywhere near where I'm at... (In college, I chose Fortran programming over C. C was new at the time, and Fortran was the engineering language of choice...)


MMCL really is surprisingly simple once you get past the terminology.

Ya, that's what I discovered doing the particle filtering. It was a concept that was just coming out when I was finishing grad school. It's been on my list of items I've been wanting to figure out for the last several years, but I didn't really have a reason to until we started talking about autonomous Mechs.

It was really quite simple once I got past the terminology and figured out exactly what was actually going on.

jwatte
04-24-2016, 03:51 PM
a magnetometer to determine heading

The motors of the mech (as well as currents through wires) generate a reasonably strong magnetic field.
Maybe the magnetometer will be OK if it's mounted way at the top, using a small mast or something?


CPU usage

I'm moving forward with a "hat" attachment to a Raspberry Pi that lets me talk to Dynamixels.
The Raspberry Pi has plenty of CPU AND a GPU!

Anyway -- regarding particle filters, how well do you expect the arena to be surveyed?
Do the buildings move if a contestant bumps into them?
Will the camera people cause false readings? :-)

One thing that I'm keen on trying would be to have left/right ultrasonic sensors to know whether I'm walking/strafing into walls on the sides. The camera doesn't give me enough situational awareness for that.

tician
04-24-2016, 06:17 PM
Anyway -- regarding particle filters, how well do you expect the arena to be surveyed?
Do the buildings move if a contestant bumps into them?
Will the camera people cause false readings? :-)
False readings from people will affect some particles for a short while, until the bot or the person moves; the odometry helps keep the bot from getting lost because of extra objects/obstacles that are not part of the map. The limited range and accuracy of sonar sensors means it may not even notice a person unless a sensor is pointing directly at them while they stand within a couple feet of the bot. The symmetry of the arena does mean there will probably be issues with the kidnapped robot problem; a decent IMU to go with the leg odometry helps only as long as it continues tracking during the transit (which violates the actual conditions of the kidnapped robot problem: instantaneous movement from a known position to a new position). In theory, any sponsor billboards on the arena buildings would be enough for the environmental-text MMCL to figure out where it has been moved, even with no IMU data during the transit.


One thing that I'm keen on trying would be to have left/right ultrasonic sensors to know whether I'm walking/strafing into walls on the sides. The camera doesn't give me enough situational awareness for that.
Not a fan of sonar sensors in air, where assorted portions of the EM spectrum are far more accurate and usually more reliable than sound. A couple of small mirrors to the sides of the camera's field of view, a bit like the side-view mirrors on a road vehicle, reserving the center for steering/aiming? A Kogeto lens on an upward-pointing camera? Newer Sharp IR distance sensors (possibly mounted on a panning servo)?

jwatte
04-24-2016, 09:36 PM
Sharp IRs are another option I've considered.

A chrome sphere and an upwards-pointing camera is also something I've considered. With the Raspberry Pi GPU and high-res camera, I could even generate the "flat" views from that in real time!

Also: "kidnapped robot problem" -- here's hoping there will be a research paper with that title some time :-)

ArduTank
04-24-2016, 11:34 PM
If the camera people are a problem, replace them with a mini-skycam and corner/building cams and/or a mini-cambot :D

The kidnapping could be solved by having a way to tell the robot its general location (i.e., the NE/NW/SE/SW corner), then doing a full 360 so its sensors can map its actual location while taking the aforementioned general location into account.
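
In particle filter terms, that could be as simple as re-seeding the particle set over the specified quadrant before the 360 sweep; a minimal sketch in Matlab, with made-up arena dimensions:

    W = 2.4; H = 2.4; N = 100;        % arena width/height in meters (made up)
    x  = W/2 + (W/2)*rand(N,1);       % NE quadrant: x in [W/2, W]
    y  = H/2 + (H/2)*rand(N,1);       %              y in [H/2, H]
    th = 2*pi*rand(N,1);              % heading completely unknown until the 360
    particles = [x y th];             % the 360 scan then re-weights these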

Sensor placement could be on a mast, possibly with an AX-12 rotating all the distance sensors to allow for more data points. The hole in the AX-12's position sensing could be remedied by averaging the deg/sec over the 300 degrees of readable range, then using that as the "known" rotational speed during the other 60 degrees.
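
Something like this toy Matlab simulation of that averaging idea (all the numbers are made up, and a real implementation would get the angle and its validity from the AX-12's position register):

    dt = 0.02;  trueRate = 60;           % 50 Hz polling, 60 deg/s sweep (made up)
    rate = 0;  est = 0;  lastAngle = 0;  kLast = 0;
    for k = 1:400
        trueAngle = mod(k*dt*trueRate, 360);
        if trueAngle <= 300                  % inside the 300-degree readable range
            d    = mod(trueAngle - lastAngle + 180, 360) - 180;  % wrapped delta
            rate = 0.9*rate + 0.1*d/((k - kLast)*dt);   % low-pass the deg/sec
            est  = trueAngle;
            lastAngle = trueAngle;  kLast = k;
        else                                 % inside the 60-degree hole
            est = est + rate*dt;             % coast at the averaged rate
        end
    end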

Also, the f12 key on my keyboard is too close to the backspace key. :(

giantflaw
04-25-2016, 06:34 PM
In 2012 Robogames we had a compass on RA 2.0. It worked great; we did not notice any issues from the motors or the high gun currents. It was mounted on the bottom front of the robot body. RA 2.0 also had two Sharp range finders mounted on two servos, one on either side of the gun on the turret. The display showed a WWII-type radar screen with dots that outlined walls and corners. The dots persisted for one sweep of the servo and then would vanish. It gave good awareness of what was around the robot. Before that we tried some MaxBotix ultrasonic sensors and found that the beam width was too wide to resolve the building corners as well as we wanted.

This year I used a Sharp IR range finder on the back of Nomad to follow the wall while walking backwards at full speed. Since the sensor was looking backwards at a 45-degree angle, the control loop auto-turned Nomad when it saw the corner wall emerge. I used it at Robogames, but I don't think the function was noticed much. If you look at the back of Nomad in the Robogames video, you'll see the range finders mounted outboard on both sides of the rear target panel.

RA 2.0 did not compete this year. It currently has a Lidar mounted on top in place of the turret.


jwatte
04-25-2016, 06:56 PM
Thanks for sharing -- sounds very promising!