View Full Version : Autonomous robot's navigation

08-27-2009, 07:45 AM

I am currently working on associative video memory. The method is still in development (now at version 0.5),
but it already gives good results today.

I have been doing computer vision research in parallel with my main job
at "Impulse" for more than three years (it is my hobby).

About me: http://edv-detail.narod.ru/EDV_resume.html

In the beginning my achievements were insignificant, and only a small part of my ideas worked properly.
But I did not surrender. I generated a large quantity of hypotheses and then tested them.

Most of the ideas did not work, but those that did were like particles of gold
in a huge quantity of dross. My associative video memory method really works.

============================- Common information -==========================

The AVM algorithm uses a principle of multilevel decomposition of recognition matrices.
It is robust against camera noise, scales well, and is simple and quick
to train; it also shows acceptable speed at higher input video resolutions
(960x720 and above). The algorithm works with grayscale images.

Detailed information about the AVM algorithm can be found here:

AVM SDK v0.5 with examples of using and tests for comparison
of characteristics of the previous and new versions:

Demonstration video how to train AVM:

AVM demo with the user interface (GUI), installation for Windows:

Connect a web camera and start the AVM demo after installing "Recognition.exe".
After starting, the program will inform you that there is no previously stored
AVM training data and will then propose to set the key image size
for creation of a new AVM instance. Then train AVM, using Face_training_demo.avi (http://edv-detail.narod.ru/Face_training_demo.avi) as an example.

========================- Robot's navigation -=========================

I also want to introduce my first experience in robot navigation powered by AVM.

Briefly, the navigation algorithm tries to align the positions of the tower
and the body of the robot with the center of the first recognized object in the
tracking list; if the object is far away the robot comes nearer, and if it is
too close the robot rolls back.
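A rough sketch of that decision rule (the function name, thresholds, and pixel-size heuristic here are my illustrative assumptions, not actual code from Navigator.cpp):

```cpp
#include <cassert>

// Center the robot on the tracked object, then regulate distance by the
// object's apparent size in pixels (larger on screen = closer).
enum Command { CMD_STOP, CMD_LEFT, CMD_RIGHT, CMD_FORWARD, CMD_BACKWARD };

// x        : horizontal center of the recognized object, in pixels
// size     : apparent width of the object, in pixels (distance proxy)
// nearSize : above this apparent size the object is too close
// farSize  : below this apparent size the object is too far
// deadZone : tolerance around the frame center, in pixels
Command chooseCommand(int x, int size, int frameWidth,
                      int nearSize, int farSize, int deadZone)
{
    int center = frameWidth / 2;
    if (x < center - deadZone) return CMD_LEFT;   // object left of center
    if (x > center + deadZone) return CMD_RIGHT;  // object right of center
    if (size < farSize)  return CMD_FORWARD;      // small -> far: come nearer
    if (size > nearSize) return CMD_BACKWARD;     // large -> too close: roll back
    return CMD_STOP;                              // centered, at a good distance
}
```

The real plugin works from recognition data, of course; this only shows the shape of the control loop.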

See video below:



I have made changes to the robot control algorithm (Navigator.cpp),
and I have also used a lower input image resolution of 320x240 pixels.
This gave a good result (see "Follow me"):


Source code of "Navigator" program:

During the experiments I noticed a significant reserve of speed
when AVM v0.5 works at 320x240 resolution.

This gives me good conditions for starting new experiments
in the navigation field (the "Location tree" algorithm); of course, I will
also try to make AVM faster at 640x480 resolution in the future.

Also I finished some experiments with "Natural beacons":


Source code of demo (project "Beacons test"):

I hope this information helps answer the question:
how can we use the AVM algorithm?

If you can read Russian, then welcome: (Russian thread: "Autonomous robot's navigation" (http://roboforum.ru/viewtopic.php?f=35&t=6021)).

08-27-2009, 03:46 PM
Wow this works great so far! How easy would it be to interface it with python?

08-27-2009, 05:08 PM
WOW Nice I'll be taking a more indepth look at this over the weekend

08-27-2009, 06:17 PM
Dear Dmitriy, first - welcome to the Trossen Robotics Forum!

Thank you most sincerely for sharing such a comprehensive result of your investigations into computer vision/recognition and vision-based navigation. A tremendous amount of information to absorb here - digital video is my specialty - and I look forward to learning from your explorations and documentation.

I do hope you will find yourself visiting us regularly!

In response to questions about "ease of integration" - this is a research project, and its intent is to advance a field and body of knowledge. IMHO: If that's the first question that comes to mind, this probably isn't the best place to start.

08-28-2009, 12:55 AM
Wow this works great so far! How easy would it be to interface it with python?

I am not planning to develop a Python interface at the moment, but it is possible in the future.

But AVM v0.5 SDK (http://edv-detail.narod.ru/AVM_SDK_v0-5.zip) with examples of using is available.

Compilation is possible with either of these compilers:
AVM_SDK\samples\vc6.prj\AVM_prj.dsw for Microsoft Visual C++ 6.0
AVM_SDK\samples\vc9.prj\AVM_prj.sln for Microsoft Visual Studio 2008
The OpenCV 1.0 library is also needed for compilation.

The "Associative video memory" algorithm is a commercial project, but you can use the AVM SDK in your research and in non-commercial projects.

I will try to prepare detailed documentation about using the AVM v0.5 SDK.

I will also share the results of my future research in robot navigation with you.

08-28-2009, 12:57 AM
Congrats too on spinning AVM off into a commercial project!

08-28-2009, 04:42 AM
The source code of the "Navigator" program (http://edv-detail.narod.ru/Navigator_src.zip) and the "Navigator tool kit" (http://edv-detail.narod.ru/Navigator_Tool_Kit.zip) was prepared for the Russian community (some comments in the source code are in Russian).

But if you show interest in this topic, I could adapt it for the English community too.

09-29-2009, 07:30 PM
[Mod Note: Thread-Jack Relocation: http://forums.trossenrobotics.com/showthread.php?t=3600 (http://forums.trossenrobotics.com/showthread.php?t=3600)Nikhil Thiruvengadam's mower project moved]

09-30-2009, 02:33 PM
Awesome! I can't read russian "directly", but Google is pretty useful (http://translate.google.com/translate?hl=ns&sl=ru&tl=en&u=http%3A%2F%2Froboforum.ru%2Fviewtopic.php%3Ff%3D 35%26t%3D6021) :)

12-11-2009, 08:34 AM
Robot navigation by gates from point "A" to "B"



First the user must set the visual beacons (gates) that show the direction the robot has to go.
The robot will walk from gate to gate. If the robot recognizes the "target", it comes nearer and stops walking.

Navigation application (installation for Windows):

The installation also contains the source code of the robot control driver (InstDir\RobotController_src).
So you can adapt it to your robot and repeat my navigation experiments yourself.

The source code of the navigation application is also available (but all comments are in Russian):

12-11-2009, 08:54 AM
Thanks again for posting some great stuff!!!!

12-22-2009, 08:53 AM
Navigator package description

The package consists of three parts: the robot control driver, the pattern recognition application (GUI), and a dynamic link library "Navigator".
Compiling the pattern recognition application requires wxWidgets-2.8.x (http://sourceforge.net/projects/wxwindows/files/wxMSW/2.8.10/wxMSW-2.8.10-Setup.exe/download) and OpenCV 1.0 (http://sourceforge.net/projects/opencvlibrary/files/opencv-win/1.0/OpenCV_1.0.exe/download). If you have no desire to deal with the GUI, the project already includes a compiled recognizer (as an EXE), and it will be enough to compile Navigator.dll, which contains the navigation algorithm. Compiling Navigator.dll needs only the OpenCV 1.0 library (http://sourceforge.net/projects/opencvlibrary/files/opencv-win/1.0/OpenCV_1.0.exe/download). You can build the project with Microsoft Visual C++ 6.0 (folder vc6.prj) or with Microsoft Visual Studio 2008 (folder vc9.prj).

After installing (and compiling) the wxWidgets-2.8.x and OpenCV 1.0 libraries, you need to specify additional folders for the compiler:

Options / Directories / Include files:
<Install_Dir>\OPENCV\CV\INCLUDE
<Install_Dir>\OPENCV\CVAUX\INCLUDE
<Install_Dir>\WXWIDGETS-2.8.10\LIB
<Install_Dir>\WXWIDGETS-2.8.10\LIB\VC_LIB\MSW
<Install_Dir>\WXWIDGETS-2.8.10\LIB\VC_LIB\MSWD
<Install_Dir>\WXWIDGETS-2.8.10\INCLUDE
<Install_Dir>\WXWIDGETS-2.8.10\INCLUDE\MSVC

Options / Directories / Library files:
<Install_Dir>\OPENCV\LIB
<Install_Dir>\WXWIDGETS-2.8.10\LIB
<Install_Dir>\WXWIDGETS-2.8.10\LIB\VC_LIB

Source code of the "Navigator" (for English community) can be downloaded here:

For compilation of the source code you can use "MS Visual C++ 2008 Express Edition" (http://www.microsoft.com/express/download/). It is all official and free.

To connect a robot to the Navigator program you have to adapt the control driver (.\src\RobotController) to your robot.

It's simple: the application Recognition.exe interacts with the robot driver through shared memory (gpKeyArray). All you need to do is use a timer (the CMainWnd::OnTimer method) to send the commands from gpKeyArray to your robot.

A chain of start commands will be transmitted to the robot for "power on" (cmFIRE, cmPOWER) when you start navigation mode. Respectively, the "power off" command (cmPOWER) will be transmitted when navigation mode is disabled.

And most importantly: the commands cmLEFT and cmRIGHT should not activate motion by themselves, but only in combination with the "forward" and "back" commands (cmFORWARD, cmBACKWARDS).

If you have adapted the control driver to your robot, then you are ready to join the navigation experiments.
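The steering convention above could be sketched like this (a minimal illustration for a tracked platform; the Motion struct, the speed scale, and the mixCommands name are my assumptions, not code from the Navigator package):

```cpp
#include <cassert>

// Mix the navigation commands into track speeds. The key rule from the
// post: cmLEFT/cmRIGHT alone must not move the robot; they only curve an
// active forward/backward motion.
struct Motion { int left; int right; };   // per-track speed, -1..+1 scale

Motion mixCommands(bool fwd, bool back, bool leftTurn, bool rightTurn)
{
    int dir = fwd ? 1 : (back ? -1 : 0);
    Motion m = { dir, dir };
    if (dir == 0) return m;               // turn keys alone: no motion at all
    if (leftTurn)  m.left  = 0;           // slow the inner track to curve left
    if (rightTurn) m.right = 0;           // slow the inner track to curve right
    return m;
}
```

Your OnTimer handler would read the command flags from gpKeyArray and feed them into something like this before writing speeds to the motor controller.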

So, let's have fun together :)



12-22-2009, 01:36 PM
wow! +Rep for Massive Code Share! As soon as I'm moved, I'll definitely have to look at your work. Thanks for sharing!

12-22-2009, 03:11 PM
This looks very interesting, EDV. My thesis was on autonomous navigation - specifically on perception and landmark sensing, so I'll have to give this a try out.

Are the lib dependencies open source libraries? It would be very cool to get this to run on an embedded system so that the robot wouldn't need to be tethered.

12-22-2009, 05:10 PM
given the load such video processing generates, a 1.8Ghz tiny little Windbook would be the ideal "embedded system" - and this would do fine.

12-23-2009, 08:32 AM
>> Are the lib dependencies open source libraries?

In the "Autonomous robot's navigation" project (http://edv-detail.narod.ru/Navigator_src_en.zip), the AVM algorithm (http://edv-detail.narod.ru/AVM_main.html) is a closed technology (presented as the library "rcg.lib"), but the other part (which is involved in navigation) is open source.

What is AVM v0.5?
- It is an accessible, accurate, and fast pattern recognition algorithm, available already today.

What will the AVM algorithm be able to give us? How can it be useful for us?
- You can use the AVM algorithm (http://edv-detail.narod.ru/AVM_main.html) in your research to develop an efficient navigation solution for robotics (as a recognizer). You can test your hypotheses concerning robot navigation based on landmark beacons with AVM. And if a successful navigation solution is achieved, then you will have two ways forward:

1. You can develop your own pattern recognition algorithm and then replace the AVM algorithm in your finished project.
2. Or you can use the commercial version of the AVM algorithm in your finished project.

Anyway, you can now use the AVM v0.5 SDK (http://edv-detail.narod.ru/AVM_SDK_v0-5.zip) for free in your robotics hobby.

>> It would be very cool to get this to run on an embedded system so that the robot wouldn't need to be tethered.

You can compile the "Navigator" program (http://edv-detail.narod.ru/Navigator_src_en.zip) with "Microsoft Visual C++ 6.0", for example on "Windows 98", and then install it on a "Roboard RB-100" (http://www.robotshop.us/roboard-starter-kit-3.html).

01-07-2010, 05:48 PM
The New Year's weekend has not gone in vain, and I want to share my experiment results with you:


So, what do we see in the video?
First the robot was set at checkpoint p[1], and then the robot received the command: "walk to point p[0]".
The robot started looking around, and when it saw gates corresponding to route [0], it began moving to checkpoint p[0] by gates.
At checkpoint p[0] the robot stopped. Then the robot was instructed to go to checkpoint p[1]; it again started looking around,
found the appropriate route gates, and came to checkpoint p[1].

I plan to update the "Navigator" package soon with the new navigation algorithm.

01-10-2010, 05:09 PM
thats awesome man :eek:

01-11-2010, 01:49 AM
Navigation algorithm update and yet another video:


In the new version of the navigation algorithm, the gate data contains weights for the seven routes, indicating the importance of the gate for each route. At the bottom of the screen a "horizon" indicator was added, which shows the direction to adjust the robot's motion for further movement along the route. A gate's field is painted blue if the gate does not participate in the current route (weight 0), and warmer colors (ending in yellow) show a gradation of the gate's "importance" in the current route.

The procedure of training on a route
To train a route, you have to indicate the actual route (button "Walking by way") in "Nova gate" mode, and then you must drive the robot manually along the route (the gates will be installed automatically). At the end of the route you must click the "Set checkpoint" button; the robot will then turn several times in one spot and mark its current location as a checkpoint.

So, I updated "Navigator" package:

02-04-2010, 02:43 AM
I have remade the AVM v0.5 SDK, and the AVM algorithm is now presented as a DLL library.

AVM v0.5 SDK package was updated:

Implementation of AVM algorithm as DLL:

Program interface:

Also I updated "Navigator" package:

Implementation of AVM algorithm as DLL:

Program interface:

02-04-2010, 09:17 PM
Awesome work EDV so interesting. keep up the good work.

02-04-2010, 11:54 PM
Thanks, I will try to go further, and maybe one day machines will see the light and the T-1000 will cease to be a fantasy :)

02-19-2010, 01:31 AM
The first iteration of porting the AVM v0.5 SDK to C# is done:

This package uses "Emgu CV" (http://www.emgu.com/wiki/index.php/Main_Page), a C# port of OpenCV v2.0.

Emgu CV download (http://sourceforge.net/projects/emgucv/files/emgucv/

About OpenCV v2.0 (http://www.willowgarage.com/pages/software/opencv)

OpenCV v2.0 download (http://sourceforge.net/projects/opencvlibrary/files/opencv-win/2.0/OpenCV-2.0.0a-win32.exe/download)

The test video ".\AVM_SDK.net\bin\RcgTraining4s.avi" uses the XviD codec (http://www.codecguide.com/download_k-lite_codec_pack_full.htm).
Make sure it is installed on your PC.

02-20-2010, 02:04 PM
Awesome I love seeing this type of work this may help me understand your prev work better! Thanks! +rep for sharing!

08-03-2010, 09:06 AM
I made a more convenient implementation of the AVM algorithm for C# (http://edv-detail.narod.ru/AVM_SDK_simple-net.zip) and also prepared detailed documentation: "Library AVM SDK simple.NET" (http://edv-detail.narod.ru/Library_AVM_SDK_simple_NET.html).

Just print the file .\AVM_SDK_simple.net\bin\Go.png and show it to the camera for recognition in example 2.

09-20-2010, 02:41 AM
The RoboRealm company has begun distributing the "Navigator" plugin (http://www.roborealm.com/help/AVM_Navigator.php) within the RoboRealm software (http://www.roborealm.com/).


11-18-2010, 12:08 AM
Dr. Bruce (http://www.leafproject.org/) shared his successful experience with AVM Navigator plugin (http://www.roborealm.com/help/AVM_Navigator.php):

Leaf can look for a fiducial. Then he can calculate angle and distance using x-rotation and size.
Next, because he knows where in the room the fiducial is located, he can apply some trigonometry
to calculate his exact pose in the room (X Y Orientation). He then goes to a specific place
in the room about 3 feet in front of the door. Next, switching to AVM, he can navigate through
the doorway successfully!
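The trigonometry step Dr. Bruce describes could look roughly like this (a hedged reconstruction, not code from the Leaf project; the function name, the angle conventions, and the assumption that the fiducial's position and outward normal in the room are known are all mine):

```cpp
#include <cassert>
#include <cmath>

const double PI = 3.14159265358979323846;

struct Pose { double x, y, theta; };   // robot pose in room frame, radians

// fx, fy : fiducial position in the room (known from the map)
// phi    : world direction of the fiducial's outward normal (known)
// d      : measured distance to the fiducial (from its apparent size)
// alpha  : viewing angle off the fiducial normal (from its x-rotation)
// beta   : bearing of the fiducial in the camera frame (0 = dead ahead)
Pose poseFromFiducial(double fx, double fy, double phi,
                      double d, double alpha, double beta)
{
    double fromFiducial = phi + alpha;          // direction fiducial -> robot
    Pose p;
    p.x = fx + d * std::cos(fromFiducial);      // robot position in the room
    p.y = fy + d * std::sin(fromFiducial);
    // The robot sees the fiducial at bearing beta, so its heading is the
    // robot -> fiducial direction minus that bearing.
    p.theta = fromFiducial + PI - beta;
    return p;
}
```

For example, standing 2 m straight in front of a fiducial at the origin (normal along +x) and looking directly at it gives pose (2, 0) with heading PI, i.e. facing back toward the fiducial.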

In fact, this was so successful, I then decided to have him try the hallway. He goes down the hallway
stopping when his sonar sensors detect the wall. I then have him programmed to turn clockwise 90 degrees.
And he then finishes by going down a short narrow irregular hallway (previously a difficult challenge).

Here are links to 2 videos:



I have more to do - but this is a great start...

11-24-2010, 01:17 AM
Now I am working on a new mode for the "AVM Navigator" plugin that will be completely autonomous. This mode will be named "Walking mode", and in this mode the robot will be able to explore an unknown room environment.

And the first stage of this development (obstacle avoidance) has now been passed successfully:


Visual odometry and autonomous mapping will follow...

03-01-2011, 01:15 AM
-= Walking mode - how does it work? =-
*Obstacle avoidance

1. Scan for vertical contours (the legs of chairs, corners of walls, doors and jambs, etc.);
if there are vertical contours in front of the robot, then it is an obstacle
(yellow indicator in the form of a trapeze), and the robot begins to quietly go around it.

2. The camera has infrared illumination (a headlight), and if the robot comes close to a wall,
the lamp produces illumination in the form of white spots on the screen.
Accordingly, if a large white spot appears in the center of the screen, the robot
will roll back while writing the obstacle image to AVM (this case is indicated by a large
red crossed rectangle). Marked obstructions (which are stored in AVM) are indicated
by blue rectangles with the inscription "Obstacle". If the robot sees marked obstacles
in front of it, it will simply go around them at a distance (this excludes the need to go
close to the wall a second time).

3. The robot processes the motion history of input images from one second earlier. If the robot
was given the command "forward" but it caused no changes in the input image (nothing happened),
then the robot is stuck (its front end has run into the wall). The robot then gives the command
to move backwards while writing the obstacle to AVM (this could prevent a collision with the wall
a second time). This case is indicated by a red rectangle with a circle in the center.
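Combined into one decision, the three cues above could be sketched like this (the names, the priority order, and the bright-spot threshold are illustrative assumptions, not AVM Navigator internals):

```cpp
#include <cassert>

enum Avoidance { GO_FORWARD, GO_AROUND, ROLL_BACK };

// verticalContoursAhead : cue 1, vertical contour scan found an obstacle
// centerBrightSpot      : cue 2, fraction of screen center lit by the IR
//                         headlight reflection, 0..1
// stuck                 : cue 3, "forward" produced no image change
Avoidance avoid(bool verticalContoursAhead,
                double centerBrightSpot,
                bool stuck)
{
    if (stuck) return ROLL_BACK;                   // back off, memorize obstacle
    if (centerBrightSpot > 0.3) return ROLL_BACK;  // wall too close to the lamp
    if (verticalContoursAhead) return GO_AROUND;   // quietly go around
    return GO_FORWARD;                             // path looks clear
}
```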



The robot sets marks (it writes the central part of the screen image, with associated data, to the AVM tree).
Marker data (inside AVM) contains the horizon angle (azimuth), the path length from the start, and the
X,Y position (relative to the start position). Marker data is based
on mark tracking (horizontal shift for azimuth, and change of mark scale for path length measurement).
Generalizing all the recognized mark data in the input image gives the actual azimuth and path length.
If we have information about the motion direction, the path length from the previous position, and the x,y
coordinates of the previous position, then we can calculate the coordinates of the current position.
This information is written to each new mark (inside AVM) when it is created, and so forth.

The set of marks inside AVM gives a map of locations (the robot sees the marks and recognizes its location).
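The dead-reckoning step described above, from previous position plus azimuth plus path-length increment to the next mark's coordinates, reduces to one line of trigonometry (a sketch; names are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct MarkPos { double x, y; };

// prev      : x,y coordinates stored in the previous mark
// azimuth   : motion direction, radians (from horizontal mark shift)
// pathDelta : path length traveled since the previous mark
//             (from the change of mark scale)
MarkPos nextPosition(MarkPos prev, double azimuth, double pathDelta)
{
    MarkPos p;
    p.x = prev.x + pathDelta * std::cos(azimuth);
    p.y = prev.y + pathDelta * std::sin(azimuth);
    return p;
}
```

Each new mark written to AVM would carry the result, so the chain of marks accumulates into the location map.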

I finished updating my robot (it is now a nettop + motor/servo controller + local camera + tracked platform):


And this allows me to proceed with developing "Walking mode".


>> What does this kaleidoscope in the center of the screen do?

This is an indicator of marker density within the map (the gradation from blue to red is, respectively:
blue - no markers in this direction, red - the maximum density of markers in this direction).
It works like a radar that indicates the ratio between the numbers of markers per sector in each
direction, in increments of 5 degrees (72 partitions in a circle). The "blue arrow" shows the direction
of least marker density, and the robot will try to turn in this direction to explore
new uncharted territory.

>> What's going on there?

The robot looks around and marks the territory with markers along an arc, then moves forward
and inspects again, etc. If it sees an obstacle, the robot marks the corresponding marker
in the map as "Obstacle". From then on the robot, guided by the map marker, will not touch
the obstacle again (red light in the center of the green ellipse).

03-27-2011, 06:06 AM
Especially to improve the navigation algorithm in "Walking mode", a mod for Quake 3 was developed
that provides environment emulation and can be used instead of a real robot.


04-12-2011, 11:55 PM
This is a test of the odometry-localization algorithm on the Simpsons map. The Quake 3 mod was used as an environment emulator to capture live video with AVM Navigator v0.7 in this test.


Simpsons map for Quake 3


04-16-2011, 04:20 PM
This is the first simple test of the new version, AVM Navigator v0.7, in "Navigation by map" mode.


04-19-2011, 02:56 PM
Hi Dmitri and all Trossen Robotics forum members. I want to share with you Argon Technology's design of the world's first fully autonomous robot at the Fusion and TGA laboratory.


04-20-2011, 12:14 AM
And how does this solution relate to the "computer vision" and "robot navigation" problems?

04-23-2011, 05:53 AM
This is an experiment in autonomous robot navigation by map in emulation mode. First the robot plans the path from the current location to the target position (big green circle), and then the robot begins walking to an interim point of the path (big red circle). When the interim point is achieved, the robot gets the direction to the next point, and so on until the end of the path.
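The interim-point loop could be sketched as follows (an illustrative sketch; the structures, the reach-radius check, and the function name are my assumptions, not AVM Navigator code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Walk toward the current waypoint; when it is reached (within reachRadius),
// switch to the next one. Returns the index of the waypoint the robot should
// head for now, or path.size() when the whole path is finished.
size_t advanceWaypoint(const std::vector<Pt>& path, size_t current,
                       Pt robot, double reachRadius)
{
    while (current < path.size()) {
        double dx = path[current].x - robot.x;
        double dy = path[current].y - robot.y;
        if (std::sqrt(dx * dx + dy * dy) > reachRadius)
            break;                       // not there yet: keep this target
        ++current;                       // interim point achieved: next one
    }
    return current;
}
```

Calling this every control cycle reproduces the behavior in the video: the big red circle jumps to the next path point each time the robot gets close enough.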


04-23-2011, 06:11 AM
This is an experiment in autonomous robot navigation by map in emulation mode. First the robot plans the path from the current location to the target position (big green circle), and then the robot begins walking to an interim point of the path (big red circle). When the interim point is achieved, the robot gets the direction to the next point, and so on until the end of the path.

Waypoints is the term you are speaking of. Excellent work!

04-23-2011, 06:33 AM
Thank you, I will keep it in mind (path points = waypoints) :rolleyes:

04-25-2011, 01:02 PM
So, let's continue with waypoints in a real environment :happy:
A turn is passed in autonomous "Navigation by map" mode by a real robot.


04-29-2011, 02:11 PM
Autonomous walking of the robot from the bedroom to another room, based on visual landmark navigation (AVM Navigator v0.7).


05-03-2011, 03:16 AM
AVM Navigator v0.7 will be released soon.
The new version will add two modes: "Marker mode" and "Navigation by map".

Marker mode


Marker mode provides the forming of a navigation map, which is made automatically by space marking. You just have to manually lead the robot along some path and repeat it several times for good map detail.

Navigation by map


In this mode you point to the target position on the navigation map; the robot then plans the path (maze solving) from the current location to the target position (big green circle) and begins automatically walking to the target position.


For external control of "Navigation by map" mode, new module variables will be added:

NV_LOCATION_X - current location X coordinate;
NV_LOCATION_Y - current location Y coordinate;
NV_LOCATION_ANGLE - horizontal angle of robot in current location (in radians);

Target position at the navigation map
NV_IN_TRG_POS_X - target position X coordinate;
NV_IN_TRG_POS_Y - target position Y coordinate;

NV_IN_SUBMIT_POS - submits the target position (the value should be toggled 0 -> 1 to trigger the action).
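Driving the mode through these variables would follow the usual edge-trigger pattern: write the target coordinates first, then flip NV_IN_SUBMIT_POS from 0 to 1. A minimal sketch (this is not the RoboRealm API; the map-based variable store and the submitTarget helper are stand-ins for whatever variable interface your host uses):

```cpp
#include <cassert>
#include <map>
#include <string>

typedef std::map<std::string, int> Vars;   // stand-in for the variable store

// Write the target first, then raise NV_IN_SUBMIT_POS so the plugin sees a
// clean 0 -> 1 transition and starts path planning toward (x, y).
void submitTarget(Vars& v, int x, int y)
{
    v["NV_IN_TRG_POS_X"] = x;
    v["NV_IN_TRG_POS_Y"] = y;
    v["NV_IN_SUBMIT_POS"] = 0;             // make sure we start from 0
    v["NV_IN_SUBMIT_POS"] = 1;             // rising edge triggers the action
}
```

After submitting, the host can poll NV_LOCATION_X / NV_LOCATION_Y / NV_LOCATION_ANGLE to follow the robot's progress toward the target.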

05-04-2011, 02:14 AM
Now AVM Navigator v0.7 is released and you can download it from RoboRealm website (http://www.roborealm.com/help/AVM_Navigator.php).

06-04-2011, 12:46 PM
Quake 3 Mod


Don't have a robot just yet? Then click here (http://www.roborealm.com/help/AVM_Navigator/avm_q3mod_help.html) to view the manual that explains how to setup RoboRealm
with the AVM module to control the movement and processing of images from the Quake first person video game.
This allows you to work with visual odometry techniques without needing a robot!

The additional software needed for this integration can be downloaded here (http://www.roborealm.com/help/AVM_Navigator/Setup_avm_q3mod.exe).

10-24-2011, 12:25 AM
Now I am working on a new "Watching mode" that allows the robot to respond to motion in the input video.


This mode will be included in the next version, AVM Navigator v0.7.3.

02-04-2012, 10:18 AM
This is a test of a new robot for the AVM Navigator project:


01-06-2014, 08:00 AM
Hi guys,

I'm still working on AVM technology. I have now founded my own company, named Invarivision.com (http://invarivision.com).
We are a small but passionate team of developers working on a system that would be able to watch TV (http://invarivision.com/help.php) and recognize video that interests the user.

And we need your help!

It seems the interface of our search system is good enough, because we have tried to make it simple and user friendly, but from another point of view it could be a total disaster.

Could you please take a look at our system (http://invarivision.com) and then tell us about its good and bad sides?

The constructive criticism is welcome.

With kind regards, EDV.

01-07-2014, 06:01 AM
This is quite different from autonomous robot navigation.
Our search system is also a robot: it watches TV like a human, but does it simultaneously on several TV channels, and this robot can watch TV nonstop and does not get tired :)

Can you explain more about how the system would determine what video interests the user?
The user just uploads a video (http://invarivision.com/help.php) that interests him to the search system, and then the system searches for it on all the channels that are scanned.

The user can also add his own TV channel of interest if the system does not yet have that channel.

So, in other words: the AVM Video Search system (http://invarivision.com/) provides a service that allows customers to audit TV channels or to search for forbidden or copyrighted video in a file, with the help of automatic recognition of the video content.

The main advantage of this system is the direct recognition of separate images during analysis of the video content, which allows searching for very small video fragments, about two seconds long.

01-07-2014, 10:41 AM
Why TV? Broadcast TV is going the way of the steam engine and the fax machine. Make it find YouTube videos or Netflix videos that I may like. (I wonder if the Netflix prize is still available, btw -- maybe you can win a million dollars!)