Awesome work, EDV, so interesting. Keep up the good work.
People yearn after this robotic dream, but you can't strip your life of all meaning, emotion and feeling and expect to function.
Thanks, I will try to go further, and maybe one day machines will see the light and the T-1000 will cease to be a fantasy!
The first iteration of porting the AVM v0.5 SDK to C# is done:
http://edv-detail.narod.ru/AVM_SDK_v0-5-net.zip
This package uses the C# port of OpenCV v2.0 provided by the "Emgu CV" library.
Emgu CV download
About OpenCV v2.0
OpenCV v2.0 download
The test video ".\AVM_SDK.net\bin\RcgTraining4s.avi" uses the XviD codec.
Make sure it is installed on your PC.
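For reference, here is a minimal sketch of opening that test video with the classic Capture API of Emgu CV 2.x (the playback loop and the relative path are only for illustration; the examples shipped in the package show the actual SDK usage):

using System;
using Emgu.CV;           // Emgu CV 2.x core (C# wrapper over OpenCV 2.0)
using Emgu.CV.Structure; // Bgr pixel type

class PlayTestVideo
{
    static void Main()
    {
        // Open the test video shipped with the SDK (requires the XviD codec to decode).
        using (Capture capture = new Capture(@".\AVM_SDK.net\bin\RcgTraining4s.avi"))
        {
            int frameCount = 0;
            Image<Bgr, Byte> frame;
            // QueryFrame() returns null when the end of the video is reached.
            while ((frame = capture.QueryFrame()) != null)
            {
                frameCount++; // a real application would pass the frame to the recognizer here
            }
            Console.WriteLine("Decoded {0} frames", frameCount);
        }
    }
}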
Awesome, I love seeing this type of work; it may help me understand your previous work better! Thanks! +rep for sharing!
I made a more convenient implementation of the AVM algorithm for C# and also prepared detailed documentation: "Library AVM SDK simple.NET".
Just print the file .\AVM_SDK_simple.net\bin\Go.png and show it to the camera for recognition in example 2.
The RoboRealm company has begun distributing the "Navigator" plugin within the RoboRealm software.
Dr. Bruce shared his successful experience with the AVM Navigator plugin:
Leaf can look for a fiducial. Then he can calculate angle and distance using x-rotation and size.
Next, because he knows where in the room the fiducial is located, he can apply some trigonometry
to calculate his exact pose in the room (X Y Orientation). He then goes to a specific place
in the room about 3 feet in front of the door. Next, switching to AVM, he can navigate through
the doorway successfully!
In fact, this was so successful, I then decided to have him try the hallway. He goes down the hallway
stopping when his sonar sensors detect the wall. I then have him programmed to turn clockwise 90 degrees.
And he then finishes by going down a short narrow irregular hallway (previously a difficult challenge).
Here are links to 2 videos:
I have more to do - but this is a great start...
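As a rough illustration of the trigonometry Dr. Bruce describes, here is a minimal sketch; the fiducial's world pose, the pinhole distance formula, and the way the view angle is recovered from the marker's x-rotation are my assumptions, not details from his post:

using System;

class FiducialPose
{
    // Known world pose of the fiducial: position and the direction its face points (radians).
    const double FidX = 5.0, FidY = 2.0, FidFacing = Math.PI;       // hypothetical values
    const double FidWidthMeters = 0.20, FocalLengthPixels = 600.0;  // hypothetical camera/marker data

    // Inputs measured from the image:
    //   widthPixels  - apparent width of the fiducial
    //   bearingRad   - angle of the fiducial off the camera's optical axis (from its x position)
    //   viewAngleRad - angle off the fiducial's normal (recovered from its x-rotation / skew)
    static void Estimate(double widthPixels, double bearingRad, double viewAngleRad)
    {
        // Distance from apparent size (pinhole camera model).
        double dist = FocalLengthPixels * FidWidthMeters / widthPixels;

        // Robot position: step back from the fiducial along the observed view direction.
        double rayAngle = FidFacing + viewAngleRad;          // world direction fiducial -> robot
        double robotX = FidX + dist * Math.Cos(rayAngle);
        double robotY = FidY + dist * Math.Sin(rayAngle);

        // Robot heading: world bearing toward the fiducial minus its bearing in the camera frame.
        double robotTheta = Math.Atan2(FidY - robotY, FidX - robotX) - bearingRad;

        Console.WriteLine("Pose: X={0:F2} Y={1:F2} Theta={2:F1} deg",
                          robotX, robotY, robotTheta * 180.0 / Math.PI);
    }

    static void Main() { Estimate(120.0, 0.10, 0.30); }
}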
Now I am working on developing a new mode for the "AVM Navigator" plugin that will be completely autonomous. This mode will be named "Walking mode", and in it the robot will be able to explore an unknown room environment.
The first stage of this development (obstacle avoidance) has now been passed successfully:
Visual odometry and autonomous mapping will follow...
-= Walking mode - how does it work? =-
*Obstacle avoidance
1. The robot scans for vertical contours (the legs of chairs, corners of walls, doors and jambs, etc.).
If there are vertical contours directly in front of the robot, they are treated as an obstacle
(yellow indicator in the form of a trapezoid), and the robot quietly begins to go around it.
2. The camera has infrared illumination (a headlight), and if the robot comes close to a wall
the lamp produces illumination in the form of white spots on the screen.
Accordingly, if a large white spot appears in the center of the screen, the robot
rolls back and writes the obstacle image to AVM (this case is indicated by a large
red crossed rectangle). Marked obstacles (which are stored in AVM) are indicated
by blue rectangles with the inscription "Obstacle". If the robot sees a marked obstacle
in front of it, it simply goes around it at a distance (this avoids the need to approach
the wall a second time).
3. The robot compares the current input image against the motion history from one second earlier.
If the robot was given the command "forward" but this caused no change in the input image
(nothing happens), then the robot is stuck (its front end has run into the wall). The robot
then drives backwards and writes the obstacle to AVM (which can prevent a collision with
the wall a second time). This case is indicated by a red rectangle with a circle in the
center; a rough sketch of this check follows below.
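A rough sketch of the stuck-detection check from point 3, assuming grayscale pixel buffers for the current frame and the frame from one second earlier (the thresholds are illustrative, not taken from the plugin):

using System;

class StuckDetector
{
    // Returns true if the "forward" command produced almost no change between
    // the frame captured one second ago and the current frame (robot is likely stuck).
    static bool IsStuck(byte[] previousGray, byte[] currentGray, double changeThreshold)
    {
        int changedPixels = 0;
        for (int i = 0; i < currentGray.Length; i++)
        {
            if (Math.Abs(currentGray[i] - previousGray[i]) > 15) // intensity changed noticeably
                changedPixels++;
        }
        return (double)changedPixels / currentGray.Length < changeThreshold;
    }

    static void Main()
    {
        // Two identical dummy frames: "forward" changed nothing, so the robot is stuck
        // and would roll back, writing the current view to AVM as an obstacle.
        byte[] before = new byte[320 * 240];
        byte[] after = new byte[320 * 240];
        Console.WriteLine(IsStuck(before, after, 0.02)
            ? "stuck: back up, store obstacle in AVM"
            : "moving normally");
    }
}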
*Odometry/localisation
The robot sets marks (it writes the central part of the screen image, with associated data, to the AVM tree).
The marker data (inside AVM) contain the horizon angle (azimuth), the path length from the start, and
the X,Y position (relative to the start position). The information for the marker data is based
on mark tracking (horizontal shift for azimuth, and change of mark scale for path-length measurement).
Generalizing over all the marks recognized in the input image gives the current azimuth and path length.
If we know the motion direction, the path length travelled since the previous position, and the x,y
coordinates of the previous position, then we can calculate the coordinates of the current position.
This information is written into each new mark (inside AVM) when it is created, and so on.
The set of marks inside AVM forms a map of locations (the robot sees the marks and recognizes
its location). A dead-reckoning sketch of this step follows below.
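A minimal dead-reckoning sketch of the step just described: given the previous mark's coordinates, the current azimuth, and the path length travelled, compute the coordinates stored in the next mark (the names are mine, not the plugin's):

using System;

class MarkOdometry
{
    // Position data carried in a mark's associated data.
    struct MarkData { public double X, Y, Azimuth, PathLength; }

    // Advance from the previous mark by the distance travelled since it,
    // moving along the current azimuth (0 rad = +X axis, counter-clockwise positive).
    static MarkData NextMark(MarkData previous, double currentAzimuth, double currentPathLength)
    {
        double step = currentPathLength - previous.PathLength;  // distance since previous mark
        return new MarkData
        {
            X = previous.X + step * Math.Cos(currentAzimuth),
            Y = previous.Y + step * Math.Sin(currentAzimuth),
            Azimuth = currentAzimuth,
            PathLength = currentPathLength
        };
    }

    static void Main()
    {
        var start = new MarkData { X = 0, Y = 0, Azimuth = 0, PathLength = 0 };
        // Travelled 1.5 m while heading 30 degrees to the left of the start azimuth.
        var next = NextMark(start, 30.0 * Math.PI / 180.0, 1.5);
        Console.WriteLine("Next mark at X={0:F2} Y={1:F2}", next.X, next.Y);
    }
}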
I finished updating my robot (it is now a nettop + motor/servo controller + local camera + tracked platform):
This allows me to proceed with developing "Walking mode".
>> What does this kaleidoscope in the center of the screen do?
This is an indicator of marker density within the map (the gradation from blue to red means:
blue - no markers in this direction, red - the maximum density of markers in this direction).
It works like a radar that indicates the ratio between the numbers of markers per sector in each
direction, in increments of 5 degrees (72 partitions in a circle). The "blue arrow" shows the direction
of least marker density, and the robot will try to turn in this direction to explore
new uncharted territory (see the sketch below).
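A small sketch of that radar, under the assumption that we simply bin the stored markers into 72 five-degree sectors around the robot and pick the emptiest sector as the direction to explore:

using System;
using System.Collections.Generic;

class MarkerRadar
{
    const int Sectors = 72;                         // 360 / 5 degrees
    const double SectorWidth = 2 * Math.PI / Sectors;

    // Count markers per 5-degree sector around the robot, then return the center
    // angle of the least populated sector (the direction the "blue arrow" points to).
    static double LeastDenseDirection(double robotX, double robotY, List<(double X, double Y)> markers)
    {
        int[] density = new int[Sectors];
        foreach (var m in markers)
        {
            double angle = Math.Atan2(m.Y - robotY, m.X - robotX);
            if (angle < 0) angle += 2 * Math.PI;    // normalize to [0, 2*pi)
            density[(int)(angle / SectorWidth) % Sectors]++;
        }

        int best = 0;
        for (int s = 1; s < Sectors; s++)
            if (density[s] < density[best]) best = s;

        return (best + 0.5) * SectorWidth;          // center of the emptiest sector, in radians
    }

    static void Main()
    {
        var markers = new List<(double X, double Y)> { (1.0, 0.0), (1.0, 0.1), (0.0, 1.0) };
        double dir = LeastDenseDirection(0, 0, markers);
        Console.WriteLine("Explore toward {0:F0} degrees", dir * 180 / Math.PI);
    }
}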
>> What's going on there?
The robot looks around and marks the territory with markers along an arc, then moves forward
and inspects again, and so on. If it sees an obstacle, the robot marks the corresponding marker
in the map as "Obstacle". From then on, the robot, guided by that map marker, will not run
into the obstacle again (red light in the center of the green ellipse).
Especially for improving the navigation algorithm in "Walking mode", a mod for Quake 3
was developed that provides environment emulation and can be used instead of a real robot.