PDA

View Full Version : Using of AVM plugin in RoboRealm



EDV
03-23-2011, 03:07 AM
Since October 2007 I have been developing a new object recognition algorithm, "Associative Video Memory" (AVM) (http://edv-detail.narod.ru/AVM_main.html).


The AVM algorithm uses a principle of multilevel decomposition of recognition matrices;
it is robust against camera noise, scales well, and is simple and quick to train.

Now I want to introduce my experiments with robot navigation based on visual landmark beacons:
"Follow me" and "Walking by gates".

I implemented both algorithms in the Navigator plugin (http://www.roborealm.com/help/AVM_Navigator.php) for use within the RoboRealm software.


The Navigator module has two basic algorithms:

-= Follow me =-

http://www.youtube.com/watch?v=HTxNlOpm11U

The navigation algorithm attempts to align the positions of the turret and the robot body
with the center of the first recognized object in the tracking list; if the object is far away
the robot comes nearer, and if it is too close the robot rolls back.


-= Walking by gates =-

http://www.youtube.com/watch?v=xbCpthKrL0o

The gate data contains weights for the seven routes, indicating the importance of each gateway
for each route. A "horizon" indicator was added at the bottom of the screen; it shows the direction
in which to adjust the robot's motion to continue along the route. A gate's field is painted blue
if the gate does not participate in the current route (weight 0), and warmer colors (up to yellow)
show a gradation of the gate's "importance" in the current route.

* The procedure of training on a route
To train a route you have to indicate the actual route (button "Walking by way") in "Nova gate"
mode and then drive the robot manually along the route (the gates will be installed automatically).
At the end of the route, click the "Set checkpoint" button; the robot will then turn several
times on one spot and mark its current location as a checkpoint.


life before and after Kinect (http://www.rssc.org/life-and-after-kinect)

http://www.rssc.org/sites/default/files/images/100_6561.JPG

Dr. Bruce (http://www.leafproject.org/) shared his successful experience with AVM Navigator plugin (http://www.roborealm.com/help/AVM_Navigator.php):


Leaf can look for a fiducial. Then he can calculate angle and distance using x-rotation and size.
Next, because he knows where in the room the fiducial is located, he can apply some trigonometry
to calculate his exact pose in the room (X Y Orientation). He then goes to a specific place
in the room about 3 feet in front of the door. Next, switching to AVM, he can navigate through
the doorway successfully!

In fact, this was so successful, I then decided to have him try the hallway. He goes down the hallway
stopping when his sonar sensors detect the wall. I then have him programmed to turn clockwise 90 degrees.
And he then finishes by going down a short narrow irregular hallway (previously a difficult challenge).

Here are links to 2 videos:


http://www.youtube.com/watch?v=k_0iMlfQ0Nk


http://www.youtube.com/watch?v=7DQv4OTBh7I

I have more to do - but this is a great start...



A user video of testing the AVM plugin in RoboRealm, building a set of objects:


http://www.youtube.com/watch?v=eBWXcjc0xS8


http://www.youtube.com/watch?v=SE5uaWaI1Mc

EDV
03-24-2011, 04:31 AM
George Peter's blog: "Navigation using object recognition" (http://programmablerobots.blogspot.com/2011/01/navigation-using-object-recognition.html)

bonmot
03-29-2011, 11:34 PM
Wow, this is amazing. What operating system is used in that tank? ROS?
What type of CPU? I am planning a robot project too and want a single-board computer that can also do object recognition/learning.

EDV
03-30-2011, 01:43 AM
In my experiments I used:

Software: Windows 7, RoboRealm\AVM_Navigator (http://www.roborealm.com/help/AVM_Navigator.php)

Hardware:
- tracks platform;
- motor/servo controller (OR-AVR-M128-DS + OR-USB-UART);
- HXT12k (servo for camera rotation);
- Logitech HD Webcam C270;
- 3Q NetTop Qoo (Intel Atom 230 1600Mhz).


http://www.youtube.com/watch?v=0jXW83az20E

bonmot
03-30-2011, 08:29 AM
Thank you. I am planning to use fitpc2 1g ram 1.6g CPU. Do you think it will be fast enough to do what you are doing here?

EDV
03-30-2011, 11:40 AM
This platform (Fit-PC2) has the same CPU as my NetTop and just as much memory. So I think that AVM will work there as well as it does on my platform.

bonmot
03-30-2011, 11:41 AM
Thanks EDV. I just ordered the Fit-PC2. Good to hear that it will be able to do the minimum.

EDV
05-03-2011, 04:19 PM
These materials were shared by the user aka Scorpio (original article in Russian (http://roboforum.ru/forum51/topic9799-15.html#p205691)).

First let's try to sketch navigation to the charging dock station using only "Object recognition" mode. Clearly, the image the robot sees when it is inside the dock station will not be recognized from a far distance. Therefore we will train the module on several pictures taken at different distances. In our case we used approximately four pictures (Base0...Base3). Training was done in the following order: the robot was set exactly facing the dock station, and the interest area for learning was centered horizontally and pointed at the "smile" image vertically. The module can see several objects simultaneously; we deliberately used overlapping views so that the robot could not lose sight of the landmark.

I also made a script for robot control. In this program the global variable NV_COURS indicates the difference between the landmark and the center of the screen. As the landmark we choose the recognized object with the highest similarity rate.


nvObjectsTotal = GetVariable("NV_OBJECTS_TOTAL")
if nvObjectsTotal>0 then ' If any object was found
  nvArrObjRectX = GetArrayVariable("NV_ARR_OBJ_RECT_X")
  nvArrObjRectW = GetArrayVariable("NV_ARR_OBJ_RECT_W")
  nvArrObjSim = GetArrayVariable("NV_ARR_OBJ_SIM")
  'nvArrObjIdx = GetArrayVariable("NV_ARR_OBJ_IDX")
  imageWidth = GetVariable("IMAGE_WIDTH")
  SimMax=0 : ObjIndex=0
  for i=0 to nvObjectsTotal-1 ' Take the object with the biggest similarity
    if nvArrObjSim(i)>SimMax then
      SimMax=nvArrObjSim(i)
      ObjIndex=i
    end if
  next
  ObjCentre = nvArrObjRectX(ObjIndex) + nvArrObjRectW(ObjIndex)/2
  ObjCourse = CInt(ObjCentre - imageWidth/2)
  SetVariable "NV_COURS", ObjCourse
else
  SetVariable "NV_COURS", 999 ' If no object was found
end if

Video of the parking test:


http://www.youtube.com/watch?v=cEFzDclh6sc

Finally we see that the robot confidently arrives at the dock station.

EDV
05-13-2011, 02:51 AM
I was wondering if there is anymore information on the Route Recorder module? It is in RoboRealm but there is no explanation or anything about how to use the module on the documentation page.

Thank you very much,

-Ben

I hope you are speaking about the AVM Navigator plugin:
http://www.roborealm.com/help/AVM_Navigator.php

I plan to share an emulator program (based on a Quake 3 mod) for getting acquainted with the "Marker mode" and "Navigation by map" modes.


http://www.youtube.com/watch?v=9opAZYRsDxU

But unfortunately this emulator is not ready for publishing yet.

So, let's try to use these modes directly on a real robot (step by step).

1. Connect your robot to AVM Navigator:

Just use the variables described below to connect your robot to AVM Navigator:

Use the variable NV_TURRET_BALANCE for camera turning: NV_TURRET_BALANCE indicates the turn degree amount. The value ranges from -100 to 100, with zero being straight ahead.

For motor control, use the NV_L_MOTOR and NV_R_MOTOR variables, which range from -100 to 100 ("-100" - full power backwards, "100" - full power forwards, "0" - motor off).

If you use the Lego NXT platform, you should adjust the NV_L_MOTOR and NV_R_MOTOR variables to the range of the NXT parameters (motors range from 0 to 255 with 128 being neutral) with VBScript:
http://www.roborealm.com/forum/index.php?thread_id=4086#
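For clarity, the same -100..100 to 0..255 conversion as a plain Python sketch (the formula mirrors the `round(128 + (v/100.0)*127)` expression used in the NXT scripts later in this thread; the clamping is my addition):

```python
def to_nxt(value):
    """Map an AVM Navigator motor value (-100..100, 0 = stop)
    to the Lego NXT motor range (0..255, 128 = neutral)."""
    value = max(-100, min(100, value))  # clamp to the documented range
    return round(128 + (value / 100.0) * 127)

print(to_nxt(-100), to_nxt(0), to_nxt(100))  # 1 128 255
```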

2. Call the dialog window of AVM Navigator (click on it in the video processing pipeline) and switch to "Nova gate" mode. Now you can control your robot with the arrow keys, and you can also turn the robot camera with the "Delete" and "Page Down" keys (the "End" key sets the camera to the front position).

3. Move your robot to the start position. The origin of the navigation map will be placed at this position.

4. Reset the marker data by clicking "Clear/mark data" (if needed).

5. Switch to "Marker mode" and manually lead the robot along a route.

6. At the end of the route, switch back to "Nova gate" mode and save the route data (click "Save") to a file; use a name other than the default "avm.dat".

7. Return to the start position, switch to "Navigation by map" mode, and point the robot to a target position on the navigation map (click the left mouse button on the map); the robot will automatically navigate to this point.

Repeat steps 2-6 several times for good detailing of the navigation map.

EDV
05-13-2011, 03:00 AM
In the new version of AVM Navigator v0.7 (http://www.roborealm.com/help/AVM_Navigator.php), two modes have been added: "Marker mode" and "Navigation by map".

Marker mode

http://www.roborealm.com/help/Navigator_wnd_4.png

Marker mode provides the forming of a navigation map, which is built automatically by marking the space. You just manually lead the robot along some path and repeat it several times for good map detailing.

Navigation by map

http://www.roborealm.com/help/Navigator_wnd_5.png

In this mode you point to the target position on the navigation map; the robot then plans a path (maze solving) from its current location to the target position (big green circle) and automatically begins walking to it.


http://www.roborealm.com/help/Navigation_map_2.png

For external control of "Navigation by map" mode, new module variables have been added:

NV_LOCATION_X - current location X coordinate;
NV_LOCATION_Y - current location Y coordinate;
NV_LOCATION_ANGLE - horizontal angle of robot in current location (in radians);


Target position at the navigation map
NV_IN_TRG_POS_X - target position X coordinate;
NV_IN_TRG_POS_Y - target position Y coordinate;

NV_IN_SUBMIT_POS - submits the target position (the value should be toggled 0 -> 1 to trigger the action).
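As a sketch of how these variables fit together (Python purely for illustration; in RoboRealm this would be a VBScript calling SetVariable), submitting a target could look like:

```python
def submit_target(set_var, x, y):
    """Hand a target position to "Navigation by map" mode.

    `set_var` stands in for RoboRealm's SetVariable call; the variable
    names are from the list above. NV_IN_SUBMIT_POS must make a
    0 -> 1 transition for the new target to be accepted.
    """
    set_var("NV_IN_TRG_POS_X", x)
    set_var("NV_IN_TRG_POS_Y", y)
    set_var("NV_IN_SUBMIT_POS", 0)  # make sure the rising edge is visible
    set_var("NV_IN_SUBMIT_POS", 1)  # 0 -> 1 submits the target

# Collect the writes in a dict to show the end state:
state = {}
submit_target(lambda name, value: state.__setitem__(name, value), 120, 75)
```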

Examples


http://www.youtube.com/watch?v=5qKQg5e4wbc


http://www.youtube.com/watch?v=WdfWt66rEsk

EDV
05-13-2011, 06:12 AM
We are trying to track an octagonal object using AVM. According to the project we must follow the object from different locations, e.g. from its right or back. So we drew eight different patterns, one on each face of the object, so we can tell where our robot is relative to the object. To do that we trained 3 faces at once so we can track the object through all 360 degrees. But training 3 faces takes time (turning the object, etc.), and doing it fast causes low recognition quality.
Object's image and the script we are using (http://www.roborealm.com/forum/program.php?post_id=22527) are posted.

Soner

http://www.roborealm.com/uploads/22527_1.png



The training time of AVM is not a problem in this case. If you do not have enough time to train on some object, just repeat the training one more time with the same object (choose the same object in the "Enter object name" dialog window).

EDV
05-14-2011, 11:06 AM
Hello, how does the obstacle detection of the AVM Navigator work when used within navigation by map? I often get a red triangle with the message "Obstacle" and the robot goes backward, but there is no obstacle... Is it possible to deactivate or bypass this obstacle detection?

Thanks for your help.

Fabrice.

How does the obstacle avoidance algorithm work?

The robot processes the motion history of the input images over the previous second. If any commands were given (i.e. the "forward" command) but no changes were seen in the input image (nothing happened), then the robot is assumed to be stuck (i.e. pushing against a wall). When stuck, the module sets the movement variables to move backwards. The stuck situation is indicated by a red rectangle with a circle in the center.
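As a rough illustration of the idea (not the module's actual code; the threshold and the frame handling are invented):

```python
def looks_stuck(first_frame, last_frame, moving, min_change=5.0):
    """Toy version of the stuck check described above: a move command
    was active, yet the camera image barely changed over ~1 second.

    Frames are flat lists of grayscale pixel values; the 5.0 threshold
    is an invented illustration, not the module's real constant."""
    if not moving:
        return False
    diff = sum(abs(a - b) for a, b in zip(first_frame, last_frame))
    mean_diff = diff / len(first_frame)
    return mean_diff < min_change

# Driving forward while the image stays identical reads as "stuck":
print(looks_stuck([10, 20, 30], [10, 20, 30], moving=True))   # True
print(looks_stuck([10, 20, 30], [90, 20, 30], moving=True))   # False
```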

I think your robot sees a picture that has too few details (not enough contrasting objects), and when the robot is given the "forwards" command it sees that nothing changes. This situation can be caused by low lighting (the robot sees darkness) or by bare walls without any texture.

Unfortunately, the current version of AVM Navigator has no way to disable this algorithm, but I will include this option in the next modification.

So you could try to bring some objects with contrasting textures into the robot's environment, or try to increase the lighting.

EDV
05-17-2011, 09:08 AM
Hey, I was just wondering if it is possible to merge my existing RR program with AVM. I want to be able to control my bot with my Playstation remote (which already works) and still have AVM do its thing. Attached is my file (http://www.roborealm.com/forum/program.php?post_id=22553); please take a look and let me know if it can be done.

I assume I would use the AVM NV_L_MOTOR & NV_R_MOTOR variables in some way?


I think that merging manual control with the Navigator's automatic control needs changes in your VBScript program like this:


' This program translates the joystick range of -1000..1000 to 0..255. This number
' can then be fed directly to the robot for speed and turning

' amount of y joystick determines speed
speed = 128 - CInt(((GetVariable("joy_y") * 128) / 1000))
' amount of x joystick determines rotation
turn = CInt(((GetVariable("joy_x") * 128) / 1000))

' determine intermediate values
manual_left = speed + turn
manual_right = speed - turn

'ensure they are within the range 0 - 255
if manual_left<0 then manual_left=0
if manual_right<0 then manual_right=0
if manual_left>255 then manual_left=255
if manual_right>255 then manual_right=255

nv_L_Motor = GetVariable("NV_L_MOTOR")
nv_R_Motor = GetVariable("NV_R_MOTOR")

nxt_L_Motor = round(128 + (nv_L_Motor/100.0)*127)
nxt_R_Motor = round(128 + (nv_R_Motor/100.0)*127)

manual_control = GetVariable("MANUAL_CONTROL")

if manual_left <> 128 OR manual_right <> 128 then manual_control = 100

if manual_control = 0 then mleft = nxt_L_Motor else mleft = manual_left
if manual_control = 0 then mright = nxt_R_Motor else mright = manual_right

if manual_control > 0 then manual_control = manual_control - 1

' finally set the variables to be used in the joystick module
SetVariable "left_motor", mleft
SetVariable "right_motor", mright
SetVariable "MANUAL_CONTROL", manual_control


If manual control is activated, it sets the MANUAL_CONTROL variable to 100. Control is switched to manual in this case and stays in this state for about 4 seconds after the last manual command (while MANUAL_CONTROL counts down).
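The arbitration logic can be sketched on its own (Python for illustration; the 100-frame hold matches the MANUAL_CONTROL = 100 countdown above, and the ~4 s figure assumes roughly 25 fps):

```python
class ControlArbiter:
    """Sketch of the override scheme above: manual joystick input loads
    a countdown; while it is nonzero the manual values win, then control
    falls back to the Navigator."""

    HOLD_FRAMES = 100  # matches MANUAL_CONTROL = 100 in the script

    def __init__(self):
        self.countdown = 0

    def step(self, man_l, man_r, nav_l, nav_r):
        if man_l != 128 or man_r != 128:   # 128 = neutral joystick
            self.countdown = self.HOLD_FRAMES
        if self.countdown > 0:
            self.countdown -= 1
            return man_l, man_r            # manual override active
        return nav_l, nav_r                # Navigator in control
```

On each pipeline iteration the script would call `step()` with the joystick values and the NV_L_MOTOR/NV_R_MOTOR values, then write the result to the motor variables.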

EDV
05-18-2011, 03:09 AM
Hi there, it's CJ. I just wanted to ask: I am trying to use AVM with my NXT, and I currently have my robot's head on a turnable base. However, when I make it learn an object and make a motor variable equal the turret balance, it continuously turns left; it won't stop or turn right or anything. Do I need a VBScript? If so, could you please point me in the right direction, as I have never used these? Could you please make one for me if necessary so I can study it? Thanks in advance.

In this case you should use a servo for camera turning and the NV_TURRET_BALANCE variable. But if you use a simple motor instead of a servo, you could use the NV_TURRET_LEFT and NV_TURRET_RIGHT variables and a VBScript program like this:


nv_L_Motor = GetVariable("NV_L_MOTOR")
nv_R_Motor = GetVariable("NV_R_MOTOR")
nv_Turret_Left = GetVariable("NV_TURRET_LEFT")
nv_Turret_Right = GetVariable("NV_TURRET_RIGHT")


nxt_L_Motor = round(128 + (nv_L_Motor/100.0)*127)
nxt_R_Motor = round(128 + (nv_R_Motor/100.0)*127)

nxt_Turret_Motor = 128

if nv_Turret_Left <> 0 then nxt_Turret_Motor = 64
if nv_Turret_Right <> 0 then nxt_Turret_Motor = 192


SetVariable "NXT_L_MOTOR", nxt_L_Motor
SetVariable "NXT_R_MOTOR", nxt_R_Motor
SetVariable "NXT_TURRET_MOTOR", nxt_Turret_Motor

For more detail see attached program file (http://www.roborealm.com/forum/program.php?post_id=22564).

EDV
05-19-2011, 12:59 AM
Thanks so much, I finally got it working. I just had to make a few number changes as the head is moved by a worm gear. Thanks again :)

But a servo would be a better solution for camera turning because it participates in the initial localization of the "Marker mode" and "Navigation by map" modes, which needs the camera set to the "look forwards" state.

EDV
05-19-2011, 01:08 PM
But it's an NXT motor, and under servos I don't see NXT, so I have to use the method above, right?

In the dialog window of the RoboRealm "Lego NXT" module there is a "Servos" page. So if you have servos connected to your NXT, it would be better to use one of them for camera turning.

EDV
05-27-2011, 08:01 PM
The next modification, AVM Navigator v0.7.1, is released.

Changes:
1. New "Move/Turn speed" parameters were added to the Navigator dialog window:

http://forums.trossenrobotics.com/attachment.php?attachmentid=2617&stc=1&d=1306544338

Move/Turn speed – definition of the maximal speed for robot moving and turning.
These parameters can limit the values of the control variables.

2. An option to disable obstacle detection was added to the Navigator dialog window.
3. Special control variables were added (range from 0 to 255 with 128 being neutral).
You can use them directly with Lego NXT or other RoboRealm control modules:

-== Additional control variables for Lego NXT ==-
NV_L_MOTOR_128, NV_R_MOTOR_128 - motors control
NV_TURRET_128 - control of camera turning
NV_TURRET_INV_128 - inverted control of camera turning
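The module's internals are not published, but the described effect of the new speed limits can be sketched as a simple clamp (illustrative only):

```python
def clamp_speed(value, max_speed):
    """Guess at the stated effect of the "Move/Turn speed" parameters:
    cap a -100..100 control variable at a configured maximal speed."""
    return max(-max_speed, min(max_speed, value))

print(clamp_speed(100, 60), clamp_speed(-85, 60))  # 60 -60
```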

EDV
06-04-2011, 12:31 PM
Quake 3 Mod

http://www.roborealm.com/help/AVM_Navigator/Quake_Thumb.jpg

Don't have a robot just yet? Then click here (http://www.roborealm.com/help/AVM_Navigator/avm_q3mod_help.html) to view the manual that explains how to set up RoboRealm
with the AVM module to control movement in, and process images from, the Quake first-person video game.
This allows you to work with visual odometry techniques without needing a robot!

The additional software needed for this integration can be downloaded here (http://www.roborealm.com/help/AVM_Navigator/Setup_avm_q3mod.exe).

EDV
06-06-2011, 01:32 PM
I purchased the AVM Module plugin from the RoboRealm website last Saturday.
Now I am able to use the Navigator plugin to track an eye (front face position) in Navigator mode. As a next step, I plan to drive the turret PTU motors using the control variables from AVM Navigator.

However, I am not sure how to generate a position command (0 to 255) for pan and tilt motor positioning using these variables. Is any sample program.robo file available to help understand position command generation using the above variables?

I think you should use a VBScript program like this:


' Get turret control variables
turret_v = GetVariable("TURRET_V_CONTROL")
turret_h = GetVariable("TURRET_H_CONTROL")

nvObjectsTotal = GetVariable("NV_OBJECTS_TOTAL")

if nvObjectsTotal>0 then ' If any object was found
  ' Get image size
  img_w = GetVariable("IMAGE_WIDTH")
  img_h = GetVariable("IMAGE_HEIGHT")

  ' Get array variables of recognized objects
  nvArrObjRectX = GetArrayVariable("NV_ARR_OBJ_RECT_X")
  nvArrObjRectY = GetArrayVariable("NV_ARR_OBJ_RECT_Y")
  nvArrObjRectW = GetArrayVariable("NV_ARR_OBJ_RECT_W")
  nvArrObjRectH = GetArrayVariable("NV_ARR_OBJ_RECT_H")

  ' Get center coordinates of first object from array
  obj_x = nvArrObjRectX(0) + nvArrObjRectW(0)/2
  obj_y = nvArrObjRectY(0) - nvArrObjRectH(0)/2

  ' Get difference between object and screen centers
  dX = img_w/2 - obj_x
  dY = img_h/2 - obj_y

  if dX > 40 and turret_h > -128 then
    ' The object is at the left side
    turret_h = turret_h - 1
  end if

  if dX < -40 and turret_h < 127 then
    ' The object is at the right side
    turret_h = turret_h + 1
  end if

  if dY > 40 and turret_v > -128 then
    ' The object is at the bottom
    turret_v = turret_v - 1
  end if

  if dY < -40 and turret_v < 127 then
    ' The object is at the top
    turret_v = turret_v + 1
  end if
end if

turret_v128 = turret_v + 128
turret_h128 = turret_h + 128

' Set turret control variables
SetVariable "TURRET_V_CONTROL", turret_v
SetVariable "TURRET_H_CONTROL", turret_h
SetVariable "TURRET_V_128", turret_v128
SetVariable "TURRET_H_128", turret_h128

=============================
TURRET_H_128 - horizontal turret control
TURRET_V_128 - vertical turret control

See attached program (http://www.roborealm.com/forum/program.php?post_id=22666).

EDV
06-06-2011, 02:12 PM
Is it possible to play with a virtual robot in "Navigation by map" mode?

Yes!

http://www.roborealm.com/help/AVM_Navigator/q3mod_route_training.png

Just look into documentation (http://www.roborealm.com/help/AVM_Navigator/avm_q3mod_help.html) and download the "AVM Quake 3 mod" installation (http://www.roborealm.com/help/AVM_Navigator/Setup_avm_q3mod.exe).

Stobs
06-06-2011, 09:50 PM
I don't know if it's because I'm unfamiliar with RoboRealm, but I find this thread to be quite impressive. Thank you for your posts EDV!

Regards

EDV
06-07-2011, 01:58 AM
I believe that machines will finally be able to see the real world in the near future, and this faith helps me in my development.

You're welcome and thank you for your support!

EDV
06-20-2011, 02:38 AM
The next modification, AVM Navigator v0.7.2 (http://www.roborealm.com/help/AVM_Navigator.php), is released.

Changes:
The "Navigation map" was placed into the main Navigator dialog window.
Map scrolling and resizing abilities were also added.

http://forums.trossenrobotics.com/attachment.php?attachmentid=2654&stc=1&d=1308555285

EDV
06-27-2011, 01:10 PM
A user video of face tracking with AVM:


http://www.youtube.com/watch?v=rZv5k3YIXVQ

EDV
07-10-2011, 02:32 PM
The next modification, AVM Navigator v0.7.2.1 (http://www.roborealm.com/help/AVM_Navigator.php), is released.

Changes:
The visual odometry algorithm was updated:

http://www.roborealm.com/help/AVM_Navigator_movie_11.jpg

http://www.youtube.com/watch?v=Ggv8MaeJcTo

EDV
07-14-2011, 06:05 PM
What exactly are the steps that you go through to do the "follow me"? I see the video, but don't see the robo file.

Thanks,

Mel

1. Connect your robot to AVM Navigator.

Just use the variables described below to connect your robot to AVM Navigator:

Use the variable NV_TURRET_BALANCE for camera turning:
NV_TURRET_BALANCE indicates the turn degree amount.
The value ranges from -100 to 100, with zero being straight ahead.

For motor control, use the NV_L_MOTOR and NV_R_MOTOR variables,
which range from -100 to 100
("-100" - full power backwards, "100" - full power forwards, "0" - motor off).

You can also use the alternative control variables
(motors range from 0 to 255 with 128 being neutral):

NV_L_MOTOR_128, NV_R_MOTOR_128 - motors control
NV_TURRET_128 - control of camera turning
NV_TURRET_INV_128 - inversed control of camera turning


2. Then train AVM on some object in "Object recognition" mode
and switch to "Navigate mode" to activate the control variables (NV_L_MOTOR,
NV_R_MOTOR, NV_TURRET_BALANCE and others). The "Follow me" algorithm
(Navigate mode) will be activated when you show your robot an object that it learned earlier.


http://www.youtube.com/watch?v=HTxNlOpm11U

EDV
07-23-2011, 02:06 AM
Hi,

I am using the AVM_Navigator plugin for human eye tracking in my academic project.
During eye recognition and tracking, I have observed that the Navigator also displays
the tracking trajectory within the object image. This is a hurdle, as I want to take
a snapshot of the tracked eye. Is there any setting in the AVM_Navigator window to disable
the tracking trajectory when a particular object is being tracked? I am attaching a sample image
with the trajectory highlighted in sky-blue.

http://www.roborealm.com/uploads/22931_thumb_1.jpg (http://www.roborealm.com/uploads/22931_1.jpg)

Secondly, how can I reduce the window size (used to learn an object
in recognition mode) of the recognition module?

The Navigator package is updated now, and you can download the next modification, AVM Navigator v0.7.2.2 (http://www.roborealm.com/help/AVM_Navigator.php),
from your account link.

You can now disable the object trajectory in the Navigator dialog window with the "Show the object trajectory" checkbox
in "Object recognition" mode.

To reduce the object window size:
Click "Set key image size (New)" in the Navigator dialog window (object recognition mode) and
answer "No" to the question "Do you want to set 80x80 recommended key size?".

Then choose the 40x40 key image size (click the left mouse button) and answer "Yes" in the next dialog window.

Now you can train AVM on objects with a resolution of 40x40 pixels.

EDV
08-09-2011, 01:54 PM
I have made a new plugin for RoboRealm (http://www.roborealm.com/help/EDV_DVR.php):

http://www.roborealm.com/help/EDV_DVR_thumb.jpg (http://www.roborealm.com/help/EDV_DVR.php)

Digital Video Recording system (DVR) (http://www.roborealm.com/help/EDV_DVR.php)

You can use the "DVR Client-server" package as a video surveillance system in which parametric data
(such as VR_VIDEO_ACTIVITY) from different video cameras will help you search for the video fragment
that you are looking for.

You can also use the "DVR Client-server" package as a powerful instrument for debugging your video processing
and control algorithms: it provides access to the values of your algorithm variables that were archived
during recording.


Technical Details

- ring video/parametric archive with a duration of 1-12 months;

- configurable database record (for parametric data) with a maximal length of 190 bytes;

- writing of parameters to the database at 250 ms intervals;

- the DVR Client can work simultaneously with four databases, which can be located on remote computers.
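From these figures one can estimate the worst-case growth of the parametric archive (back-of-envelope only, assuming every record uses the maximal 190 bytes):

```python
# Rough size of the parametric archive from the figures above:
# at most 190 bytes per record, one record every 250 ms.
record_bytes = 190
records_per_second = 1000 / 250
bytes_per_day = record_bytes * records_per_second * 86400
print(round(bytes_per_day / 2**20, 1), "MiB/day")  # 62.6 MiB/day
```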


http://www.roborealm.com/help/dvr_principle.gif

EDV
08-16-2011, 01:46 AM
Is it possible to have the missile launcher track an object, and when centered on the object fire a shot?

Ben

I think you could use AVM Navigator (http://www.roborealm.com/help/AVM_Navigator.php) with a VBScript program like this:


' Get turret control variables
turret_v = GetVariable("TURRET_V_CONTROL")
turret_h = GetVariable("TURRET_H_CONTROL")
turret_f = GetVariable("TURRET_FIRE")


nvObjectsTotal = GetVariable("NV_OBJECTS_TOTAL")

if nvObjectsTotal>0 then ' If any object was found
  ' Get image size
  img_w = GetVariable("IMAGE_WIDTH")
  img_h = GetVariable("IMAGE_HEIGHT")

  ' Get array variables of recognized objects
  nvArrObjRectX = GetArrayVariable("NV_ARR_OBJ_RECT_X")
  nvArrObjRectY = GetArrayVariable("NV_ARR_OBJ_RECT_Y")
  nvArrObjRectW = GetArrayVariable("NV_ARR_OBJ_RECT_W")
  nvArrObjRectH = GetArrayVariable("NV_ARR_OBJ_RECT_H")

  ' Get center coordinates of first object from array
  obj_x = nvArrObjRectX(0) + nvArrObjRectW(0)/2
  obj_y = nvArrObjRectY(0) - nvArrObjRectH(0)/2

  ' Get difference between object and screen centers
  dX = img_w/2 - obj_x
  dY = img_h/2 - obj_y

  threshold = 40

  if dX > threshold and turret_h > -128 then
    ' The object is at the left side
    turret_h = turret_h - 1
  end if

  if dX < -threshold and turret_h < 127 then
    ' The object is at the right side
    turret_h = turret_h + 1
  end if

  if dY > threshold and turret_v > -128 then
    ' The object is at the bottom
    turret_v = turret_v - 1
  end if

  if dY < -threshold and turret_v < 127 then
    ' The object is at the top
    turret_v = turret_v + 1
  end if

  ' Is the target locked?
  if dX < threshold and dX > -threshold and dY < threshold and dY > -threshold then
    turret_f = 1
  else
    turret_f = 0
  end if
else
  ' Back to the center if the object is lost
  if turret_h > 0 then turret_h = turret_h - 1
  if turret_h < 0 then turret_h = turret_h + 1
  if turret_v > 0 then turret_v = turret_v - 1
  if turret_v < 0 then turret_v = turret_v + 1

  turret_f = 0
end if

turret_v128 = turret_v + 128
turret_h128 = turret_h + 128

' Set turret control variables
SetVariable "TURRET_V_CONTROL", turret_v
SetVariable "TURRET_H_CONTROL", turret_h
SetVariable "TURRET_FIRE", turret_f

SetVariable "TURRET_V_128", turret_v128
SetVariable "TURRET_H_128", turret_h128


=============================
TURRET_H_128 - horizontal turret control
TURRET_V_128 - vertical turret control
TURRET_FIRE - fire signal

See attached program (http://www.roborealm.com/forum/program.php?post_id=23024).


I tested it on the prototype of the "Twinky rover (http://www.youtube.com/watch?v=Fkjpma1oWAg)" and it works fine, as you can see :happy:


http://www.youtube.com/watch?v=pt2y7xkiTXo


This is a test of the enhanced tracking algorithm that takes variable servo speed into consideration:


http://www.youtube.com/watch?v=ueqDhuHiR-E

See VBScript program (http://www.roborealm.com/forum/program.php?post_id=24293) and diagram below for more details:

3966 (http://www.roborealm.com/uploads/24293_1.png)

EDV
08-16-2011, 12:46 PM
I never learned VBScript, but I think I semi-understand it. To help out a newbie, where does the motor control come in?

Thanks!!!

Ben

The source data for turret control is the deviation of the object center from the center of the screen (variables dX, dY).
This deviation then influences the intermediate control variables "turret_h" and "turret_v", which range from -128 to 127.


if dX > threshold and turret_h > -128 then
' The object is at left side
turret_h = turret_h - 1
end if

if dX < -threshold and turret_h < 127 then
' The object is at right side
turret_h = turret_h + 1
end if

if dY > threshold and turret_v > -128 then
' The object is at the bottom
turret_v = turret_v - 1
end if

if dY < -threshold and turret_v < 127 then
' The object is at the top
turret_v = turret_v + 1
end if

The "threshold" variable works like hysteresis, preventing self-oscillation of the turret.


if dX < threshold and dX > -threshold and _
   dY < threshold and dY > -threshold then
  turret_f = 1
else
  turret_f = 0
end if

When the object is found in the center of the screen, the fire signal is activated (variable turret_f).
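The lock-on condition can be restated compactly (Python for illustration; same 40-pixel threshold as the script):

```python
def fire_ready(dX, dY, threshold=40):
    """The lock-on test above as one expression: fire only while both
    center offsets sit inside the +/- threshold dead band."""
    return abs(dX) < threshold and abs(dY) < threshold

print(fire_ready(0, 0), fire_ready(50, 0))  # True False
```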

EDV
08-16-2011, 01:33 PM
To use that VBScript you wrote for real, what would need to be modified?

I think that this VBScript program is complete and does not need any changes.
You can use the variables TURRET_V_CONTROL and TURRET_H_CONTROL (range from -128 to 127)
or the variables TURRET_H_128 and TURRET_V_128 (range from 0 to 255 with 128 being neutral)
for servo pan-tilt control. You can also use the Scale_Variable module for scaling into any convenient range:
http://www.roborealm.com/help/Scale_Variable.php
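Scale_Variable's job is a plain linear remap; as an illustration (the output range below is just an example, pick whatever your servo controller expects):

```python
def scale(value, in_min, in_max, out_min, out_max):
    """Linear rescaling in the spirit of RoboRealm's Scale_Variable
    module, e.g. mapping a -128..127 turret value onto a 0..255 range."""
    ratio = (value - in_min) / (in_max - in_min)
    return out_min + ratio * (out_max - out_min)

print(scale(127, -128, 127, 0, 255))  # 255.0
```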

And you can use the Serial module for transmitting the variable values to your robot/turret:
http://www.roborealm.com/help/Serial.php

or any other module like these:
http://www.roborealm.com/help/Lego_NXT.php
http://www.roborealm.com/help/Endurance_Servo.php

EDV
08-16-2011, 02:06 PM
What’s the range for AVM Navigator?

The AVM Navigator (http://www.roborealm.com/help/AVM_Navigator.php) is an additional module distributed along with RoboRealm.
The VBScript program uses the variables NV_OBJECTS_TOTAL, NV_ARR_OBJ_RECT_X,
NV_ARR_OBJ_RECT_Y, NV_ARR_OBJ_RECT_W and NV_ARR_OBJ_RECT_H
produced by AVM Navigator as the source data for object pointing:

NV_OBJECTS_TOTAL - total number of recognized objects
NV_ARR_OBJ_RECT_X - left-top corner X coordinate of recognized object
NV_ARR_OBJ_RECT_Y - left-top corner Y coordinate of recognized object
NV_ARR_OBJ_RECT_W - width of recognized object
NV_ARR_OBJ_RECT_H - height of recognized object

EDV
08-16-2011, 02:46 PM
Ok, so for Scale_Variable: my motor does not have a range (it is the kind of motor used in electric scooters/wheelchairs). So how do I use that?

OK, let's try another variant of the VBScript program:


' Set the turret control variables
turret_h = 0
turret_v = 0
turret_f = 0

nvObjectsTotal = GetVariable("NV_OBJECTS_TOTAL")

if nvObjectsTotal>0 then ' If any object was found
  ' Get image size
  img_w = GetVariable("IMAGE_WIDTH")
  img_h = GetVariable("IMAGE_HEIGHT")

  ' Get array variables of recognized objects
  nvArrObjRectX = GetArrayVariable("NV_ARR_OBJ_RECT_X")
  nvArrObjRectY = GetArrayVariable("NV_ARR_OBJ_RECT_Y")
  nvArrObjRectW = GetArrayVariable("NV_ARR_OBJ_RECT_W")
  nvArrObjRectH = GetArrayVariable("NV_ARR_OBJ_RECT_H")

  ' Get center coordinates of first object from array
  obj_x = nvArrObjRectX(0) + nvArrObjRectW(0)/2
  obj_y = nvArrObjRectY(0) - nvArrObjRectH(0)/2

  ' Get difference between object and screen centers
  dX = img_w/2 - obj_x
  dY = img_h/2 - obj_y

  threshold = 40

  if dX > threshold then
    ' The object is at the left side
    turret_h = -1
  end if

  if dX < -threshold then
    ' The object is at the right side
    turret_h = 1
  end if

  if dY > threshold then
    ' The object is at the bottom
    turret_v = -1
  end if

  if dY < -threshold then
    ' The object is at the top
    turret_v = 1
  end if

  ' Is the target locked?
  if dX < threshold and dX > -threshold and _
     dY < threshold and dY > -threshold then
    turret_f = 1
  end if
end if

' Set turret control variables
SetVariable "TURRET_V_CONTROL", turret_v
SetVariable "TURRET_H_CONTROL", turret_h
SetVariable "TURRET_FIRE", turret_f

=================================
TURRET_H_CONTROL - horizontal turret control (0 - motor off, 1 - right turn, -1 - left turn)

TURRET_V_CONTROL - vertical turret control (0 - motor off, 1 - turn it upward, -1 - turn it downward)

TURRET_FIRE - fire signal

See attached program (http://www.roborealm.com/forum/program.php?post_id=23034).

EDV
08-17-2011, 01:15 AM
I'll load it up and try it. Which reminds me, where exactly do I load VBscript?

First you should download The Microsoft® Windows® Script Control (http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=1949).

See VBScript module guide (http://www.roborealm.com/help/VBScript_Program.php) for more details.

Then download the VBScript program (http://www.roborealm.com/forum/program.php?post_id=23034).

Start RoboRealm and load the "program.robo" file (click the "Open" button in the main dialog window).

Slugman
08-17-2011, 03:57 AM
Hi EDV. Sorry if I am asking a silly question, but how do I load the Navigator plugin in Roborealm?
I now have Roborealm (Thanks Trossen!) which has the plugin on the CD, but if I open the RR program, & look in "Contents" on the left hand side of the screen, then open the "Plugins" folder, there are only two plugins - "Sample_Config" & "SwapColor". The "DVR_Server" & "Navigator" plugins don't appear at all.
If I use the search button & type "nav" then nothing comes up. :sad:
If I type "Plugin" into the search button, it first finds "Photoshop_Host", then when I press enter, it opens a box for importing plugins, but it only wants plugins for photoshop, so when I find & select the Navigator DLL in the RR program folder, it won't accept it. I can't find anything in the RR website on how to load your plugin, so it can't be that hard.
What am I doing wrong? :confused:

EDV
08-18-2011, 02:36 AM
I forwarded your question to Steven and received reply:

Yes, the copy they have from Trossen will NOT include the AVM plugin as
it is a commercial copy and NOT a trial version. They need to register
their licensed copy from which they installed via the received CD (by
accessing Options Dialog->Download button) and then purchase the AVM
plugin after registration is complete. They should use the same email to
register that they would use in paypal in order to make the purchase
process easy.

If they have any problems with this process they can contact us directly
using the site's contact form and we'll help them sort things out.

They would still need to purchase the AVM plugin though as this is not
included in the RoboRealm CD.

STeven.

Slugman
08-18-2011, 05:37 PM
OK, thanks. I have already registered, so I just need to purchase the AVM plugin. I have had issues before with Windows 7, & had a horrible feeling that it might be yet another compatibility problem. The main reason I went to Roborealm was because I had seen your videos of what was possible! I'll contact them soon-ish. :happy:

EDV
08-19-2011, 01:35 AM
Thanks, I'll play around with it. So AVM Navigator is loaded when you load the VBScript, right?

Yes, the AVM Navigator will be placed into the video-processing pipeline when you load the "program.robo" file.


Have you ever tried mapping with a Kinect?

The main idea of the AVM Navigator project is to use only visual images from one video camera as the input sensory information
for robot navigation (it does not rely on any other sensors such as Kinect, laser, infrared or acoustic detectors, etc.).


So does AVM Navigator "warn" itself before bumping into anything?

You can use the emulator program (based on a Quake 3 mod) to get acquainted with the "Marker mode" and "Navigation by map" modes.

See "AVM Quake 3 mod" manual (http://www.roborealm.com/help/AVM_Navigator/avm_q3mod_help.html) for more details.


You can download it from: www.roborealm.com/Setup_avm_q3mod.exe
(http://www.roborealm.com/help/AVM_Navigator/Setup_avm_q3mod.exe)
How "Navigation by Map" works.

Obstacle avoidance

The robot processes the motion history of input images for the previous second. If any commands were given
(i.e. the "forward" command) but no changes were seen in the input image (nothing happened), then the robot
is assumed to be stuck (i.e. the robot is pushing against a wall). When stuck, the module will set the movement
variables to move backwards. The stuck situation is indicated by a red rectangle with a circle in the center.
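The decision rule above can be sketched in Python (an illustrative outline only; the module's actual frame-difference measure and thresholds are not documented here):

```python
def classify_motion(command_given, image_changed):
    """Compare commands issued over the last second with observed image change.

    command given but static image -> stuck (e.g. pushing against a wall)
    no command but image changed   -> robot was displaced externally
    """
    if command_given and not image_changed:
        return "stuck"      # module reacts by setting movement variables backwards
    if not command_given and image_changed:
        return "displaced"  # triggers re-localization (looking around)
    return "ok"
```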

Odometry / localization

The robot sets marks (it writes the central part of the screen image, with associated data, into the AVM tree).
Marker data (inside AVM) contains the horizon angle (azimuth), the path length from the start, and the X, Y location
(relative to the start position). The marker data is based on mark tracking (horizontal shift
for azimuth and change of mark scaling for path-length measurement). Generalization of all recognized mark data
in the input image gives the actual azimuth and path length. If we have the motion direction, the path length
from the previous position, and the x, y coordinates of the previous position, then we
can calculate the coordinates of the current position. This information is written into the new mark
(inside AVM) when it is created, and so forth.

The set of marks inside the AVM gives a map of locations (the robot sees the marks and recognizes its location).
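The dead-reckoning update described above can be sketched as follows (an illustrative Python fragment; the function name and the convention of azimuth in radians are assumptions, not the module's API):

```python
import math

def next_position(x, y, azimuth, step):
    """Advance (x, y) by 'step' along direction 'azimuth' (radians, 0 = +X axis)."""
    return x + step * math.cos(azimuth), y + step * math.sin(azimuth)

# Coordinates written into a new mark are derived from the previous mark:
x, y = next_position(0.0, 0.0, 0.0, 10.0)  # moving 10 units along the X axis
```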

EDV
08-19-2011, 12:41 PM
Is there any way to map out entire rooms? (Or houses?) So you could click to go to this corner of a room, etc. So instead of paths, rooms?

You just set up visual marks that are associated with map location coordinates in "Marker mode" (this does not create any path by itself),
and then in "Navigation by map" mode the robot will plan a path from the current location to the target position (maze solving) that you point to.

Thus, if you show every inch of your rooms to the robot (in "Marker mode"),
it will be able to move to any position in your house:


http://www.youtube.com/watch?v=5qKQg5e4wbc
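The path planning ("maze solving") mentioned above can be illustrated with a breadth-first search over a grid of locations. This is only a sketch under the assumption of a simple occupancy grid; the module's internal map format is not documented here:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on a 2D occupancy grid (0 = free, 1 = obstacle).

    Returns the list of cells from start to goal, or None if unreachable.
    """
    came_from = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:  # reconstruct the path by walking parents backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in came_from):
                came_from[nxt] = cell
                queue.append(nxt)
    return None

# A wall in the middle row forces a detour around its open end.
path = plan_path([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0))
```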

EDV
08-22-2011, 11:13 AM
Scorpio presented his great project, the robot "Vanessa", which also uses AVM Navigator (http://www.roborealm.com/help/AVM_Navigator.php) for spatial orientation:


http://www.youtube.com/watch?v=LyBPsznn0Fc


Originally Posted by Scorpio:

"Vanessa" is an interactive mobile robot designed for debugging orientation algorithms in closed rooms
with a flat floor, as well as for experiments in the area of communication between people and robots.
The main difference between this robot and others is the use of a tablet computer.
It is light and capable of continuous operation for a long time without recharging the battery, which,
in combination with its high capacity, makes the tablet a very convenient basis for building robots.

Another feature of the tablet, its vertical screen, is used in this project, where the tablet is not only
the "main brain" but also the "face" of the robot. The ability of a robot to talk and express emotions
makes it more aesthetically attractive.

The robot was built by one person at home, so the wheelbase was bought. In addition,
a dual motor driver was installed, as well as an AVR microcontroller and some sensors with different functions.
The robot has a height of 130 cm and weighs about 10 kilograms. Its maximum speed can reach 3 km per
hour and the maximum battery duration is about 2 hours.

To detect obstacles, Vanessa has 3 bumpers, 4 infrared sensor switches, and an ultrasonic sonar.
In addition, the current of each motor is monitored during movement. If the robot gets stuck,
the movement is stopped.

The robot is able to distinguish between people and furniture or walls. For this purpose Vanessa uses a directional
infrared thermometer and a motion detector. The main task of the robot is to orientate freely in the room and always
know its position. To determine the distance traveled, the robot uses a pulse encoder, and to determine
direction, a digital compass. Just underneath the robot is a scanner of floor labels, which allows precise
positioning at a point marked with a special sign.

But the use of computer vision gives better results. For this task, an extra high-resolution camera
with the possibility of rotation in two planes is used. Thanks to it, and using the image processing software
RoboRealm, it became possible, for example, to use the separator lines on the tiled floor as a grid map when moving.
The use of the Navigator module is also a promising approach, which gives the possibility to recognize
and store routes based on visible landmarks.

The possibilities of computer vision are not limited to navigation functions. This robot can also memorize
faces of people and images of objects.

Of course, like any autonomous robot, Vanessa has a charging base for recharging the batteries. The charging
base is arranged simply, and the robot has two separate contact groups, one for each battery. The states
of the main 12-volt battery and the tablet battery are continuously monitored during operation. When the battery
charge is insufficient, the robot finds the charging base on its own, fully charges the battery, and then continues
with the previously programmed task.

The main mode of the robot is autonomous operation. The robot performs certain tasks in accordance
with a given schedule. For work performed in dimly lit places, a lamp and a laser pointer are provided.
Also, the robot can control home appliances using an infrared emitter.

At any time, you can switch to controlling the robot via the Internet. For this purpose, Skype is used.
The Skype API allows you not only to communicate with the robot by text messaging, but also to switch the video
from one camera to another, which provides additional comfort for the operator. Through the Internet you can
manage all of its functions, send it phrases to speak, trigger the reproduction of emotions, and more.

Tuning and testing the robot can be performed using the touch screen of the tablet or an infrared remote control.
The robot also understands some voice commands.

These types of robots can be used, for example, for house security, excursions, as a home assistant,
or just for having some fun.

Hardware:

Tablet PC THD PX1 (Atom N450 CPU, 160GB HD, 1GB RAM, 1024 x 600 10″ multitouch)
Battery: 12V, 7 AH
Wheelbase: HCR (Diameter of wheels 140 mm)
Dual motor driver: MD25 (modified)
Microcontroller: AVR-CRUMB168-USB
etc.

Software and technology:

HP InfoTech S.R.L. CodeVisionAVR
Microsoft Windows XP
Microsoft Visual Basic
Microsoft Agent
Microsoft Text-To-Speech Engine
Microsoft Speech Recognition Engine
Acapela group computer voices
Guile 3D Studio Vanessa MSAgent character
Skype SDK
Robo Realm
AVM Navigator module for Robo Realm

Credits:

Scorpio wishes to thank
the resource Roboforum.ru for support
and Dmitriy Yeremeyev for the provided AVM Navigator module.

EDV
09-19-2011, 12:53 AM
Hey EDV!
I finally got my hands on a Roomba robot that I could try with the Nav programs. I went through all of the items and tutorials. When I placed the robot in the NAV mode, it moved. All others did not move unless I used the arrows to train them. The Nav by map mode showed the progress, but I could not get it to move by clicking the left mouse button. When I clicked the left mouse button it did not do anything. I would like to make that work. Can it work on its own, or do I have to train it in one of the other modes?

I prepared a simple video tutorial, "Route training and navigation by map":


http://www.youtube.com/watch?v=qVz9iBazqug

Don't forget that you should use a servo for camera turning and use the variable NV_TURRET_BALANCE in this case. It is important for the initial localization in "Marker mode" and "Navigation by map" modes.


Thanks, that is a nice tutorial. I was wondering if we can have it work automatically, without a map, in an unfamiliar environment. Just obstacle recognition, driving anywhere. Sort of like your Quake 3 demo but in real life.

I am working in this direction (walking mode (http://forums.trossenrobotics.com/showthread.php?3510-Autonomous-robot-s-navigation&p=44205#post44205)) but it is not so easy to achieve...

EDV
10-02-2011, 02:25 PM
See more details (http://www.roborealm.com/forum/index.php?thread_id=4246#) about tuning the "Marker mode" and "Navigation by map" modes.


EDV
10-16-2011, 11:22 AM
This is a test of the new algorithm for AVM Navigator v0.7.3. The video demonstrates
how the robot tries to go back to a checkpoint from different positions along the learned route.


http://www.youtube.com/watch?v=wj-FKhdaU5A

EDV
10-24-2011, 12:32 AM
First, in the video, the robot received the command "go to the checkpoint". When the robot arrived at the checkpoint, I moved it back (several times) to different positions along the learned route. When the robot noticed the changes, it indicated that it had been displaced, because no commands were given by the robot to its motors, yet changes were seen in the input image.

Then the robot started looking around and localized its current position. Further, the robot calculated a path from the current position to the checkpoint and went there (and so forth).

EDV
11-30-2011, 02:26 AM
AVM Navigator v0.7.3 is released!

The Navigator package is updated, and you can download the next version, AVM Navigator v0.7.3, from your account link.

Changes:
- The new "Back to checkpoint" algorithm was added in "Navigation by map" mode.
http://www.youtube.com/watch?v=wj-FKhdaU5A

- A new "Watching mode" was also developed.
In this mode the robot can move toward the direction where motion was noticed.
http://www.youtube.com/watch?v=c1aAcOS6cAg

Overall usability was also improved.

By the way, I received a new video from a user who succeeded with "Navigation by map":
http://www.youtube.com/watch?v=214MwcHMsTQ

His robot video and photo:
http://www.youtube.com/watch?v=S7QRDSfQRps
http://roboforum.ru/download/file.php?id=22484&mode=view
http://roboforum.ru/download/file.php?id=22280&mode=view
http://roboforum.ru/download/file.php?id=22281&mode=view

I believe that you will also have success with visual navigation using the AVM Navigator module ;-)

EDV
12-02-2011, 01:00 PM
Yet another video from a user whose robot has an extremely high turn speed, but the AVM Navigator module could control the robot's navigation even under these difficult conditions!


http://www.youtube.com/watch?v=G7SB_jKAcyE

His robot video:


http://www.youtube.com/watch?v=FJCrLz08DaQ

EDV
01-19-2012, 12:45 PM
AVM now works appreciably faster after a recent algorithm update. It shows real-time performance in "Object recognition" mode (less than 40 ms per frame at a resolution of 960x720 pixels and 20 ms at 640x480 pixels) with approximately 500 associations inside the AVM search tree (tested on a computer with an Intel Core 2 Duo E6600 CPU).

So, I advise you to download the recent version of RoboRealm + AVM Navigator (http://www.roborealm.com/help/AVM_Navigator.php) for best results.

EDV
03-01-2012, 02:37 PM
Playing with the Twinky rover controlled by AVM Navigator:


http://www.youtube.com/watch?v=4tpwAvcmZf8


http://www.youtube.com/watch?v=pt2y7xkiTXo

EDV
03-07-2012, 04:59 PM
Twinky rover presentation:


http://www.youtube.com/watch?v=Fkjpma1oWAg

EDV
04-18-2012, 03:43 PM
Why is the total number of paths in navigate mode limited to 7?

Historically, the "Navigate mode" and "Nova gate mode" were developed first; they provide walking of the robot from gate to gate. A gate is an image (from the robot camera) that is associated with specific data inside the AVM tree. The gate data contains weights for the seven routes that indicate the importance of this gateway for each route. At the bottom of the screen there is a "horizon" indicator which shows the direction to adjust the robot's motion for further movement along the route. A gate's field is painted blue if the gate does not participate in this route (weight 0), and warmer colors (ending in yellow) show a gradation of "importance" of the gate in the current route.

So, why are there only 7 routes?

The size of the gate data inside an association (the associative base) is 24 bytes:


// Gate data structure that is kept in AVM
#define cRouteTotal 7
struct TGateData192U {
    ushort Weight[cRouteTotal];  // Weights of routes
    byte   Hit[cRouteTotal];     // Counter of hits to routes in this gate
    byte   Reserved1;            // Unused field
    byte   CheckpointNum;        // Checkpoint number
    byte   Reserved2;            // Unused field
};

And thus the number of routes is limited by the size of this structure.
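The 24-byte total checks out: 7 two-byte weights (14) + 7 hit counters (7) + 3 single bytes = 24. As a quick sanity check, the layout can be expressed with Python's struct module (assuming a packed, little-endian layout):

```python
import struct

# 7 ushort weights, 7 byte hit counters, then Reserved1, CheckpointNum, Reserved2
GATE_FORMAT = "<7H7B3B"
print(struct.calcsize(GATE_FORMAT))  # -> 24
```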

I left these old modes within AVM Navigator (http://www.roborealm.com/help/AVM_Navigator.php) just for compatibility with previous versions:


http://www.youtube.com/watch?v=xbCpthKrL0o


You should use a more advanced navigation solution, such as
"Navigation by map" with "Marker mode" for route recording:


http://www.youtube.com/watch?v=F3u0rTNBCuA


But you can use "Nova gate mode" for manual robot control using the arrow keys.

* How to get started

1. Open the RoboRealm dialog window.

2. Make sure that the "Camera" button is pressed in the RoboRealm dialog window,
and check the camera resolution (it must be 320x240 pixels).

3. Open the AVM Navigator dialog window (click on it in the video-processing pipeline)
and then switch to "Nova gate mode". Now you can control your robot with the arrow keys,
and you can also turn the robot camera with the "Delete" and "Page Down" keys (the "End" key
will set the camera to the front position). If the camera was turned, you have to press "End"
to realign it before continuing to move the robot.

...

Find out more
(http://www.roborealm.com/help/AVM_Navigator/avm_q3mod_help.html)
You can also train with the "AVM Quake 3 mod (http://www.roborealm.com/help/AVM_Navigator/Setup_avm_q3mod.exe)" to get acquainted with the "Marker mode"
and "Navigation by map" modes.

You should use the arrow keys for robot control in "Marker mode"
(this is important for route recording (http://www.roborealm.com/forum/index.php?thread_id=4253#)):


http://www.youtube.com/watch?v=qVz9iBazqug

EDV
05-04-2012, 12:12 PM
The AVM Navigator help page (http://www.roborealm.com/help/AVM_Navigator.php) was updated! :veryhappy:

adamsya
07-16-2012, 02:25 AM
Hi, I installed RoboRealm 2.46.0 after formatting and doing a fresh install of Windows 7.

But I cannot find the AVM Navigator module. In the old version I could find AVM Navigator in the plugin menu.

I opened my old AVM Navigator project but the AVM module didn't show up. Has AVM Navigator changed its name, or must it be installed separately?

adamsya
07-16-2012, 02:31 AM
By the way, can RoboRealm be installed on a Raspberry Pi (Debian OS)?

EDV
07-17-2012, 02:33 AM
I opened my old AVM Navigator project but the AVM module didn't show up. Has AVM Navigator changed its name, or must it be installed separately?

This answer was given on the RoboRealm forum (http://www.roborealm.com/forum/index.php?thread_id=4511#):

The AVM has not changed. You probably were using the trial version of the software which includes the AVM module for testing. After the trial period the AVM module has to be purchased in addition to RoboRealm as it is a 3rd party module (see purchase page). If you wish to add this module to your account please contact me via email.


By the way, can RoboRealm be installed on a Raspberry Pi (Debian OS)?

No, not currently. The Pi comes with a Unix-based OS which is not compatible with RoboRealm. It may be possible to install Windows on the Pi and run RoboRealm, but that is not something we have tried. Because of its 400 MHz clock rate it may also not have sufficient power to run any real-time video analysis.

We have been busy porting parts of RoboRealm to the BeagleBone, which at 750 MHz shows more promise. This has not yet been launched and will host a smaller/simpler version of RoboRealm.

EDV
07-31-2012, 12:31 AM
This is quite a difficult route that was passed by the robot
with the help of AVM Navigator (http://www.roborealm.com/help/AVM_Navigator.php) (route training and passing):


http://www.youtube.com/watch?v=1-w3lSLTnjM


Autonomous navigation view from outside:


http://www.youtube.com/watch?v=GD_g0q_I6NQ


Twinky rover and an apple (color tracking with RoboRealm):


http://www.youtube.com/watch?v=YBHYeuT51bA

EDV
10-08-2012, 05:11 AM
AVM Navigator v0.7.4.2 update

Changes:

- The indication drawing was moved to the ::Annotate method

http://www.youtube.com/watch?v=bATkgI3oP30

- A 3D marker of the robot's target position was added to the camera view

http://www.youtube.com/watch?v=CjRe7Kd7ZSw

...

See here (http://www.roborealm.com/forum/index.php?thread_id=4639#) for all other changes.

EDV
10-09-2012, 01:16 PM
Fun with AVM Navigator (http://www.youtube.com/watch?v=4uywp5TNrZk)

http://i1.ytimg.com/vi/4uywp5TNrZk/mqdefault.jpg (http://www.youtube.com/watch?v=4uywp5TNrZk)

It's a little demo of object recognition and learning from motion with the help of AVM Navigator.

All object rectangle coordinates are available in the RoboRealm pipeline via external variables:
NV_ARR_OBJ_RECT_X - top-left corner X coordinate of a recognized object
NV_ARR_OBJ_RECT_Y - top-left corner Y coordinate of a recognized object
NV_ARR_OBJ_RECT_W - width of a recognized object
NV_ARR_OBJ_RECT_H - height of a recognized object

So you can use it in your VBScript program.

See here (http://www.roborealm.com/help/AVM_Navigator.php) for more details.

EDV
10-11-2012, 02:32 AM
Hi,

I noticed in the YouTube video that you jiggle the object (your face or the book) slightly during the learning phase. Is this an advantage?

Cheers,

Alan

In fact, the AVM algorithm is not invariant to rotation, so during training you should show the object to be memorized in the AVM search tree under different angles for correct recognition later.

See also an example of using the Canny module as a background for AVM Navigator (http://www.youtube.com/watch?v=6oSJbwO-qp0):

http://i3.ytimg.com/vi/6oSJbwO-qp0/mqdefault.jpg
(http://www.youtube.com/watch?v=6oSJbwO-qp0)

EDV
10-16-2012, 11:42 AM
Tomy Omnibot & AVM Navigator (http://www.roborealm.com/forum/index.php?thread_id=4652#)


http://www.youtube.com/watch?v=iAE7NXQgwC0


http://www.youtube.com/watch?v=4s9bQi8Y828