
Thread: Camera / Vision Processing (within the HR-OS1 Framework)

  1. #1

    Camera / Vision Processing (within the HR-OS1 Framework)

This afternoon I decided I wanted to test and experiment with the USB camera provided with the HR-OS1. Looking at the current projects in the repository, there wasn't anything utilizing the camera, so I took a peek at what was available in the original Darwin-OP framework and found a perfect example called head_tracking in the Linux/project/tutorial directory. I had recently worked on mjpg_streamer and the httpd within the OS1 framework to enable and improve the walk_tuner's webpage interface, so working with the camera was just one more small step to take.

At first things were pretty rough going, but I spent a few hours tweaking and now I'm satisfied enough to move on to something else. About midway through the process I took a video.



At the time of the video the tracking was pretty good but a little slow. You can see the web interface to head_tracking, the streaming view of the video, and the detection overlay. Not bad for a postage-stamp processor! Though the Edison is under a fairly heavy load while this is running. Some optimization is likely possible, but I don't think it's getting more than 15 fps as is, with one core tapped out.
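For anyone curious how this kind of tracking works under the hood: threshold the image for the ball color, find the centroid of the resulting blob, and feed the pixel error into a proportional controller on the pan/tilt servos. The actual head_tracking code is C++, so the sketch below is only a hypothetical Python illustration of the idea; the resolution and gains are made-up example numbers, not values from the framework.

```python
# Sketch of a color-blob pan/tilt tracker. Hypothetical Python port of the
# head_tracking idea; image size and gains are example values.

IMG_W, IMG_H = 320, 240        # assumed capture resolution
KP_PAN, KP_TILT = 0.05, 0.05   # proportional gains (degrees per pixel of error)

def blob_centroid(mask):
    """Centroid of all 'hit' pixels in a binary mask (rows of 0/1)."""
    xs = ys = n = 0
    for y, row in enumerate(mask):
        for x, hit in enumerate(row):
            if hit:
                xs += x
                ys += y
                n += 1
    if n == 0:
        return None
    return xs / n, ys / n

def track_step(mask, pan_deg, tilt_deg):
    """One control tick: nudge servo targets toward the blob centroid."""
    c = blob_centroid(mask)
    if c is None:
        return pan_deg, tilt_deg          # no ball in view; hold position
    cx, cy = c
    pan_deg += KP_PAN * (IMG_W / 2 - cx)  # blob left of center -> pan left
    tilt_deg += KP_TILT * (IMG_H / 2 - cy)  # blob above center -> tilt up
    return pan_deg, tilt_deg
```

In a real loop the mask would come from an HSV color threshold on each captured frame, and the returned angles would be written to the pan/tilt Dynamixels each tick.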

[Attachment: Screen Shot 2015-09-13 at 7.21.21 PM.jpg]

Given time I'll be pulling all my bug fixes and new features into the main Interbotix repository, but if you can't wait, I'm currently keeping the code in an experimentation branch: https://github.com/r3n33/HROS1-Frame...xperimentation
    01001001001000000100110001101111011101100110010100 10000001010010011011110110001001101111011101000111 0011

    My Instagram
    My YouTube

  2. #2

    Re: Camera / Vision Processing (within the HR-OS1 Framework)

    Sweet! I have been waiting for something like this. Seems like a waste having that camera and not using it for anything... Great work so far.

    Would this code work on the Raspberry Pi?

    DB

  3. #3

    Re: Camera / Vision Processing (within the HR-OS1 Framework)

Pretty cool! I will have to try it out. My assumption is that on the Edison you needed to add some small USB hub, as the Edison only has one USB port? You probably also printed yourself a holder for it?

    Great job!

DarKback - It probably should work; the best way to find out would be to give it a try.

    Kurt

P.S. - I think I will try putting the Edison in mine to see how that changes things talking to the Arbotix Pro. Or I wonder if I should just go ahead and try with the Odroid C1...

  4. #4

    Re: Camera / Vision Processing (within the HR-OS1 Framework)

I wonder why the Trossen guys have put out zero code for their camera? I have been looking for a different sensor altogether for that very reason. Thanks for your hard work. I'm embarrassed for not having looked into the Darwin-OP framework code myself to see what is there. On the Raspberry Pi I have been having the same frame-rate issue. That is why I have been experimenting with the XU3 and XU4 for some more processing power. The XU3 was looking good before it went out of production, and the XU4 is bloody power hungry, making the Edison and Raspberry Pi the better all-around choices.
    Last edited by LloydF; 09-14-2015 at 11:41 AM.

  5. #5

    Re: Camera / Vision Processing (within the HR-OS1 Framework)

    Quote Originally Posted by LloydF View Post
I wonder why the Trossen guys have put out zero code for their camera? I have been looking for a different sensor altogether for that very reason alone. Thanks for your hard work.
Because a single, cheap USB webcam is not very useful for most robotics applications. Soccer is just about the only application, because the ball is of a uniform size and color, so tracking the color is easy enough. Its distance from the camera can be estimated from the size of the blob in the image, and its location/bearing relative to the camera can be estimated from the blob's position in the image. Pretty much everything else requires some sort of depth sensor (structured light like the original Kinect; time-of-flight; stereo cameras; etc.).
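Those two estimates fall straight out of the pinhole camera model: distance from similar triangles on the known ball diameter, bearing from the blob's horizontal pixel offset. The focal length, ball size, and image width below are example numbers for illustration, not the HR-OS1 camera's actual calibration:

```python
import math

# Example pinhole-model numbers -- NOT actual HR-OS1 camera calibration.
FOCAL_PX = 300.0        # focal length expressed in pixels
BALL_DIAMETER_M = 0.10  # known real-world ball diameter
IMG_W = 320             # image width in pixels

def ball_distance(blob_width_px):
    """Similar triangles: Z = f * D / d, where d is the blob width in pixels."""
    return FOCAL_PX * BALL_DIAMETER_M / blob_width_px

def ball_bearing(blob_center_x):
    """Horizontal bearing (radians) of the blob relative to the optical axis."""
    return math.atan((blob_center_x - IMG_W / 2) / FOCAL_PX)
```

With these example numbers, a ball that appears 30 px wide would be estimated at about 1 m away; a blob dead-center in the image has zero bearing.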

    Since things like RoboCup prohibit active sensors and many eye-safe active sensors do not work very well in direct sunlight, there are not many options for reliably acquiring depth information except by producing disparity maps from multiple sensors viewing the same scene from different locations (e.g. stereo vision). Producing a decent disparity map requires accurately finding/matching a very large number of common elements in each image, which is not easy to do. Long story short, computer vision is still an area of intense research requiring lots of processing power.
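The depth recovery itself is the easy half: once a rectified stereo pair yields a disparity, depth is a one-liner. All the research effort goes into producing that disparity reliably. A sketch assuming an idealized rectified pair, with example focal length and baseline:

```python
def depth_from_disparity(disparity_px, focal_px=300.0, baseline_m=0.06):
    """Idealized rectified stereo: Z = f * B / disparity.
    Small disparities (distant objects) amplify matching error into
    large depth error, which is one reason dense stereo is hard."""
    if disparity_px <= 0:
        return float("inf")  # no match / point at infinity
    return focal_px * baseline_m / disparity_px
```

With a 6 cm baseline and 300 px focal length, an 18 px disparity corresponds to about 1 m of depth; halve the disparity and the depth doubles, but so does the sensitivity to a one-pixel matching mistake.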


    OT: The summer-fall transition this year was scary fast. Friday was 80s~90s and sunny. Saturday was 60s~70s and cloudy. Sunday was 60s~70s with sun creepy low in the sky. I do not like.
    Please pardon the pedantry... and the profanity... and the convoluted speech pattern...
    "You have failed me, Brain!"
    bleh

  6. #6

    Re: Camera / Vision Processing (within the HR-OS1 Framework)

If this little bot would play soccer, even badly, most of us would be very happy. Then I would need two or more of them. (Surely someone can see the dollar signs here.)
    Last edited by LloydF; 09-14-2015 at 12:33 PM.

  7. #7

    Re: Camera / Vision Processing (within the HR-OS1 Framework)

    Thanks all

Most likely the camera code will work on the Raspberry Pi just the same as on the Edison. I know Kurt has dabbled in testing the web interface I added to walk_tuner on his RPi and it worked as expected. This isn't too different.

I did have to add a USB hub so I could plug two things into the Edison (camera, arbotix). It was nice to find that external power on the hub wasn't required. At the moment all the cables are hanging off the robot; I haven't even cut the camera cable yet, so no fancy 3D-printed holder. Actually, I'm looking into mounting the hub. I don't think I'll be satisfied with anything external, but what I had on hand was pretty small. If I can't fit the hub inside, I'll have some considerations to make before going too much further.

I still have hopes the OS1 can play soccer well enough that I won't be embarrassed to show it off. Since starting on Chap I've improved my walk tune, added fall-over detection to ps3_demo, created pages for standing up from the stomach or back, and now the ability to detect a color blob. When my replacement gears arrive today I'll be back up and running, and the next thought is to make him get up, find a ball, and then walk to the ball's location. So we shall see how that goes, or if I get distracted with something else along the way.
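Get up, find a ball, walk to the ball is naturally a tiny state machine. A hypothetical sketch of the behavior loop (the state names and sensor inputs are made up for illustration, not real HR-OS1 framework APIs):

```python
# Hypothetical soccer behavior tick -- none of these hooks are framework APIs.
# Inputs: current state, fall detection flag, and ball perception results.
# Output: (next_state, action_description) for this control tick.

def soccer_step(state, fallen, ball_visible, ball_distance_m):
    if fallen:
        return "GET_UP", "run stand-up motion page"
    if state == "GET_UP":
        state = "FIND_BALL"           # back on our feet; resume searching
    if state == "FIND_BALL":
        if ball_visible:
            return "APPROACH", "walk toward ball bearing"
        return "FIND_BALL", "scan head left/right"
    if state == "APPROACH":
        if not ball_visible:
            return "FIND_BALL", "scan head left/right"
        if ball_distance_m < 0.2:     # example stop threshold, 20 cm
            return "AT_BALL", "stop walking"
        return "APPROACH", "walk toward ball bearing"
    return state, "idle"
```

The nice property of structuring it this way is that a fall at any point preempts everything, and losing sight of the ball mid-approach cleanly drops back to searching.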

  8. #8

    Re: Camera / Vision Processing (within the HR-OS1 Framework)

    One thing that I noticed about the current neck design is that it doesn't let the camera look down very far. For soccer the ball may get lost as it gets close to the robot.

    DB

  9. #9

    Re: Camera / Vision Processing (within the HR-OS1 Framework)

[Attachment: hopes_deleted.jpg]

You are right, the neck design doesn't allow the robot to see its feet.

    I had to do a test (my camera isn't mounted to spec but close enough). This is as close as a small object can sit in view:

[Attachment: IMG_2062.jpg]
[Attachment: IMG_2064.jpg]

    Before getting too discouraged I realized a bigger ball may work just as well (at least temporarily)...

[Attachment: IMG_2065.jpg]

  10. #10

    Re: Camera / Vision Processing (within the HR-OS1 Framework)

The "Ay can't see mah feet" issue can be easily solved by changing the bracket that mounts the camera to the tilt servo to include a fixed angle offset depending on needs (angled down for soccer; unmodified for general use; angled up for interacting with humans). IIRC, the linkage-based tilt mechanism is supposed to be much more stable than a tilt mechanism made from AX servos and stock Dynamixel brackets.
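The payoff of a fixed mount offset is easy to quantify: for a camera at height h tilted down by angle θ, the optical axis hits the floor at horizontal distance d = h / tan(θ), so every degree of extra downward offset pulls the robot's natural gaze point closer. The numbers below are guesses for illustration, not measured HR-OS1 geometry:

```python
import math

def ground_hit_distance(cam_height_m, tilt_down_deg):
    """Horizontal distance at which the optical axis meets the floor,
    for a camera at the given height tilted down by the given angle."""
    return cam_height_m / math.tan(math.radians(tilt_down_deg))

# Guessed example: 0.35 m camera height, 30 deg max downward servo tilt.
# Stock bracket:            axis hits the floor ~0.61 m out.
# With a +20 deg offset:    axis hits the floor ~0.29 m out.
```

So even a modest fixed offset roughly halves the closest comfortable viewing distance in this example, at the cost of having to tilt further up to look at people.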

Whenever I get my mitts on one, I might try sticking the pan servo in the torso between the shoulders and attaching the tilt servo to the top plate with a thrust bearing like the Interbotix arms, since I've already got the pin-roller thrust bearings and washers from spares I bought for Darsha's Kinect turret. Then make a really long C-bracket to put the tilt axis at the back of the neck while keeping the camera mounts above and/or in front of the servo's case, without losing too much of the servo's range of motion. Might also try a timing belt and pulleys to get the full range of motion with a solidly mounted pivot on top of the servo, for better rigidity than a long C-bracket.
