That stuff looks really cool!
This is amazing progress. Great job!
Thank you Kurt and Kevin
This morning I spent a little time working with Gazebo. While the setup isn't 100% complete, I was able to have a little fun and pretend to kick a ball.
One thing that really stood out was the render quality in simulation. Quite nice on the eyes, I must say. I'm also really looking forward to working with the feature set offered by Gazebo. This is going to be fun!
Last edited by r3n33; 01-30-2016 at 11:58 AM.
Looks like you are having a lot of fun!
Just out of curiosity, are you running everything on the OS5, i.e. on the NUC, or are you running things like Gazebo on your main development machine?
There are times I think about mounting something like a NUC instead of the ODROIDs or RPi2 on the HROS1. Maybe something like the Jaguar One board, which just completed its Kickstarter and is supposed to start shipping next month. But that's a topic for a different thread.
Yes indeed. Up to this point the Intel NUC inside the robot has been my main development machine for Timmy. Sometimes I catch myself chatting and watching YouTube on the robot.
Everything from Sublime Text to Gazebo runs on Timmy with a monitor, keyboard, and mouse attached. In testing I have also launched RViz (visualization) on a remote machine, as well as VNC'd into the NUC, so I can use the robot without a bunch of wires attached to his back.
Thanks for the link to the Jaguar SBC; I hadn't seen it before.
Last edited by r3n33; 01-30-2016 at 12:07 PM.
Yesterday morning an exciting box arrived at the door containing an Intel RealSense F200 sensor. It differs from the RealSense sensor I was using previously in that it can see depth at short range. My first thought was to add the camera to Timmy to see how objects in his hands and at arm's reach would appear.
After getting the F200 working in Linux and with ROS, I placed it on his head and loaded my compliance demo. That let me position his hands in front of his face while keeping them rigid enough to hold his controller, and I received my first depth view from the robot's perspective.
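For anyone curious what that depth view looks like in code: in ROS the F200's depth frames arrive as `sensor_msgs/Image` messages (the exact topic name depends on your driver launch file; check `rostopic list`). The per-frame math is simple enough to sketch without ROS. Here is a hypothetical helper that finds the nearest point in a depth frame, assuming 16-bit depth in millimeters where 0 means "no reading":

```python
# Sketch of per-frame depth processing, independent of ROS.
# Assumes depth values are uint16 millimeters, 0 = invalid/no return.

def nearest_point_mm(depth_frame):
    """Return the smallest valid depth (mm) in a frame, or None if empty."""
    valid = [d for row in depth_frame for d in row if d > 0]
    return min(valid) if valid else None

# A tiny 3x3 "frame": hands held roughly 400 mm from the face-mounted camera.
frame = [
    [0,   612, 598],
    [405, 0,   887],
    [410, 399, 0],
]
print(nearest_point_mm(frame))  # → 399
```

In an actual rospy subscriber you would convert the incoming `Image` message with `cv_bridge` before running logic like this over the resulting array.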
And another item.
Seems fun.. so I tried a couple items next. Timmy took this opportunity to take a selfie with my phone.
Super cool, right? There sure are a lot of depth points for these small objects at this distance. So other than properly mounting the sensor and defining its position on the robot, the next logical step for me was to take a stab at perception. I quickly found ORK (Object Recognition Kitchen) and began studying. It didn't take long to understand the instructions and gain the confidence to work through the tutorials, and just like that I was detecting planar surfaces and trained objects (in this case the tutorial's coke can). I combined what I learned in the tutorials with the rest of the ROS project and captured a little video showing Timmy recognizing and (roughly) localizing a soda can.
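For reference, ORK's detection pipeline publishes its results as an `object_recognition_msgs/RecognizedObjectArray`, with each entry carrying a pose in the camera frame; the rough localization step comes down to reading that pose. The geometry itself is easy to sketch without ROS. A hypothetical example, assuming the standard ROS optical-frame convention (x right, y down, z forward) and made-up pose values:

```python
import math

def range_bearing(x, y, z):
    """Range (m) and horizontal bearing (deg) of a detected object,
    given its pose position in the camera's optical frame
    (x right, y down, z forward)."""
    rng = math.sqrt(x * x + y * y + z * z)
    bearing = math.degrees(math.atan2(x, z))  # positive = object to the right
    return rng, bearing

# Hypothetical detection: a soda can 0.1 m right of center, 0.5 m out.
rng, bearing = range_bearing(0.1, 0.0, 0.5)
print(round(rng, 3), round(bearing, 1))  # → 0.51 11.3
```

In the live system those x/y/z values would come out of the recognized object's `pose` field rather than being hard-coded.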
Thanks for letting me share. Today I've been trying to understand why I experience performance issues with MoveIt while running the object detection scripts in parallel, without a whole lot of luck. Boo. If it were easy, I guess everyone would be doing it.
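One generic mitigation when a perception pipeline starves a planner (not something tried in the thread, just a common pattern) is to process only every Nth detection frame instead of all of them. A throttle like this is trivial to drop into a callback:

```python
class FrameThrottle:
    """Pass every Nth call and drop the rest (N=1 passes everything)."""

    def __init__(self, every_n):
        self.every_n = every_n
        self.count = 0

    def allow(self):
        """Return True if this frame should be processed."""
        self.count += 1
        return (self.count - 1) % self.every_n == 0

# Process one detection frame out of every three.
throttle = FrameThrottle(3)
decisions = [throttle.allow() for _ in range(6)]
print(decisions)  # → [True, False, False, True, False, False]
```

At the topic level, ROS's `topic_tools throttle` node offers a similar rate limit without code changes, which can be a quick way to test whether detection load is actually what's starving MoveIt.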
Again, great stuff. So they actually had an F200 in stock!
The F200 and the R200 seem to come in and out of stock frequently, so you have to keep a close eye on availability to catch one. I suppose you could give them your email address and receive a notification as well, but I never trust those things.
Actually, earlier I did put my email address in. I received the notification a few weeks later while I was at my machine, went to the site: out of stock :lol:
Maybe you were too quick? It would be very interesting if demand were really that high, or the restocked quantity that low, when the notifications are pushed to subscribers.