PDA

View Full Version : [News/Announcement] Fast tracking system with two mirrors instead of a moving camera.



Dan_2013
07-24-2013, 08:18 PM
Just FYI, maybe it will be useful for somebody. The implementation doesn't look very difficult.


http://youtu.be/qn5YQVvW-hQ


This camera system can track very fast moving objects, keeping them in the center of the screen at all times. Currently under development by the Ishikawa Oku Lab. at the University of Tokyo, this latest version captures Full HD video and can be used outdoors.

http://www.diginfo.tv/v/13-0049-r-en.php

tician
07-24-2013, 10:25 PM
Mirrors have been used for many years. Nothing new other than the very high speed vision processing to control the small mirrors. Considering that they are probably using a newer version of the high-speed vision system used by this thing (https://www.youtube.com/watch?v=-KxjVlaLBmk) for all object tracking control inputs to the mirrors, it's not likely to be easily implemented by hobbyists. They've been working on this stuff (http://www.k2.t.u-tokyo.ac.jp/vision/ims/index-e.html) with lots of people and money since at least 2006.

Xevel
07-24-2013, 10:40 PM
The implementation doesn't look very difficult.

Sure, if you happen to have a camera with a massively parallel interface (which as far as I know only these guys have)... >_>

Dan_2013
07-25-2013, 04:03 AM
Mirrors have been used for many years. Nothing new other than the very high speed vision processing to control the small mirrors. Considering that they are probably using a newer version of the high-speed vision system used by this thing (https://www.youtube.com/watch?v=-KxjVlaLBmk) for all object tracking control inputs to the mirrors, it's not likely to be easily implemented by hobbyists. They've been working on this stuff (http://www.k2.t.u-tokyo.ac.jp/vision/ims/index-e.html) with lots of people and money since at least 2006.

Ok, thanks for the info. The principle of the solution is very nice in its simplicity. By the way, I've seen a similar scheme with mirrors in laser 3D printers that use photopolymers - almost the same mirrors steer the laser beam that draws the layers in the resin.

Dan_2013
07-25-2013, 04:09 AM
Sure, if you happen to have a camera with a massively parallel interface (which as far as I know only these guys have)... >_>

Yes, 1000 shots per second :) Plus additional optics for the reflected picture and a high-performance processor.

gromit
08-02-2013, 12:53 PM
Actually that's a pretty good idea for hobbyists. One of the problems I ran into doing object tracking with a webcam on a pan and tilt system was image blur, probably from using cheap hardware (servos & webcam). Keeping the camera stationary while mirrors move might improve that, although it would need more processing for tracking complex objects. Tracking a solid blob of pure color like in this photo (http://www.diginfo.tv/assets/13-0049-pics/2.jpg) isn't hard either way.
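
For anyone who wants to try it, here's a minimal sketch of that kind of solid-color blob tracking in Python with OpenCV. The webcam index and the HSV thresholds are placeholders you'd tune for your own target and lighting; the centroid is what you'd feed to the pan/tilt servos (or mirror drivers):

import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # assumed: a standard UVC webcam on index 0

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Threshold in HSV space to isolate the colored blob (example values for a roughly red target).
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 120, 80]), np.array([10, 255, 255]))

    # Centroid of the mask via image moments; this is the tracking output.
    m = cv2.moments(mask)
    if m["m00"] > 0:
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        cv2.circle(frame, (cx, cy), 5, (0, 255, 0), -1)

    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()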

jwatte
08-02-2013, 01:21 PM
Keeping the camera stationary while mirrors move might improve that

I doubt that, because the motion blur comes from the change in illumination of the CCD sensor, rather than actual physical motion of the camera. Moving a mirror causes the same change in illumination, and thus the same motion blur.
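
A back-of-the-envelope way to see it: the smear only depends on how far the image slides across the sensor during one exposure, and the same apparent angular motion gives the same smear whether the camera pans or a mirror steers the beam. A quick Python sketch with assumed example numbers (not specs of any real camera):

def blur_pixels(angular_rate_deg_s, exposure_s, h_fov_deg, h_pixels):
    """Image smear in pixels for a target sweeping across the field of view."""
    pixels_per_degree = h_pixels / h_fov_deg
    return angular_rate_deg_s * exposure_s * pixels_per_degree

# A target crossing at 90 deg/s, 1/30 s exposure, 60 deg lens, 640 px wide:
print(blur_pixels(90, 1/30, 60, 640))    # ~32 px of smear
# Same target with a 1/1000 s exposure:
print(blur_pixels(90, 1/1000, 60, 640))  # ~1 px of smear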

Speaking of which: I'm having a heck of a time finding particular webcams with specific features. For example, if I want a webcam with a very fast shutter speed, how would I find one? Or if angle of view is important (more is better), how would I compare various Logitechs vs various Microsofts vs various white label Chinese imports?

tician
08-02-2013, 03:51 PM
Very few webcams actually give many specifics of the sensor, which is unfortunate. If you buy one and take it apart, you might be able to identify the actual sensor used in the camera and get the specifics from the sensor manufacturer's datasheet. Commercial/industrial cameras usually include more details (like the specific sensor used in the camera), but cost lots and rarely have generic drivers. Either way, you can always strip it down to the bare sensor and mount a new lens on it (the NimbRo-OP uses the same Logitech webcam as the DARwIn-OP, but has a wide-angle lens replacing the original optics).

As to image blurring and tearing, using moving mirrors only permits the use of smaller motors (since they move a much smaller mass) and removes the risk of damaging the camera with the movement. Blurring and other visual distortions/artifacts can also be caused by cheap and/or poorly designed optics, but tearing is fixed only by using a better sensor. Far more important than shutter speed is shutter type: global versus rolling. Rolling shutter is the most common (cheapest to design and manufacture) and basically samples every pixel in the sensor in a sequential manner, even if all of the pixels are reset/blanked simultaneously. Global shutter blanks all pixels simultaneously and then samples all pixels by very quickly transferring the light intensity data into buffers (per pixel, per row, per column, etc.) that can be sampled at will without distorting the image (the methods of measuring, transferring, storing, and retrieving the intensity information depend on sensor technology and cost; CCD or CMOS, and designs vary greatly within each).
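
Just to put rough numbers on the rolling-shutter shear: each row is read at a slightly later time, so a vertical edge moving horizontally lands in a different column on every row. A short Python sketch, with the frame readout time and object speed as assumed example values:

def rolling_shutter_skew_px(object_speed_px_s, frame_readout_s, rows_spanned, total_rows):
    """Horizontal offset (in pixels) between the top and bottom of an object
    that spans `rows_spanned` rows, given the time to read the whole frame."""
    row_time = frame_readout_s / total_rows
    return object_speed_px_s * row_time * rows_spanned

# Object moving 2000 px/s, 1/30 s full-frame readout, spanning 200 of 480 rows:
print(rolling_shutter_skew_px(2000, 1/30, 200, 480))  # ~28 px of shear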

escott76
08-02-2013, 07:33 PM
Video in low light situations is always tricky, and indoors with artificial light is almost always considered "low light" when it comes to cameras, be they film or digital. A better camera or servos won't make nearly the difference in quality that a couple of spaced-out bright lights illuminating your subject will. Not always practical, but it's definitely a factor. More light means you can increase the shutter speed and/or decrease the aperture (given that the camera in question can change these), which will decrease motion blur and increase depth of field respectively.
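
The stop arithmetic behind that tradeoff, as a quick Python sketch with illustrative numbers only: every doubling of light is one stop, which you can spend on a faster shutter (less blur) or a smaller aperture (more depth of field).

import math

def exposure_value(f_number, shutter_s):
    """Standard EV definition (at ISO 100): EV = log2(N^2 / t)."""
    return math.log2(f_number**2 / shutter_s)

baseline = exposure_value(2.8, 1/60)    # dim indoor baseline
brighter = exposure_value(2.8, 1/250)   # same aperture, ~4x faster shutter
print(brighter - baseline)              # ~2.1 -> needs ~2 stops (~4x) more light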

shimniok
08-22-2013, 12:21 AM
Here I've been working on ideas for two low-end 30fps camera systems, thinking that would be the bomb. :rolleyes: Ok, not really. But anyway, I tried finding 60fps cameras and even that was tough. And lots of MCU horsepower is required too.

Centeye has a different sort of vision sensor that might work better with this sort of concept. Not quite sure how much "blur" one of those would get for fast moving objects.

Dan_2013
08-22-2013, 02:04 PM
My 5 cents. As far as I know, the blur appears because of a long exposure time - http://en.wikipedia.org/wiki/Shutter_speed. Modern sensors have an electronic shutter, and it can work very fast -

1/4000 s: The fastest speed available in consumer (http://en.wikipedia.org/wiki/Consumer) SLR cameras (as of 2009); also the fastest speed available in any leaf shutter (http://en.wikipedia.org/wiki/Leaf_shutter) camera (such as the Sony Cyber-shot DSC-RX1 (http://en.wikipedia.org/wiki/Sony_Cyber-shot_DSC-RX1)) (as of 2013). Used to take sharp photographs of fast subjects, such as athletes or vehicles, under good lighting conditions and with an ISO setting of up to 800.
Finer mechanisms also influence the appearance of blur (like the serial readout of each pixel of the sensor), but I think they are not significant in comparison with the shutter time. As I see it, the biggest problem is in the electronics - for example, camera controllers that can't output frames faster than 30 FPS. Maybe I'm wrong. But I think there is an idea with mirrors that could help use a few low-speed cameras as one high-speed camera. The idea is simple: two (for example) low-speed cameras are connected to an optical splitter (a semi-transparent mirror at 45 degrees to the light path, which splits it into two beams with 50% of the power each), so each camera receives that beam and processes it at its own hardware speed of 30 FPS, with a shutter speed of 1/250 s for example, while the other camera does the same but triggered half a frame period later. The most difficult part is synchronizing their operation and combining their data on a central computer so it can be processed as one video stream. It is also possible to split the source light with a cascade of mirrors onto more cameras, but the source light has to be strong enough. Maybe this will work.
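
Something like this, as a rough Python sketch of just the merging step - the camera capture is faked with synthetic timestamps, and the 50/50 splitter, the half-period trigger offset, and the frame labels are all assumptions to show how two 30 FPS streams interleave into one ~60 FPS sequence:

FRAME_PERIOD = 1 / 30          # each camera runs at 30 fps
OFFSET = FRAME_PERIOD / 2      # camera B trigger delayed by half a frame period

def fake_stream(start_offset, n_frames, label):
    """Stand-in for a real capture thread: (timestamp, frame_id) pairs."""
    return [(start_offset + i * FRAME_PERIOD, f"{label}{i}") for i in range(n_frames)]

cam_a = fake_stream(0.0, 5, "A")
cam_b = fake_stream(OFFSET, 5, "B")

# Merge by timestamp to get the combined ~60 fps sequence.
merged = sorted(cam_a + cam_b, key=lambda f: f[0])
for t, frame_id in merged:
    print(f"{t:.4f}s  {frame_id}")
# Output alternates A0, B0, A1, B1, ... spaced ~1/60 s apart.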

Or just make a 2-dimensional array of sensors with controllers, all looking at one reflected (projected) picture, but this will require distortion correction (with optics or with software) for every camera except the central one.

jwatte
08-23-2013, 12:30 PM
The problem with short shutter times is that your sensor needs to be a lot more sensitive, so you need more light, or get more noise.

Damn that Physics, always meddling :-)

Dan_2013
08-23-2013, 04:38 PM
The problem with short shutter times is that your sensor needs to be a lot more sensitive, so you need more light, or get more noise.

Come on, cheap non-professional cameras have shutter times of 1/1000 s to 1/2000 s as standard. And even that shutter time is sometimes too long for bright scenes like a sandy beach or a snow field on a sunny day.

Yes, of course electronic sensors work on a principle similar to chemical film - i.e. they accumulate charge from the light stream (CMOS and CCD chips). If there were a device that could measure the light stream in continuous mode, without accumulating charge, then no shutter would be needed at all, just a quick "reader" of the light signal. I don't know, probably this kind of device exists. But anyway, available electronic cameras provide good enough sensitivity at high shutter speeds even at moderate illumination. The problem is slow electronics, and it looks like a "parallel" solution could be used in this case :)


Damn that Physics, always meddling :-)

Sometimes helps, not very often :)

UPD. I was trying to find a scheme of lenses and prisms for optical keystone correction, but it looks like optical correction is not very widespread. I found this patent with schematics - http://www.google.com/patents/EP0984319B1?cl=en and this device - https://www.schneideroptics.com/ecommerce/CatalogSubCategoryDisplay.aspx?CID=1067 so optical correction is possible. The problem with it is the lens adjusting mechanism. But if you use an array of non-movable keystone correctors for predefined positions of the cameras and the projected image, it would be possible to build this array of fixed lenses as an injection-molded plastic optical part, and place it in front of the cameras, which would scan the reflected image and, with optical correction, receive the same picture (instead of splitting the light with optical splitters). Its intensity will also be lower than the direct light, but in this case you would be able to split it among more cameras with lower losses of light. The question is how much light goes to each camera after projecting the source light onto a surface (not a mirror, so all cameras get the same picture). I don't know exactly, but even if it is 10%, it should be possible to make a cheap enough high-speed camera running at 150-300 FPS.
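
For comparison with the optical route, the software version of keystone correction is just one perspective (homography) warp per frame. A minimal Python/OpenCV sketch - the corner coordinates and the blank input frame are made-up placeholders; in practice you would measure where a known rectangle lands in each off-axis camera's image and compute the mapping once at calibration time:

import cv2
import numpy as np

def correct_keystone(frame, observed_corners, width, height):
    """Warp the trapezoid seen by an off-axis camera back into a rectangle."""
    src = np.float32(observed_corners)  # corners of the projected image as seen by this camera
    dst = np.float32([[0, 0], [width, 0], [width, height], [0, height]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, H, (width, height))

# Example with placeholder data (top-left, top-right, bottom-right, bottom-left):
frame = np.zeros((480, 640, 3), dtype=np.uint8)       # stands in for a captured frame
corners = [[80, 40], [560, 70], [600, 450], [40, 420]]
flat = correct_keystone(frame, corners, 640, 480)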

tician
08-23-2013, 06:41 PM
Here I've been working on ideas for two low-end 30fps camera systems, thinking that would be the bomb. :rolleyes: Ok, not really. But anyway, I tried finding 60fps cameras and even that was tough. And lots of MCU horsepower is required too.
We bought a dozen of these (http://www.theimagingsource.com/en_US/products/oem-cameras/usb-cmos-color/dfm22buc03ml/) for use in the revamped Machine Vision course, partly because they were capable of 150fps at lower resolution (and thus useful for research work), but they have only Windows drivers and are quite expensive. There are lots of sensors capable of 60fps at relatively high resolution, but they are not always accessible through the device drivers. The RaspberryPi camera board can do 720p at 60fps, but the mmal software interface is still not that easy to use. I am pretty sure it could capture at even higher rates when shifted to lower resolution and configured correctly.
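
For reference, a minimal sketch of that 720p60 capture using the picamera Python bindings (which wrap mmal), assuming a firmware/camera-board combination that actually supports the mode; the resolution, framerate, and output filename are just example values:

import picamera

with picamera.PiCamera(resolution=(1280, 720), framerate=60) as camera:
    camera.start_recording('clip.h264')   # hardware-encoded H.264 to a file
    camera.wait_recording(10)             # record for 10 seconds
    camera.stop_recording()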


Centeye has a different sort of vision sensor that might work better with this sort of concept. Not quite sure how much "blur" one of those would get for fast moving objects.
Those are pretty cool little sensors. The newer versions use a 'logarithmic' photosensor design, so, as I understand it, they do not really have an exposure time or shutter speed and you are essentially reading the incident light intensity at the time of sampling (basically just an array of photodiodes without the additional capacitors used in common CCD and CMOS imaging sensor designs, so the only factors are the response times of the photodiodes and the array control transistors). Since there is no simultaneous sampling and buffered storage of intensity data, it behaves very much like an electronic rolling shutter, which sucks something fierce for high speed motion. Since the pixels in the array are always sampled in a serial/sequential manner, the blurring and shearing of the image will depend on how quickly you can sample the entire image with your microcontroller.
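
A quick estimate of how the MCU's sampling rate limits that kind of serially-read sensor: the whole array has to be walked pixel by pixel, so the scan time sets both the maximum frame rate and the effective rolling-shutter shear. The array size and ADC throughput below are assumed example figures, not Centeye specs:

def scan_stats(rows, cols, adc_samples_per_s, object_speed_px_s):
    scan_time = rows * cols / adc_samples_per_s       # time to read one full frame
    max_fps = 1.0 / scan_time
    shear_px = object_speed_px_s * scan_time          # top-to-bottom image shear
    return scan_time, max_fps, shear_px

# 112x112 array read through a 200 ksps ADC, object moving 500 px/s:
t, fps, shear = scan_stats(112, 112, 200_000, 500)
print(f"scan time {t*1000:.1f} ms, ~{fps:.0f} fps, shear ~{shear:.1f} px")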

They do offer custom vision chips that have more onboard processing, which could probably rectify all of those issues with the right amount of cash thrown at it.