some camera advice/findings

WayOutWest
Posts: 198
Joined: Thu Jul 16, 2015 12:18 am
Location: Washington State, USA

some camera advice/findings

Post by WayOutWest »

Hi, I've been working a lot with camera setups lately and thought I would share a bit.

I've been accessing the cameras directly using libUVC and this has taught me a few important things:

1. Each camera decides which combinations of {resolution, compression, framerate} it supports. The matrix may be sparse!
2. Low-quality cameras support only uncompressed frames, which quickly eat up USB bus bandwidth, even at 480 Mbit/sec, since the host controller doesn't allow the full 480 Mbit/sec to be allocated to isochronous transfers. Right now I can easily run four cameras (three at 640x480, one at 1600x1200, all at 30fps if desired) over a single USB 2.0 cable through the dragchain, so long as all of them use MJPEG compression. But every single camera must support MJPEG; if even one of them forces uncompressed frames it will lock all the others out of the bus (on Linux: usb_submit_urb returned -28). The numbers say this shouldn't happen (see the back-of-the-envelope figures after this list), so I think the low-quality cameras (see next point) are buggy and request far more bandwidth than they need.
3. Low-quality cameras support only one framerate, usually 30fps, which can be a real waste of USB bandwidth or CPU power (USB puts a very heavy burden on the host CPU).
4. Lower framerates give longer exposure times and therefore better image quality for a given amount of light. In particular, you want very low framerates on the downcam.
5. You definitely need to lock out the camera's auto-exposure and auto-framerate mechanisms to get consistent images; libUVC lets you do this, not sure if other APIs do.
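
Some back-of-the-envelope numbers for point 2 (rough figures, assuming the common uncompressed YUY2 format at 2 bytes/pixel): one uncompressed 640x480 stream at 30fps is 640 x 480 x 2 x 30 ≈ 18.4 MB/s ≈ 147 Mbit/s, and uncompressed 1600x1200 at 30fps would be ~922 Mbit/s -- more than the whole bus. The host controller only hands out something like 80% of the 480 Mbit/s for isochronous traffic, so one well-behaved uncompressed VGA camera should still leave room for several MJPEG streams (a webcam's MJPEG frames are typically around a tenth of the raw size). That's why I think the cheap cameras are requesting far more bandwidth than they actually use; the -28 from usb_submit_urb is -ENOSPC, the kernel saying the requested isochronous reservation doesn't fit.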


As I've mentioned before, my favorite camera by far is still the Andonstar originally recommended by @thereza. It does 1600x1200 with MJPEG compression and has a very high-quality focus knob mechanism. It seems to cost around $80-$100 in the US. Be careful, though, there are a LOT of cheap knockoffs! I bought one for $25 and found out it only supported uncompressed video at 30fps! This is totally useless since it basically hogs the entire USB bus to itself.

My second-favorite camera is the Logitech HD Webcam C270, which is not quite as good as the Andonstar, but it is insanely cheap at only $22 in the US. I bought four of them! If you unscrew the plastic cover you'll find that it actually has a focus lens inside that the user can't normally access! After adjusting this it's possible to get a very wide focus range, and the camera is very flat (after you rip off the plastic housing), so it makes a fantastic up-looking camera. You do need to disable the annoying green LED though, otherwise you'll see it in reflections on shiny parts (see the upper-right quadrant in the image below for an example). Having a good upcam matters for fine-pitch parts: you need a camera with pixel resolution much higher than the mechanical resolution of the XYZ axes to get <0.5 degree rotational accuracy (some rough numbers below).
[Attached image: DSCF1144.JPG]
And it does 1280x720 with MJPEG compression at 30fps -- definitely the least expensive camera that can do that, by a very wide margin. I apologize if somebody else mentioned this camera here first; I definitely remember first hearing about it on a PNP forum but I don't think it was this one (I could be wrong!). Finally, each camera has a unique USB iSerial number, so the software can robustly know which camera is which, even across reboots or hub topology changes.
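
To put a rough number on that rotation claim: if the software measures the part angle from two features separated by d pixels in the image, a 0.5 degree rotation moves one feature relative to the other by roughly d x sin(0.5°) ≈ d/115. So to see a 0.5 degree error as at least a pixel of displacement, the part needs to span something like 115 pixels or more in the upcam image (sub-pixel centroiding helps, but not by an order of magnitude).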

Now I have a four-camera setup, shown below.
[Attached image: DSCF1147.JPG]
The two standard cameras are:

- downcam: high-resolution Andonstar downcam (lower-right)
- upcam: Logitech C270 (upper-right)

To this I have added two other cameras:

- plancam: Logitech C270 (upper-left)
- needlecam: Logitech C270 (lower-left)

The needlecam is strictly for debugging and checking component placement; no vision operations ever happen through this camera. It's very helpful for debugging placement problems, and in the future I may decide to snap a photo of every placement to see whether misalignments are the result of misplacement or of parts shifting during reflow.

The plancam is mounted to the gantry head, on the very short piece of extrusion that runs between the front and back plates of the gantry head. It's mounted as high up as possible, giving it the widest possible field of view. On startup my software homes the machine and then quickly "scans" the head across the build area to collect a single huge stitched image of the entire workspace. This stitched image is then updated incrementally: whenever the head passes over a region, whatever part of it the plancam can see is updated. The giant stitched image is displayed in the UI, and the user can select various things (components, tapes, solderball reservoirs, etc.) by zooming and clicking on it without having to move the head.
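
The stitching itself is not fancy: the head position is known when each frame arrives, so the frame just gets pasted into the big canvas at the corresponding offset. Here is a minimal sketch of that paste step (using OpenCV's Java bindings; the scale factor, canvas size and class/method names are made up for illustration, not lifted from my actual code):

Code:

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;

public class PlanCamStitcher {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    // Illustrative numbers only: plancam scale at table height, and work-area size in mm.
    static final double PX_PER_MM = 4.0;
    static final int WORK_W_MM = 600, WORK_H_MM = 400;

    // One big BGR canvas covering the whole workspace, initially black.
    final Mat workspace = new Mat((int) (WORK_H_MM * PX_PER_MM), (int) (WORK_W_MM * PX_PER_MM),
            CvType.CV_8UC3, new Scalar(0, 0, 0));

    // Paste the latest plancam frame into the canvas. 'frame' is assumed to be already
    // undistorted and resized so that one pixel equals 1/PX_PER_MM mm; (headXmm, headYmm)
    // is the machine coordinate of the frame's top-left corner (assumed non-negative).
    void updatePatch(Mat frame, double headXmm, double headYmm) {
        int x = (int) Math.round(headXmm * PX_PER_MM);
        int y = (int) Math.round(headYmm * PX_PER_MM);

        // Clip so a frame near the far edge of the table doesn't fall off the canvas.
        int w = Math.min(frame.cols(), workspace.cols() - x);
        int h = Math.min(frame.rows(), workspace.rows() - y);
        if (w <= 0 || h <= 0) return;

        // Overwrite whatever was there: the newest view of a region always wins.
        frame.submat(0, h, 0, w).copyTo(workspace.submat(y, y + h, x, x + w));
    }
}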

The three head-mounted cameras (downcam, plancam, and needlecam) all run to a USB hub strapped to the back of the placement head, and through this they share a single USB 2.0 cable running down the dragchain.

The images you see are rendered live at full framerate using DirectFB. At 30fps on all four cameras using MJPEG, it takes almost 40% of one core of a very powerful 2.4 GHz Intel Xeon just to get the images off the cameras, decompressed, and onto the screen (although I can run the downcam at 1600x1200@30fps, in practice I use 5fps to get a longer exposure time). This is why I eventually gave up on having the CPU near the placer... now I use an eight-core rackmount server with an OpenCL GPGPU for image processing, but it's so loud that I have to put it in another room. Fortunately USB 2.0 (for control) and HDMI (for the image display to the user) can run pretty long distances using digital repeaters, although threading those wires through the walls was a real hassle. So now the liteplacer, screen, and trackball sit next to the user in one room, and all the wires run through the wall to the machine that actually does all the thinking.

I'm pretty happy with it. My only disappointment is that I have been totally unable to find any software that supports the USB UVC "take snapshot" command (VS_STILL_IMAGE_TRIGGER_CONTROL). Linux's V4L and libuvc definitely don't support it. Most libraries/applications instead turn on video streaming, grab one frame, then turn streaming off. The problem is that in streaming mode the camera is continuously sampling the pixels as they are read out in scan-order, so if the head is moving you'll get a warped image. The camera does this to maximize the exposure time per pixel: each pixel is exposed for a full frame-time, but the exposure times are staggered according to the scan-out. This makes sense for video, where maximum exposure time means best image quality for a given illumination level. But if the camera or scene is in rapid motion and you want a still image this is the wrong strategy, which is why the "snapshot" command exists. The "snapshot" command tells the camera to capture all pixels simultaneously, then read out the captured pixels. So you can get non-warped images from a moving head this way. But there doesn't seem to be any software that supports it, just lots of software that claims to but really uses the streaming commands behind your back :( Also the Logitech camera purportedly can take snapshots at 3x higher resolution than its video capture.
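
To give a feel for how bad the warping is: at 30fps the rows are read out over roughly the 33ms frame time, so a head moving at 50 mm/s during capture shifts the last rows by roughly 50 x 0.033 ≈ 1.7 mm relative to the first rows -- enormous compared to placement tolerances. With streaming-mode grabs you therefore have to come to a complete stop (and let vibration die down) before grabbing a frame, which is the dead time the snapshot command is meant to avoid.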
- Adam
mrandt
Posts: 407
Joined: Mon Apr 27, 2015 10:56 am
Location: Stuttgart, Germany

Re: some camera advice/findings

Post by mrandt »

Hey Adam,

thanks for sharing your insight.

Your findings regarding USB camera modes and data transfer are particularly interesting - I think I might achieve faster detection and placing by exchanging my downward camera once again... Will probably give it a try.

I am considering the "Supereyes B003+", which is advertised with a 2.0 MP CMOS sensor - more than enough. I suppose it supports MJPEG, but I can order it and give it a try - and return it if it does not.
www.amazon.de/dp/B0066H40IO/

They also have the B005, but despite the higher number it seems to have only a 0.3 MP CMOS sensor, which is probably too low a resolution - and, judging from my other experiments with borescope cameras, probably also has worse sensitivity to light:
http://www.amazon.de/dp/B0066HK96Q/

For the up camera, I use the C270 you are referring to and found it to be near perfect. The biggest improvement came with proper lighting; I found that monochromatic red works way better than white for the upcam.

Check it out here:
http://liteplacer.com/phpBB/viewtopic.p ... rt=10#p212

In addition to modifying hardware, I think camera control in the LitePlacer software should be improved. Check out the following issue on GitHub (it was closed once Juha implemented at least the correct zoom behaviour):
https://github.com/jkuusama/LitePlacer-DEV/issues/11

I provided a rather comprehensive writeup there regarding resolution settings and physical resolution vs. viewport (also helpful if the camera lens distorts the image near the edges).

Could you add a description of the settings you consider necessary, like exposure lock, framerate setting etc.? Then we could ask Juha to reopen that issue and consider it for the backlog.

Thanks and regards
Malte
WayOutWest
Posts: 198
Joined: Thu Jul 16, 2015 12:18 am
Location: Washington State, USA

Re: some camera advice/findings

Post by WayOutWest »

mrandt wrote: I am considering the "Supereyes B003+", which is advertised with a 2.0 MP CMOS sensor - more than enough. I suppose it supports MJPEG, but I can order it and give it a try - and return it if it does not.
http://www.amazon.de/dp/B0066H40IO/

They also have the B005, but despite the higher number it seems to have only a 0.3 MP CMOS sensor, which is probably too low a resolution - and, judging from my other experiments with borescope cameras, probably also has worse sensitivity to light:
http://www.amazon.de/dp/B0066HK96Q/
The B005 is the crappy camera I got that only does 30fps (no lower speeds) and no MJPEG. It is the worst webcam I have ever seen! I don't know about the B003+, but after the awful experience with the B005 I will not buy from them again.
mrandt wrote: For the up camera, I use the C270 you are referring to and found it to be near perfect. The biggest improvement came with proper lighting; I found that monochromatic red works way better than white for the upcam.
I've heard the advice to use red light before; do you know the reason for this? I thought it was a throwback from the days when all PCB soldermasks were always green (no other colors), so red light wouldn't illuminate the board as much. Is there some other reason why red is a "special" color?
mrandt wrote: Could you add a description of the settings you consider necessary, like exposure lock, framerate setting etc.? Then we could ask Juha to reopen that issue and consider it for the backlog.
I use libuvc. I wrote my own Java wrapper around it (and OpenCV and DirectFB too). Here's what the code looks like (including some commented-out alternatives not currently in use):

Code:

        LibUvcDevice needleCam = new LibUvcDevice(0x046d, 0x0825, "1B042C80");   // new needlecam                                   
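        // (downCam, upCam and planCam are LibUvcDevice instances created the same way,
        //  each with its own VID/PID/iSerial string; those lines are omitted from this snippet.)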

        downCam.setAeMode(1);          // 1=all-manual, 2=all-auto, 4=manual-exposure,auto-iris 8=auto-exposure,manual-iris         
        downCam.setExposureAbs(10);    // units = 100us                                                                             
        downCam.setAePriority(0);      // 0=force-framerate, 1=allow-varied-framerate                                               

        downCam.setIris(10);         // units = fstop*1000 (?)                                                                      
        downCam.setGain(0);
        downCam.startStream(true, 1600, 1200, 5);
        //downCam.startStream(true, 1280, 960, 10);                                                                                 
        //downCam.startStream(true, 800, 600, 30);                                                                                  
        //downCam.startStream(true, 320, 240, 5);                                                                                   

        upCam.setAeMode(1);
        upCam.setExposureAbs(10);
        upCam.setAePriority(0);
        upCam.setDigitalWindow(0, 0, 200, 600, 1, 1);
        upCam.startStream(true, 1280, 960, 5);

        planCam.setAeMode(8);
        planCam.startStream(true, 1280, 960, 5);

        needleCam.startStream(false, 640, 480, 30);
        ///needleCam.startStream(false, 320, 240, 30);                                                                              
        //needleCam.startStream(false, 160, 120, 30);                                                                               

        //planCam.enableAutoBlit(0, 0, 640+1, 480+1);                                                                               
        //upCam.enableAutoBlit(641, 0, 640+1, 480+1);                                                                               
        //downCam.enableAutoBlit(641, 481, 640+1, 480+1);                                                                           
        downCam.enableAutoBlit(1200, 0, 160, 120);

        //needleCam.enableAutoBlit(0, 481, 640+1, 480+1);                                                                           
- Adam
mrandt
Posts: 407
Joined: Mon Apr 27, 2015 10:56 am
Location: Stuttgart, Germany

Re: some camera advice/findings

Post by mrandt »

WayOutWest wrote:The B005 is the crappy camera I got that only does 30fps (no lower speeds) and no MJPEG. It is the worst webcam I have ever seen! I don't know about the B003+, but after the awful experience with the B005 I will not buy from them again.
Thanks, if "Supereyes" are in fact crappy cameras, I will not spend money on them.

I am currently using a FullHD webcam in a 3D printed enclosure. While the camera optics, sensor and compression are OK, the printed case is not stiff enough, so it can bend slightly and thus move the camera image in relation to the gantry - which of course affects precision.

I will try to find a source for the Andonstar camera recommended by thereza. Might have to buy from China.
WayOutWest wrote: I've heard the advice to use red light before; do you know the reason for this? I thought it was a throwback from the days when all PCB soldermasks were always green (no other colors), so red light wouldn't illuminate the board as much. Is there some other reason why red is a "special" color?
I am not an expert on optics and computer vision... From my limited understanding, the reason to use a monochromatic light source (like a red LED, about 660nm wavelength) is to reduce the effects of ambient light and reflections on the parts. Ideally, one would put a narrow-band optical filter in front of the camera optics and thus eliminate most wavelengths other than the expected one (the same as emitted by the LED). However, for me software filtering has also worked fine so far. Basically, having only red and black pixels in the picture reduces complexity and noise for needle tip, edge and lead detection - which is what the upcam is supposed to do.
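
If anyone wants to try the software route, the core of it is just a channel split and a threshold. A minimal sketch of the idea (using OpenCV's Java bindings; the threshold value is an arbitrary starting point to tune, not taken from my actual setup):

Code:

import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class RedOnlyFilter {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    // Keep only what the red LEDs illuminate: take the red channel of a BGR frame
    // and threshold it, giving the "red and black" picture described above.
    static Mat redMask(Mat bgr) {
        List<Mat> channels = new ArrayList<>();
        Core.split(bgr, channels);                 // OpenCV stores frames as B, G, R
        Mat mask = new Mat();
        Imgproc.threshold(channels.get(2), mask, 60, 255, Imgproc.THRESH_BINARY);
        return mask;                               // white = lit by the red LEDs, black = everything else
    }
}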

I think most other colours (e.g. green?) would work as well - I just chose red because I've seen it in many commercial computer vision systems. Also I guess many CMOS cameras seem quite sensitive to that wavelength.

By the way: I only use red light for the upward camera and found white to work better for downward camera, also for the human operator.
Picky
Posts: 16
Joined: Tue Mar 29, 2016 2:48 am
Location: Seattle, WA

Re: some camera advice/findings

Post by Picky »

Let me dig this thread up. I understand it's a shot in the dark, but still...

Adam and all - have you guys ever dealt with industrial USB 3.0 cameras? I mean the proper ones designed for modern machine vision?

I'm pondering the idea of ordering a new 5MP monochrome unit and trying a few unusual concepts for PnP applications. Something like this camera should be interesting to try, but it's a $750 toy: https://www.ptgrey.com/chameleon3-50-mp ... sonyimx264

I think I need 30+ frames per second at 5MP and 10-12 bit uncompressed. I already have a suitable lens waiting for it.

What I'd like to understand before ordering is whether DMA with USB 3.0 really frees up my CPU, so that I can do some real-time processing on smaller regions of interest for servo feedback.
-Kirill
Picky
Posts: 16
Joined: Tue Mar 29, 2016 2:48 am
Location: Seattle, WA

Re: some camera advice/findings

Post by Picky »

mrandt wrote:...
WayOutWest wrote: I've heard the advice to use red light before; do you know the reason for this? I thought it was a throwback from the days when all PCB soldermasks were always green (no other colors), so red light wouldn't illuminate the board as much. Is there some other reason why red is a "special" color?
I am not an expert on optics and computer vision... From my limited understanding, the reason to use a monochromatic light source (like a red LED, about 660nm wavelength) is to reduce the effects of ambient light and reflections on the parts. Ideally, one would put a narrow-band optical filter in front of the camera optics and thus eliminate most wavelengths other than the expected one (the same as emitted by the LED). However, for me software filtering has also worked fine so far. Basically, having only red and black pixels in the picture reduces complexity and noise for needle tip, edge and lead detection - which is what the upcam is supposed to do.

I think most other colours (e.g. green?) would work as well - I just chose red because I've seen it in many commercial computer vision systems. Also I guess many CMOS cameras seem quite sensitive to that wavelength.

By the way: I only use red light for the upward camera and found white to work better for downward camera, also for the human operator.
There are many theories on why red was the color of choice in machine vision. In my opinion green solder mask had very little to do with it.

I think the major influencing factors were (in order of importance):
1. First CCDs were mostly sensitive to red (decades ago)
2. First LEDs were implemented in red
3. Ambient lighting spectrum (fluorescent and sodium) was mostly green/yellow
4. Monochromatic light was used because it avoids chromatic aberration in the optics, improving resolution/contrast

Back to modern days. If I were to use color cameras for machine vision and I cared about resolution and signal/noise ratio, I would choose green illumination (525-550nm), because modern CMOS chips are made most sensitive to green to mimic the sensitivity of the human eye. For the same reason, there are twice as many green cells as red or blue cells on a color CCD/CMOS sensor in the Bayer pattern (google that). Broad-spectrum (white) illumination would be my second-best choice, and the most practical in simple applications, because it exercises all the cells in the chip (RGB) and improves the resulting resolution after "de-Bayer" interpolation.

If you use only red illumination with a color camera, you will effectively cut your camera's pixel count by 4x, because only 25% of the cells are sensitive to red - one in every other row and every other column of the Bayer pattern.
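
For reference: in the common RGGB arrangement, each 2x2 tile holds one red, two green and one blue photosite, repeated across the whole sensor - which is where the 25% / 4x figure comes from.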

Real men use monochrome cameras for a reason! [that's me talking myself into buying another expensive toy to play with]
-Kirill
WayOutWest
Posts: 198
Joined: Thu Jul 16, 2015 12:18 am
Location: Washington State, USA

Re: some camera advice/findings

Post by WayOutWest »

Picky wrote: I think the major influencing factors were (in order of importance):
1. First CCDs were mostly sensitive to red (decades ago)
2. First LEDs were implemented in red
3. Ambient lighting spectrum (fluorescent and sodium) was mostly green/yellow
4. Monochromatic light was used because it avoids chromatic aberration in the optics, improving resolution/contrast
Fascinating, thank you for sharing. This is the most sensible explanation I've heard yet.
- Adam
mrandt
Posts: 407
Joined: Mon Apr 27, 2015 10:56 am
Location: Stuttgart, Germany

Re: some camera advice/findings

Post by mrandt »

Hi Kirill,

thanks for the detailed explanation - I learned quite a lot from that and reading up on CCDs on the web afterwards :-)
Picky wrote:Back to modern days. If I were to use color cameras for machine vision and I cared about resolution and signal/noise ratio, I would choose green illumination (525-550nm), because modern CMOS chips are made most sensitive to green to mimic the sensitivity of the human eye. For the same reason, there are twice as many green cells as red or blue cells on a color CCD/CMOS sensor in the Bayer pattern (google that).
I believe you are right in terms of resolution; but given the HD CMOS cams most of us are using, resolution is not really our primary concern. Using red light helped to eliminate some of the influence of ambient light in my case. Green might not have the same effect, at least if you use LED or fluorescent light in your workplace.
Picky wrote:Broad-spectrum (white) illumination would be my second-best choice, and the most practical in simple applications, because it exercises all the cells in the chip (RGB) and improves the resulting resolution after "de-Bayer" interpolation.
That is what I am currently doing. I switched my red OSRAM LEDs for white ones. White light also has the advantage that you can apply color filters in software, e.g. to eliminate the green needle shades (JUKI nozzles) from the picture during processing.
Picky wrote:Real men use monochrome cameras for a reason!
Yeah, I can see the advantages - probably much higher sensitivity, less noise, better resolution and more FPS... Anyone tried using infrared?

On the other hand - cheap color CMOS cam paired with proper lighting seems to get the job done; so unless one has very specific requirements this would be more for the fun and learning experience :-P

Cheers
Malte
WayOutWest
Posts: 198
Joined: Thu Jul 16, 2015 12:18 am
Location: Washington State, USA

Re: some camera advice/findings

Post by WayOutWest »

mrandt wrote: White light also has the advantage that you can apply color filters in software, e.g. to eliminate the green needle shades (JUKI nozzles) from the picture during processing.
Hey Malte, have you had any luck with this? I've found it very difficult to do.

Although the Juki nozzle shade is very pure green it is also very contoured and curvy, so it's almost impossible to ensure that it is uniformly lit (and besides, lighting design should be focused on uniform illumination of the part, not the shade!). The end result is that there are inevitably parts of the shade that are pretty far away from 0x0000ff00 (purest green).

Maybe I'm just coding the chroma-key incorrectly.

The other annoying problem I'm dealing with is that the black tip of the nozzle is somewhat shiny, and light can bounce off of the outer edge of the very tip of the nozzle, showing up as a white circle that is very hard to distinguish from the part that it's holding. This is a problem whenever the tip of the nozzle is even slightly larger than the part (which is the case for the recommended nozzle-sizes for 0402 and 0603 components).
- Adam
vonnieda
Posts: 30
Joined: Sun Jan 03, 2016 7:05 pm

Re: some camera advice/findings

Post by vonnieda »

I've been having very good luck with this by using the HSV color space. I subtract pretty much everything between yellow and blue, which gets the vast majority of the nozzle, and then a threshold gets everything that was too dark to discern a color from. You can see an example here: https://www.youtube.com/watch?v=iUEP0bILAU0

In that video I also have a green paper disk above my nozzle, but I'm getting the same results when the nozzle body is visible, too.
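
Roughly what that looks like in code -- a sketch using OpenCV's Java bindings; the hue bounds and darkness cutoff below are illustrative values to tune, not the exact numbers I use:

Code:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class NozzleMask {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    // Returns a mask that is white where the part is and black where the nozzle was:
    // reject every pixel whose hue falls roughly between yellow and blue (the nozzle),
    // plus every pixel too dark to have a trustworthy hue, then invert.
    static Mat partOnlyMask(Mat bgr) {
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgr, hsv, Imgproc.COLOR_BGR2HSV);   // OpenCV hue range is 0..179

        Mat nozzleHues = new Mat();   // yellow through blue, with at least some saturation/brightness
        Core.inRange(hsv, new Scalar(25, 60, 46), new Scalar(130, 255, 255), nozzleHues);

        Mat tooDark = new Mat();      // anything too dark to trust its hue
        Core.inRange(hsv, new Scalar(0, 0, 0), new Scalar(179, 255, 45), tooDark);

        Mat reject = new Mat();
        Core.bitwise_or(nozzleHues, tooDark, reject);

        Mat keep = new Mat();
        Core.bitwise_not(reject, keep);
        return keep;
    }
}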

Jason