X/Y positioning is simple relative to the details of feeders, vision, nozzle changers, part release, pickup fault detection, fast part changeover, tall parts, tube parts, etc. That is where things get challenging. All of those challenges start getting in the way of each other, and the machine gets crowded and complicated very quickly. It seems that most discussions focus on the placement of the parts, whereas I would focus on how to deal with delivery and pickup of parts, which is where the real challenges are.
Well, I'm not sure x/y positioning is simple when that positioning needs to be precise AND reliable AND repeatable on every machine you ship. And by "reliable" I do
NOT mean 1% errors like some folks in this forum seem to. I'm not exactly sure what I mean, but I'm guessing something like 0.01% (1 out of 10,000... and even that seems much too high to me).
HOWEVER, I distinguish between errors that make the PCB "wrong" and those that stop the process, BEEP loudly, and tell the operator what went wrong so he can fix the issue in a few seconds and hit a "continue" button. These kinds of error cases I'm more accepting of, because they don't lead to bad PCBs.
Along these lines, the more "real world vision" a system has, the more errors can be detected and corrected automatically (or, with loud beeps and operator notification and intervention, bypassed or compensated for without PCB errors).
For example, in the vision-heavy design I described previously, after the nozzle picks up a component, it moves to an up-looking "component camera". If the nozzle failed to pick up the component, the software will easily be able to tell it is looking at an empty nozzle (without component) rather than a component on the nozzle. So failures to pick up a component at the tape will be noticed. This also catches the case where components are picked up [but not securely enough] and thus fall off when the nozzle accelerates and decelerates to place itself over the up-looking "component camera". As long as there's no crucial mechanics or anything between the tape and [fixed] "component camera", the software can simply go back and grab another component and not worry so much about the lost component (unless it is expensive or otherwise problematic, in which case the machine can BEEP loudly to summon the operator).
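The empty-nozzle check described above can be sketched very simply. This is a hypothetical illustration, not the machine's actual algorithm: it assumes grayscale frames from the up-looking "component camera" and compares the current frame against a stored reference image of the bare nozzle inside a region of interest around the tip. The threshold value is invented, not tuned for real hardware.

```python
import numpy as np

# Hypothetical sketch: decide whether the up-looking "component camera"
# sees a part on the nozzle. Mean absolute difference between the current
# frame and a stored empty-nozzle reference, inside an ROI around the tip.
def pick_succeeded(frame, empty_ref, roi, threshold=12.0):
    """frame, empty_ref: 2-D uint8 grayscale arrays of equal shape.
    roi: (row0, row1, col0, col1) window around the nozzle tip."""
    r0, r1, c0, c1 = roi
    a = frame[r0:r1, c0:c1].astype(np.float64)
    b = empty_ref[r0:r1, c0:c1].astype(np.float64)
    # A large mean difference means something (a component) is on the nozzle.
    return float(np.mean(np.abs(a - b))) > threshold

# Synthetic demo: empty nozzle = flat background; picked part = dark rectangle.
empty = np.full((100, 100), 200, dtype=np.uint8)
with_part = empty.copy()
with_part[40:60, 40:60] = 30          # dark component inside the ROI

roi = (30, 70, 30, 70)
print(pick_succeeded(with_part, empty, roi))   # True  -> part present
print(pick_succeeded(empty, empty, roi))       # False -> mis-pick, go back and retry
```

On a False result the machine would simply return to the tape and grab another component, exactly as described above.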
The same principle applies to components placed on the PCB. After a component is placed, the down-looking "nozzle camera" can move over the component and make sure the component is on the PCB in the position and orientation the component belongs (and didn't fall off the nozzle before it got over the final location (or something)). Though the software can't be sure the component is precisely where it should be, it can certainly detect significantly offset or rotated components (due to component sliding or rotating on the nozzle tip due to bad vacuum seal between nozzle and component or other issue).
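The post-placement check could work the same way. Here is a hedged sketch (all names and tolerances are invented): threshold the down-looking "nozzle camera" image, compute the component's centroid and its orientation from standard image moments, and flag the placement if either is too far from where the component was supposed to go.

```python
import numpy as np

# Hypothetical post-placement check: find the placed component's centroid
# and orientation from image moments, and compare against the intended
# position and rotation. Tolerances are illustrative only.
def check_placement(img, expected_xy, expected_deg,
                    dark_thresh=100, max_offset_px=5.0, max_rot_deg=10.0):
    ys, xs = np.nonzero(img < dark_thresh)      # component pixels (dark on light PCB)
    if len(xs) == 0:
        return False                            # nothing there: part fell off en route
    cx, cy = xs.mean(), ys.mean()               # centroid
    # Orientation from central second moments (standard image-moment formula).
    mu20 = np.mean((xs - cx) ** 2)
    mu02 = np.mean((ys - cy) ** 2)
    mu11 = np.mean((xs - cx) * (ys - cy))
    angle = 0.5 * np.degrees(np.arctan2(2 * mu11, mu20 - mu02))
    dx, dy = cx - expected_xy[0], cy - expected_xy[1]
    offset_ok = (dx * dx + dy * dy) ** 0.5 <= max_offset_px
    rot_ok = abs(angle - expected_deg) <= max_rot_deg
    return offset_ok and rot_ok

# Synthetic frame: a wide dark rectangle centred at (50, 50), unrotated.
frame = np.full((100, 100), 220, dtype=np.uint8)
frame[45:56, 35:66] = 40
print(check_placement(frame, (50.0, 50.0), 0.0))    # True: on target
print(check_placement(frame, (70.0, 50.0), 0.0))    # False: 20 px off
```

As noted above, this can't prove the part is *precisely* placed, but it cheaply catches gross offsets, rotations, and missing parts.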
I don't have enough experience to anticipate every possible error case, but clearly vision approaches can be made able to detect errors and problems that systems without [as much] vision inherently cannot detect.
The PCB has fiducials and sits perfectly still the whole time. In a practical sense, there is no concern for vision focus, flex, or temperature on the placement end of the process.
You may need to kick my butt... or perhaps my head... if I've become too enamored or impressed with the "PCB feeder mechanism" (or whatever it is called) on the neoden4. The one that moves one PCB out of the working area and into a staging area (for operator pickup and inspection, or potentially directly onto the reflow-oven conveyor)... and then moves another empty PCB to where components can be placed on it.
Actually, having said that "out loud" (so to speak), I realize that I prefer not to support moving PCBs directly to the reflow-oven. To do that implies a level of confidence that even I don't hope to achieve... like no errors of any kind on 99.999% of PCBs. I think anything we hope to achieve here should be inspected by a vision system AND a human being before it is allowed to move on to the reflow-oven [conveyor].
Actually, having said that "out loud" (so to speak), it may be feasible for a very, very, very advanced and heavily developed vision system to automatically inspect the PCBs. On first, second and third thought this seems absurd for our machine. But on fourth thought, this is not completely beyond what
MIGHT be practical
EVENTUALLY. The reason is, a library of dozens if not hundreds of images of that same PCB design that were later verified as FULLY FUNCTIONAL == ERROR FREE could be leveraged into a fairly reliable vision-driven error-detection process.
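To make the "leverage many verified-good images" idea concrete, here is a deliberately naive sketch. Everything here is an assumption for illustration: it builds a per-pixel mean-and-spread model from boards later verified as good, then flags a new board when too many pixels fall outside that envelope. A real system would also need image registration, lighting normalization, and far more care.

```python
import numpy as np

# Hypothetical "someday" inspection sketch: per-pixel statistical model
# from N verified-good board images; flag boards that deviate too much.
def build_model(good_images):
    stack = np.stack([im.astype(np.float64) for im in good_images])
    return stack.mean(axis=0), stack.std(axis=0) + 1.0   # +1 avoids zero spread

def board_passes(img, mean, spread, k=4.0, max_bad_fraction=0.01):
    deviation = np.abs(img.astype(np.float64) - mean) / spread
    bad_fraction = np.mean(deviation > k)     # fraction of out-of-envelope pixels
    return bad_fraction <= max_bad_fraction

# Synthetic demo: "good" boards are a flat panel with mild sensor noise.
rng = np.random.default_rng(0)
goods = [np.clip(rng.normal(128, 2, (50, 50)), 0, 255) for _ in range(30)]
mean, spread = build_model(goods)

ok_board = np.clip(rng.normal(128, 2, (50, 50)), 0, 255)
missing_part = ok_board.copy()
missing_part[10:20, 10:20] = 255          # bright patch where a part should be

print(board_passes(ok_board, mean, spread))       # True
print(board_passes(missing_part, mean, spread))   # False
```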
Having said all that, what I mean to say is: we almost certainly won't get to that point by the time the machine is completed and on the market, but we could possibly hope to achieve it "someday", and should therefore probably not design such capabilities OUT of the machine (that is, design the machine in such a way that "PCB feeders" cannot be supported in the future).
Maybe you or others have comments about that.
The feeders, on the other hand, have a much more challenging task: moving the tapes to the exact position and picking up the parts without them flipping on end or diagonally. The pickup part of the effort is where I have all of my problems, not putting the parts down. Most (if not all) high-end machines put an enormous effort into the feeders to reduce the mis-picks to a very low number. This effort makes the feeders very expensive and sophisticated. In my case (and for many other businesses like mine) we need a lot of feeders on the machine, so saving money here is a way to drastically reduce the overall solution cost.
Unless you tell me otherwise, 99.9% of these problems occur with fairly tiny components (0805 and smaller). True or False?
You see, one of the reasons I become even more sold on the vision approaches I've been advocating is precisely these kinds of feeder issues. As I already described not far above in this message, the vision system I described way back on page 1 or page 2 of this topic (the message that lists the several steps in process A, B, C, D, E, F) makes it possible to at least DETECT mis-picked and dropped components (and take corrective actions), and perhaps offers more opportunities to remove the stringent burdens and requirements on feeders that we haven't even recognized yet. For sure, the approach I described in that old message can assure the center of the nozzle tip comes down
PRECISELY on the center of [tiny] components. I have to assume the main reason components tip over and such is that the center of the nozzle is not quite exactly over the center of the component. True?
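The "come down PRECISELY on the center" step can be sketched as a centroid-based correction. This is a hypothetical illustration, with an assumed pixels-per-mm calibration constant: before the nozzle descends, a down-looking camera frames the component in its tape pocket, and the component's centroid gives the X/Y correction so the nozzle targets the component's center rather than the pocket's nominal center.

```python
import numpy as np

# Hypothetical vision-guided pick centring. px_per_mm is an assumed
# camera calibration constant; the component is dark on a light pocket.
def pick_correction_mm(img, px_per_mm=40.0, dark_thresh=100):
    """Return (dx_mm, dy_mm) to move the nozzle from the image centre
    to the component's centroid."""
    ys, xs = np.nonzero(img < dark_thresh)
    if len(xs) == 0:
        raise ValueError("no component visible in pocket")
    cy, cx = ys.mean(), xs.mean()
    h, w = img.shape
    return (cx - (w - 1) / 2) / px_per_mm, (cy - (h - 1) / 2) / px_per_mm

# Synthetic 81x81 px pocket image; component shifted +6 px in x, -2 px in y.
img = np.full((81, 81), 230, dtype=np.uint8)
img[28:49, 38:55] = 20          # small dark rectangle, off-centre
dx, dy = pick_correction_mm(img)
print(dx, dy)   # 0.15 -0.05  (6 px and -2 px at 40 px/mm)
```

The gantry would add (dx, dy) to the nominal pick position before lowering the nozzle, so the tip lands on the component's center even when the part has shifted inside the pocket.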
Also, feeder banks would be on my ideal machine, so that I can be loading a job of 40 feeders while the machine is using 40 different feeders. When it is time to switch, 2 banks of 20 feeders are swapped and ready in a minute, instead of loading each feeder one by one. Each feeder should have a QR code or bar code that allows the machine to identify it and associate the part that was loaded. The code should be visible to the placement vision camera, so it adds next to no cost at all. The software (including offline software for setup) is where the codes get matched with the parts.
I very much agree with everything you say here about feeders and feeder banks, and swapping feeders and feeder-banks while the machine is placing components on PCBs from other feeders and feeder-banks.
And yes, we'd be CRAZY not to support something like the QR code. That's something the down-looking "nozzle camera" should be able to read. You should tell me what QR stands for, but I understand it to be a code that lets you know what components are on the installed reel. Probably not the # of the component in the BOM, but something more general, right?
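The bookkeeping side of "codes get matched with the parts" is straightforward. The sketch below is invented for illustration (all names, codes, and part numbers are examples): the code on each feeder identifies the FEEDER, the setup software records which reel was loaded into it, and at job start the machine scans every installed feeder and cross-checks the loaded parts against the job's BOM.

```python
# Hypothetical feeder-identification bookkeeping. The feeder code (QR or
# bar code) identifies the feeder; the registry records what was loaded.

# What the operator recorded at load time: feeder code -> manufacturer part.
feeder_registry = {
    "FDR-0012": "RC0603FR-0710KL",    # 10k 0603 resistor
    "FDR-0047": "CL10B104KB8NNNC",    # 100n 0603 capacitor
}

# What the job needs: BOM designator -> manufacturer part.
job_bom = {"R1": "RC0603FR-0710KL", "C3": "CL10B104KB8NNNC", "U1": "ATMEGA328P-AU"}

def verify_setup(scanned_codes, registry, bom):
    """Return the BOM parts with no loaded feeder, so the machine can
    BEEP and tell the operator exactly what is missing."""
    loaded = {registry[c] for c in scanned_codes if c in registry}
    return sorted(p for p in bom.values() if p not in loaded)

missing = verify_setup(["FDR-0012", "FDR-0047"], feeder_registry, job_bom)
print(missing)   # ['ATMEGA328P-AU'] -> operator must load the MCU feeder
```

Keeping the feeder code general (feeder identity, not BOM line) is what makes bank-swapping painless: the same loaded feeder can serve any job whose BOM happens to need that part.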
Having said all this, we definitely must be careful that accidental bumping and forces imposed by the operator while changing feeders and feeder-banks don't screw up any ongoing operations or processes, and don't make them less precise somehow (by bending, twisting, jiggling, vibrating or otherwise altering the fixed relationships of all the parts of the machine that need to stay in fixed relationships to work properly). I worry about this a lot, because it is a typical kind of situation (from my experience) that can royally screw up a robotics device yet not be detected. Of course, I've never worked on a pick-and-place machine, but my past experiences with even sturdier devices than I imagine our machine will be raise a "caution flag" in front of me.
The placement speed is generally not a limitation in a small entry level environment - it is machine setup and parts management. If I could have 40-60 parts in the machine on banks and swap out the banks in a minute or two to run a totally different job - the machine would be gold. The feeders would have to be very clever to achieve the precision and the low-cost which would allow a working set and a standby set of feeders to swap super fast.
Yes, I'm willing to give up as much placement speed as we need to achieve other goals... unless it starts to get absurd (beyond 5x to 10x slower than low-end commercial machines that place 0201s and 0.50mm pitch).
To the rest of the above paragraph I say "damn straight". And I more-or-less replied-to this part elsewhere.
I have made arc-second accurate positioners and very repeatable mounting fixtures for imaging (optical) - This is where I would put most of my efforts. After a feeder/pickup scheme is developed - I would then move on to the XY gantry and other systems.
Sounds like you've worked with telescope systems before too! That's how I originally got into everything I do. I was totally hooked on space, astronomy, telescopes and related instrumentation by the time I was 8 years old. The pursuit of that interest led to everything else. Learning to program (my first program was an optical design and analysis program). Learning to design optics (with my program). Learning to develop microcomputers (to run my program and operate instrumentation, and later automate telescopes and instrumentation). Learning to write compilers to make my software faster. Learning to fabricate diffraction-limited optics (to build the optical systems I invented and designed). Learning interferometry to design and fabricate my own optical testing equipment. Learning photography and fancy darkroom techniques (to capture images of astronomical objects). Learning image processing (to improve and extract information from these photos)... and I'm just getting started on how far that process has taken me. It really is amazing to look back at that whole path, which I rarely do. I've spent enormous quantities of time at astronomical observatories, including a 7-year stint as sole occupant of a remote self-sufficient mountaintop observatory, where I got absolute oodles of work and projects done, and totally cemented my hermit tendencies forever.
Anyway, if you want to tell me a bit about what aspects of optical systems you got into, that would be cool.