... continued from above ...
I definitely prefer DC servos for many reasons, but as long as the system is closed-loop somehow, steppers are an alternative. Note that in the past I developed devices with high-count encoders that I made by copying encoder disks onto super-high-resolution microfilm and building the encoders into the mechanics themselves. That's how I got around the expense of high-count encoders. Not sure the tradeoffs are the same today, but perhaps.
However, my hope is, we can leverage relatively inexpensive cameras into replacing expensive linear encoders (or rotary encoders, for that matter). I did that on my mechanical version a couple years ago, and I'm fairly confident we can figure out ways to do the same (or something analogous) with a device we design from scratch. This WILL require adding a new code path to OpenPnP if we adopt it (which I assume we would, though it's not strictly necessary), but that's not such a big deal. Incidentally, another fellow here and I already mentioned introductory ideas for how to replace expensive encoders and/or mechanics by means of [somewhat] novel application of cameras in the design. Maybe that "other fellow" was you (not sure).
Let me just point out one opportunity that addresses the exact problem you mention in your message... the pick-up nozzle doesn't come down precisely in the middle of the component. Well, that can easily be solved with a down-looking camera plus a slight change in technique. How so?
If the "downer camera" (forgive the invented term) attached to the moving nozzle assembly is at a fixed and exactly known x,y distance from the pick-up nozzle (and mounted close to it), then the machine can move the camera over the component first and find the exact position [and rotation] of the center of the component still in the tape. For visualization purposes, imagine it then moves the exact center of the camera's field directly over the center of the component. The exact center of the nozzle can then be placed over the exact center of the component simply by moving the nozzle assembly the exact x,y distance we know separates the camera from the nozzle. Problem solved... at the expense of speed, because we need to perform this extra step for every [small] component the machine needs to place.
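To make the offset trick concrete, here's a minimal sketch. The function name, coordinate convention, and offset values are all hypothetical illustrations, not actual OpenPnP code:

```python
# Sketch of the "downer camera" pick correction. Assumes the
# camera-to-nozzle offset was calibrated beforehand; names and
# numbers here are made up for illustration.

CAM_TO_NOZZLE_MM = (34.500, -2.125)  # fixed, exactly known x,y offset

def corrected_pick_position(part_center_mm):
    """Given the part center found by vision (machine coordinates of
    the camera's field center), return where to send the head so the
    NOZZLE lands on that center."""
    cx, cy = part_center_mm
    dx, dy = CAM_TO_NOZZLE_MM
    # Moving the head by the known camera-to-nozzle offset puts the
    # nozzle exactly where the camera center just was.
    return (cx + dx, cy + dy)
```

The whole scheme reduces to one vector addition per pick; all the difficulty lives in calibrating that offset accurately and keeping it stable.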
But... what about the precision of x,y motion we will need? Well, one answer is to do what I did before... create super-cheap 1~2 micron linear encoder scales on microfilm. But there is another way that also takes advantage of the characteristics of cameras. For visualization purposes, assume the camera has no "field distortion", meaning pixel offsets from the exact center of the field have a linear relationship to distance on the PCB. In other words, a point on the PCB imaged 2000 pixels from the exact center of the field is precisely twice as far from the point at field center as a point imaged 1000 pixels from center. Hopefully you intuitively know what "field distortion" means so you don't need to untangle my extremely clumsy description!
BTW, this does not actually need to be true (we only need to know what the field distortion is for the camera and lens), but let's make this assumption for visualization and conceptualization.
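Since "we only need to know what the field distortion is", here's a sketch of what undoing it could look like, assuming a simple one-coefficient radial polynomial model (the coefficient and scale below are invented example numbers; a real lens would be characterized during calibration):

```python
# Hypothetical correction for lens "field distortion", assuming a
# simple radial model: r_true = r * (1 + k1 * r^2), with k1 measured
# during calibration. Numbers below are illustrative only.

MM_PER_PIXEL = 0.025   # ideal scale at field center (example value)
K1 = 1.0e-9            # radial distortion coefficient, per pixel^2

def pixel_to_mm(px_from_center):
    """Map a pixel offset from field center to true distance in mm."""
    r = px_from_center
    r_corrected = r * (1.0 + K1 * r * r)   # undo mild distortion
    return r_corrected * MM_PER_PIXEL
```

With K1 = 0 this collapses to the "no field distortion" case from the text, where 2000 pixels is exactly twice the distance of 1000 pixels.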
My claim is, this can replace linear encoders with no loss of x,y position precision. How so? Well, to visualize the situation before I cut to the chase, assume we DO have linear scales on the machine, but instead of being inside mechanical housings along with LEDs, light sensors and quadrature masks, we just "glue" the scales to the x and y "rails" or shafts the nozzle assembly slides against as it moves in x and y. Now imagine we have an "x-axis upper camera" and "y-axis upper camera" attached to the moving "nozzle/nozzles assembly", pointing at these linear scales fixed to the x and y "rails" or shafts. If we slide the "nozzle/nozzles assembly" back and forth on the rails and look at the images these cameras send to a display monitor, we will see the lines on the linear encoder scales move back and forth as the assembly moves. If we were truly demented (and the motion wasn't too fast for cameras), we could write software to count the scale lines that pass, and even perform quadrature decoding on appropriately separated pixels.
Now that's a rather stupid idea so far, because the LEDs, light sensors and quadrature masks will probably cost no more and be no more hassle than these two cameras.
BUT... notice this. We don't need the linear scales at all !!!!!
How so? Well, let's now assume we simply place very narrow marks along those two x and y "rails" or shafts approximately (but not precisely) 4 inches apart. There are much better ways to accomplish this than scratch or paint marks, but we'll ignore that practical issue. Essentially what we've done is remove 3999 out of every 4000 marks on the scale. Except unlike a linear encoder scale, the marks won't be exactly 4 inches apart; they'll be 4 inches plus or minus something like 0.0100" apart. In other words, not at precise intervals... which makes them easy and cheap to produce, just not precisely spaced like an encoder scale.
So... what is this supposed to do for us? Well, think about it. Before one mark exits one end of our 4096-pixel-wide image, the next mark appears at the other end. Now, even though the marks are not separated by precise distances, we nonetheless KNOW how far apart they are (measured after assembly and before we test the machine). Since we know the precise separation of the marks, and the 4096 pixels of the image have a fixed relationship to distance along the x and y "rails" or shafts, when we watch any mark move from camera pixel to camera pixel we know precisely how far the "nozzle/nozzles assembly" has moved in x and y. And since the next mark always appears in the image before the previous mark vanishes, we can ALWAYS keep track of the exact position of the "nozzle/nozzles assembly"!
In other words, once we know how many microns on the x and y "rails" or shafts corresponds to 4000 pixels on these two camera images, we can keep track of the exact position of the "nozzle/nozzles assembly" as it moves by reference to these camera images!
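A minimal sketch of that readout, assuming the factory-measured mark positions and the camera scale are on file, and that software keeps track of which mark is currently in view (the handoff is unambiguous because the next mark enters before the last one exits). All names and numbers are illustrative:

```python
# Absolute position from sparse, imprecisely placed rail marks.
# Assumes: mark positions measured at the factory (mm), camera scale
# calibrated, and the currently visible mark tracked across handoffs.

MARK_POSITIONS_MM = [0.0, 101.48, 203.11, 304.76]  # example measured values
MM_PER_PIXEL = 0.0254                              # ~4" field / 4096 px

def axis_position(mark_index, mark_pixel, center_pixel=2048):
    """Position of the camera (hence the head) along the rail, given
    which mark is in view and the pixel column it is imaged at."""
    offset_mm = (mark_pixel - center_pixel) * MM_PER_PIXEL
    # If the mark is imaged to the right of field center, the head is
    # that far to the LEFT of the mark (sign convention is arbitrary).
    return MARK_POSITIONS_MM[mark_index] - offset_mm
```

Notice the marks being at sloppy 4"-ish intervals costs nothing: their measured positions are a lookup table, and all precision comes from the pixel measurement.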
Essentially what we need to do is establish that relationship... how many microns on the x and y axes corresponds to 4000 pixels on the camera images. We can do this in several ways, so I won't wear you out by mentioning the ways I've thought up so far. Maybe you have even better ideas for this.
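Purely to make that calibration step concrete (this is one conceivable way, not necessarily any of the ways the author has in mind): at the moment of handoff, two adjacent marks are in view at once, so their known factory-measured separation divided by their pixel separation directly yields the scale:

```python
# One possible microns-per-pixel calibration: while two adjacent
# rail marks are simultaneously visible near the edges of the frame,
# their factory-measured separation over their pixel separation IS
# the scale. Values in the test are illustrative.

def calibrate_mm_per_pixel(sep_mm, pixel_a, pixel_b):
    """sep_mm: factory-measured distance between two adjacent marks.
    pixel_a, pixel_b: columns where the two marks appear in the SAME
    frame, while both are visible."""
    return sep_mm / abs(pixel_b - pixel_a)
```

A nice property of this approach is that it can be repeated at every handoff during normal operation, so the scale can be continuously cross-checked rather than trusted from a one-time calibration.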
Do you see how that works? We now have the precision of super-duper precise linear encoders for the price of two cheap cameras. In other words, we have a fully closed-loop scheme. While we might (in some versions) need a precise jig at the factory to establish the distances between the marks, nothing expensive needs to ship with each machine!
BTW, there are several tweaks on this principle, including the fact that it might be cheaper, lighter, less bulky, and easier mechanically if the sensors in those cameras are 1D sensors instead of 2D sensors. In other words, they can be 4096 x 1 sensors instead of 4096 x 4096 sensors (or whatever). The example I gave generates roughly 0.001" AKA 25u (25 micron) resolution without quadrature tricks, and about 0.00025" AKA 6.35u resolution with quadrature and/or mark-versus-pixel estimation/interpolation. Personally I'd prefer to shoot for 1u, 2u or 4u resolution, which requires the marks be closer together and/or more pixels on the sensor (perhaps 8192 or 16384 pixels if linear sensors).
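The arithmetic behind those resolution figures, for the skeptical (taking 4 inches of rail over ~4000 usable pixels, and assuming a modest 4x sub-pixel interpolation factor as the example does; exact values come out to 25.4 and 6.35 microns):

```python
# Back-of-envelope resolution numbers for the 4"-per-field example.

INCH_TO_UM = 25400.0
field_in = 4.0        # rail length imaged across the sensor
pixels = 4000         # usable pixel count (~4096 sensor)

raw_res_um = field_in * INCH_TO_UM / pixels   # one pixel's worth
interp_res_um = raw_res_um / 4                # with 4x sub-pixel interpolation

# raw_res_um    -> ~25.4 um per pixel (the "0.001 inch" figure)
# interp_res_um -> ~6.35 um (the "0.00025 inch" figure)
# Reaching 1-4 um means a narrower field, closer marks, more pixels
# (8192/16384-pixel linear sensors), or harder interpolation.
```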
The feeders are another area where we probably need to "think far outside the box" and invent some better scheme. That may be made easier by the "downer camera" I described above, since components no longer need to be precisely located by the feeder. But hopefully we can do much better than that! For sure we need some way to keep feeders from greatly increasing the cost of real systems.
I have at least one idea about that, but I'm sure there are many others. Rather than wear you down now, we can discuss it in our next message if you want to continue this brainstorming. I'd like to support both desires in this topic: you want boatloads of feeders without excessive cost (which we should support), while other folks might be willing to change feeders after each part. PS: What I mean to suggest here is that the software places ALL instances of each component on the PCB, then beeps to tell the operator to install the next component reel. Though actually there should probably be two reel positions instead of one, so the machine can move on to the next component reel while the operator loads the subsequent reel into the other position (a classic "pipeline" approach).
PS: I don't like used equipment... too potentially problematic for me! Plus, most older machines simply aren't capable of placing 0201 and smaller components.
How much is my time worth? Hahaha. No idea (other than smart aleck responses). I guess my answer is this. I'm in a position to do this project if I want to (or just retire and live a frugal but comfortable life). So the real answer may be annoying, but truly is "if it seems worthwhile" (in "coolness" and "potential" at the very least). I am annoyed that so many aspects of the hardware development process have been taken over by huge corporations by means of "scurvy tricks". This project would defeat one of those... assuming we develop a sufficiently novel, cost effective and capable device.
Your turn!
FYI, the first two PCBs already exist: 8 layers, 4-mil traces, 0201s, 0.50mm BGAs/QFNs. One is 200mm square and the other 100mm square. I figure they're fairly representative of the boards that will follow in the next 3 or 4 years (at least).