Here's an old link some might find useful. Be sure to visit the sites indicated, as there's quite a bit of information. We've been doing this for a couple of decades now.
https://www.eevblog.com/forum/projects/diy-focus-stacking-for-macro-photography/
Best,
I wish I had known about you guys before I started this journey. I have fallen into quite a lot of traps doing this project. Until I saw your focus-stacked image of that corner, I hadn't really found an example of a "true to reality", real color die shot and thought I had a unique edge. My research must have been affected by confirmation bias. Between the questions I'm getting in this thread, finding your work, and so on, I feel compelled to tell the story so you can understand my motivations.
Approximately four years ago I got hold of a quite nice wafer prober: Mitutoyo optics and a rock-solid mechanical platform with motor control in XYZ. It had been sitting around for years, though, so dust had crept in and the grease had dried up and hardened. I spent a year learning microscopy, regreasing the mechanics and optimising the optical path. I knew that my edge wasn't really the objectives; they are NIR-optimised, so they have a slight disadvantage in the blue part of the spectrum, and they are long working distance, which means a less-than-optimal numerical aperture. The advantage I saw in the wafer prober was the mechanics: although old and manual, they made it possible to capture a large number of exposures controllably. I set out to capture "the highest resolution die shot ever taken"(tm).
When I started, I thought that capturing the images would be the hard part and that stitching them together would be easy, considering the variety of panorama software out there and that people make gigapixel panoramas of landscapes all the time. I did a couple of captures before I got the optical path optimised enough, and they all suffered from heavy vignetting, which gave the software a clue about how to stitch them; that meant it seemed to work, producing pseudo-automatic stitches of parts of the chip. But once I finally had the optical path optimised enough that vignetting was no longer a problem, the software started making mistakes. Most automatic stitching software can do fairly correct stitching of parts of the image, but it is nowhere near accurate or reliable enough to do a pixel-perfect stitch of something this size. By the time I started realising this I had already sunk several hundred hours into the project. I thought the images coming out of the project were amazing, but they always fell way short of what I wanted. I saw the potential, but I also saw a lot of flaws. So I persisted. The problem is that you have to go through the whole process, from raw-format conversion through to stitching, in order to see a final result. The cycle time from raw files to final images is long, and it's not until you see the final image that you know for sure what is wrong and how it will be perceived. So you start over. And over. And over.
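As an aside, for anyone attempting something similar: if you can't fix the vignetting in the optical path (which is what I eventually did), it can also be divided out in software with a flat-field reference frame before stitching. This is a minimal sketch of that idea, not my actual workflow; it assumes OpenCV and NumPy, and the file names are placeholders:

```python
# Minimal flat-field (vignetting) correction sketch.
# flat.tif is a defocused exposure of a uniform target taken through the
# same optical path; tile.tif is one raw mosaic tile. Placeholder names.
import cv2
import numpy as np

tile = cv2.imread("tile.tif", cv2.IMREAD_UNCHANGED).astype(np.float64)
flat = cv2.imread("flat.tif", cv2.IMREAD_UNCHANGED).astype(np.float64)

# Normalise the flat so its mean is 1.0, then divide it out:
# pixels the vignetting darkened get scaled back up.
flat /= flat.mean()
corrected = tile / np.maximum(flat, 1e-6)

# Clip back into range and save as 16-bit.
corrected = np.clip(corrected, 0, 65535).astype(np.uint16)
cv2.imwrite("tile_flat.tif", corrected)
```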
After trying Microsoft ICE, PTGui, Affinity Photo, Fiji (ImageJ), etc., I ended up using Hugin. It is free, open source and full featured. Once you have learned the slightly-less-than-intuitive interface, you have all the manual control you could ever need, and it is reliable enough for large projects. But the automatic algorithms make way too many mistakes with this material. Also, due to optical aberration, there is no perfect coordinate system: there is no way to _perfectly_ place all the images, it is always a slight compromise even in the best of circumstances. And the image is far too large to process all at once. I divided it into approximately 30 parts, stitched them in separate projects, and then stitched the 30 parts into four parts because of TIFF file format limitations (classic TIFF tops out at 4 GB). Then I used Photoshop to stitch the four parts into the final original. All of the parts have to be perfect, since a small error in one of them means a visible mismatch somewhere else in the image: angular errors amplify with distance. I must have gone through the process of raw file conversion, stitching parts, and then stitching the final image 20-40 times before I got a result that was as close to flawless as I could make it. Because so much of the CPU consists of repeating features, what you end up using for control points in many areas is not features of the CPU but specks of dust, scratches and the like. And you have to set them with pixel accuracy. The software can do statistics on how far off a control point is, but due to optical errors those statistics will never go to zero no matter what you do.
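For those curious what that looks like in batch form, this is roughly the skeleton of one sub-project driven through Hugin's standard command-line tools. The flags are the stock ones and the paths are placeholders; in practice cpfind's automatic control points were not reliable on this material, and most points had to be placed by hand in the GUI before optimising:

```python
# Rough sketch of one Hugin sub-project run from the command line.
# This only shows the batch skeleton; the real work was manual control
# point placement in the GUI. Paths are placeholders.
import glob
import subprocess

tiles = sorted(glob.glob("part01/*.tif"))
run = lambda *cmd: subprocess.run(cmd, check=True)

run("pto_gen", "-o", "part01.pto", *tiles)                     # create project
run("cpfind", "--multirow", "-o", "part01.pto", "part01.pto")  # find control points
run("cpclean", "-o", "part01.pto", "part01.pto")               # drop outliers
run("autooptimiser", "-a", "-m", "-l", "-s",
    "-o", "part01.pto", "part01.pto")                          # optimise geometry/photometrics
run("pano_modify", "--canvas=AUTO", "--crop=AUTO",
    "-o", "part01.pto", "part01.pto")                          # set output canvas
run("nona", "-m", "TIFF_m", "-o", "part01_", "part01.pto")     # remap tiles
run("enblend", "-o", "part01.tif", *glob.glob("part01_*.tif")) # blend seams
```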
I've discussed at length with people online whether I could have done something differently, but as far as I know the best way to avoid all that manual labour would be to keep track of the exact position of each exposure with encoders while taking them. That is the strategy I would choose if I were to do this again; a sketch of the idea follows below. I really think that if I can find the energy and the funds, I could eliminate the entire manual process described above. That would cut the cycle time from years down to a week of the microscope doing the work for me. And getting there would take less time than doing this whole process again, even with the experience I now have.
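To make the encoder idea concrete: if every exposure is tagged with its stage coordinates, each tile can be dropped straight onto the canvas at its recorded position, and stitching reduces to at most a small local refinement on the overlaps. The coordinates, file names and microns-per-pixel scale below are made up for illustration:

```python
# Sketch: placing tiles by recorded stage position instead of searching
# for control points. Assumes each exposure was logged as
# (x_um, y_um, filename); UM_PER_PX is the calibrated scale of the
# optical path. All numbers and names here are illustrative.
import cv2
import numpy as np

UM_PER_PX = 0.5          # microns per pixel, from calibration
exposures = [
    (0.0,   0.0,   "r0c0.tif"),
    (500.0, 0.0,   "r0c1.tif"),   # 500 um stage step between columns
    (0.0,   400.0, "r1c0.tif"),
    (500.0, 400.0, "r1c1.tif"),
]

tile0 = cv2.imread(exposures[0][2], cv2.IMREAD_UNCHANGED)
h, w = tile0.shape[:2]
# Canvas big enough for the farthest tile (keeps channel count if any).
max_x = max(int(x / UM_PER_PX) for x, _, _ in exposures) + w
max_y = max(int(y / UM_PER_PX) for _, y, _ in exposures) + h
canvas = np.zeros((max_y, max_x) + tile0.shape[2:], dtype=tile0.dtype)

for x_um, y_um, name in exposures:
    px, py = int(x_um / UM_PER_PX), int(y_um / UM_PER_PX)
    tile = cv2.imread(name, cv2.IMREAD_UNCHANGED)
    # With accurate encoders this direct paste is already close to
    # correct; sub-pixel cross-correlation on the overlaps could
    # remove what little error remains.
    canvas[py:py + tile.shape[0], px:px + tile.shape[1]] = tile

cv2.imwrite("mosaic.tif", canvas)
```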
So it's really the sunk cost fallacy. Once I had gotten deep enough into this, I really didn't want to give up. The images I was making were really cool, but not cool enough to be considered a really nice photo. So I kept trying. By the time the main photo was created I was totally spent in all sorts of ways. Unfortunately, this meant that the video (probably my main end goal) only got a couple of weeks, perhaps a month, of work, with me learning Blender and finding a method in it as I went along. I spent approximately two days making some music, and that was it. The video is therefore lacking in storytelling and not really as interesting as it should be. However, it is the best I can do under the circumstances. And I have yet to see an example of a photographic deep zoom similar to this. Of course someone has made one already somewhere, but I haven't seen it.
So yeah, there you have it: three years of work, 800 pseudo-reluctant views, some criticism and a few prints sold. That's it. Now I have to find a way to recover. I'm posting this here because I need to learn my lesson; getting, for instance, Mawyatt's perspective helps me move forward and analyze what I did wrong.
My motivation was not to reverse engineer; I am an electrical engineer by trade, but reverse engineering isn't something I would do for fun. I wanted to capture what the silicon looked like in a way that would show the overwhelming complexity of it. Something I actually think I succeeded in doing. The image looks stunning when printed in a large format. Because of the high resolution, it would be possible to print it at 4 by 6 meters and still maintain 300 DPI (that works out to roughly 47,000 by 71,000 pixels, over three gigapixels). Here is a photo of a 1 meter wide print: