Just after the turn of the millennium I was involved in creating the Redlake HG-100K camera (Google it if you are curious), a super high-end camera capable of taking video at up to 100K FPS while being subjected to repeated 100G impacts. It too used C-mount lenses, which generally tore right off during extremely violent events, with no damage to the camera.
It was designed to serve two main markets: Being inside cars during crash tests (it had to survive being crushed), and being very, very close to weapons tests (it had to survive being literally blown away).
While none of the alpha builds could capture at full speed, the second beta camera could, and we found ourselves running around the lab trying to find something we could shoot at 100K FPS. We needed a small, extremely fast-moving target with tons of light.
We tried the conventional "pop the water balloon" test, but that gets boring at 2K FPS, and we couldn't come close to getting enough light past 10K FPS. We shot directly at a fluorescent bulb, but that got boring at 5K FPS, though we did get useful captures at 30K FPS.
But fully illuminated 100K captures eluded us. We considered aiming it straight at the sun, but elected not to melt our first working camera. We held a meeting to work through the issue, with a PowerPoint presentation to review all we had tried so far. At the second slide we all looked at each other, then grabbed the projector and ran into the lab, where we took it apart to expose the bulb.
We slapped on a tele-macro lens and immediately got absolutely gorgeous video of the arc wandering within the bulb. I ran it through some video analysis software to quantify how the arc volume, position, and velocity changed over time, generated a quick data overlay, then posted the video on our website.
The next morning we got a call from Japan asking for an on-site visit. The manufacturer of the projector we used was having optical path issues they had traced to the arc behavior, and they wanted to bring some prototype bulbs for us to image. Of course we said yes, and their larger-than-expected team arrived two days later. The Japanese scientists and executives were so impressed that they offered us an obscene amount of money to let them take one of the beta units home. I mean really obscene. Of course we said yes! Our very first production unit went to them so we could get the beta unit back.
One of the things we had learned with prior high-speed digital cameras was the importance of good IR filtering. We didn't have bright and cool LED lighting at the time, so we were using arc lamps, which generate a ton of IR. Most lenses are more than willing to focus IR right onto your sensor, which will promptly overheat and melt. We didn't let that happen, of course, since we had the foresight to build temperature sensors into our custom imager die.
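The protection logic itself could be dead simple once the die could report its own temperature. Here's a minimal sketch of that kind of housekeeping check, in C, with hypothetical names, a made-up threshold, and stubs standing in for the real sensor readout and shutter control (none of which are public):

    #include <stdio.h>

    /* Hypothetical threshold -- the real limit isn't public. */
    #define DIE_TEMP_LIMIT_C 85

    static int read_die_temp_c(void) {    /* stub for the on-die temp sensor */
        return 90;                        /* pretend the die is running hot */
    }

    static void shutter_close(void) {     /* stub for the shutter control path */
        puts("shutter closed");
    }

    /* Called periodically from the camera's housekeeping loop: if focused IR
       has pushed the die toward its limit, stop integrating before it melts. */
    static void thermal_guard_poll(void) {
        if (read_die_temp_c() >= DIE_TEMP_LIMIT_C) {
            shutter_close();
            puts("fault: imager over-temperature");
        }
    }

    int main(void) {
        thermal_guard_poll();
        return 0;
    }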
Yes, we had to design custom silicon for this beast, and we went to the top pixel designers on the planet (at the time) to handle our needs. Until then, all cameras above 1K FPS (including our own) used CCDs. We wanted to go with CMOS for several reasons, but doing so meant confronting many issues, the most important being pixel noise, shutter control, and readout. The sensor had many more readout channels than any other CMOS sensor of the day. It also had multiple shutter modes (including rolling and global).
The most critical decision was what foundry and feature size to use. We intentionally went with a prior-generation feature size at a foundry that had truly mastered it, which not only reduced our risks, but also gave us a far better sensor in the end.
We also had to consider what to put on top of the silicon. We needed microlenses to improve light gathering, and we wanted the option of a Bayer mask for color imaging (we got triple resolution and quadruple sensitivity in monochrome, but almost everyone wanted color).
Once the overall system design was finalized, my job became the design and implementation of the software control interfaces within the camera: The outward-facing interface (protocol/API) used by our GUI control app and by customers' industrial automation software, and the inward-facing interfaces to the imaging pipeline control FPGAs and other hardware subsystems such as the Ethernet controller, video encoder, and so on.
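In rough terms, every outward-facing call validated the request and then translated it into inward-facing register writes. A minimal sketch of that two-layer split, with a hypothetical register address and a stub standing in for the real FPGA access layer:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical register address -- the real FPGA map was never public. */
    #define REG_FRAME_RATE 0x0010u

    /* Inward-facing side: in the real camera this poked the imaging-pipeline
       FPGAs; here a stub just logs the write. */
    static void fpga_write(uint32_t addr, uint32_t value) {
        printf("fpga[0x%04x] <= %u\n", (unsigned)addr, (unsigned)value);
    }

    /* Outward-facing side: one entry point serving both our GUI control app
       and a customer's automation software. Returns 0 on success, -1 if the
       request is out of range. */
    static int cam_set_frame_rate(uint32_t fps) {
        if (fps == 0 || fps > 100000u) return -1;
        fpga_write(REG_FRAME_RATE, fps);
        return 0;
    }

    int main(void) {
        cam_set_frame_rate(100000u);   /* accepted: full-speed capture */
        return 0;
    }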
The GUI team was working extremely hard to find ways to intelligently provide access to the huge number of camera features. The external camera interface evolved at a rapid pace to support them. Many camera settings had useful ranges that varied based on the values of other settings. The overall state machine was a true nightmare, far beyond what the GUI folks could support in a reasonable amount of time (and impossible, truth be told, since the camera FPGAs were still being tweaked).
We needed the camera to provide an "always valid" operational state, which meant returning errors when the user tried to change a parameter to a value that would cause an invalid configuration. But this made the configuration process extremely delicate and error-prone. We needed a way to interactively "evolve" the configuration through temporarily invalid intermediate states.
So I abstracted the external API to permit it to virtualize itself and provide a "What if?" configuration mode that the GUI could use to provide error-free interactive feedback to the user. This required minor FPGA changes to support a "try but don't die" configuration mode, which later turned out to have extremely valuable uses beyond the initial camera configuration.
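A minimal sketch of the what-if idea, with hypothetical parameter names and a two-parameter configuration standing in for the real (far larger) one:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical parameters and limits -- illustrative, not the real API. */
    typedef struct {
        long frame_rate;    /* frames per second */
        long exposure_us;   /* exposure time, microseconds */
    } CamConfig;

    /* Cross-parameter check: the useful exposure range depends on the frame
       rate, since the exposure cannot exceed the frame period. */
    static bool config_is_valid(const CamConfig *c) {
        if (c->frame_rate < 1 || c->frame_rate > 100000) return false;
        if (c->exposure_us < 1) return false;
        return c->exposure_us * c->frame_rate <= 1000000L;
    }

    static CamConfig live   = { 1000, 500 };  /* the "always valid" state */
    static CamConfig whatif;                  /* virtual copy the GUI evolves */

    static void whatif_begin(void) { whatif = live; }

    /* In what-if mode every change is accepted; we only report whether the
       virtual state is currently valid, so the GUI can give interactive
       feedback instead of hard errors at invalid intermediate states. */
    static bool whatif_set(long fps, long exp_us) {
        whatif.frame_rate  = fps;
        whatif.exposure_us = exp_us;
        return config_is_valid(&whatif);
    }

    /* Commit succeeds only from a valid virtual state, so the hardware
       configuration changes in one step and is never invalid. */
    static bool whatif_commit(void) {
        if (!config_is_valid(&whatif)) return false;
        live = whatif;
        return true;
    }

    int main(void) {
        whatif_begin();
        whatif_set(100000, 500);  /* invalid intermediate: exposure too long */
        whatif_set(100000, 8);    /* now valid at full speed */
        printf("committed: %s\n", whatif_commit() ? "yes" : "no");
        return 0;
    }

The key property is the single swap at commit time: the live state driving the hardware never passes through an invalid configuration, while the virtual copy is free to wander wherever the user drags it.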
I also wrote the software for the production test jigs, which was a total blast. We needed to keep things simple for the assemblers and testers while collecting a ton of data for QA and QC purposes, the initial goal being to rapidly evolve our design during the first production run to make it easier and cheaper to manufacture.
Unfortunately, the flood of pre-orders we expected (and were counting upon) failed to materialize, and the company had to sell itself to get the funds needed to push the HG-100K into production, after which the entire engineering department was laid off, since the new owner wanted only the existing products, not the development team.
I did make one big mistake a month after the layoff: I was offered a lucrative short-term consulting contract, which I turned down out of spite. Silly and stupid, for multiple reasons: it would have taken the pressure off my job search while letting me once again work on a product I loved, and neither of those things had anything to do with the new owners. Who weren't bad folks: They did offer jobs with relocation benefits to our entire production team and to most of the customer support / field engineering team.
I suppose I should take a look at the Chronos hardware and software...