A large test suite (e.g. 64 tests) can be dead trivial to build. On one contract I was writing software to convert data from one package into other packages' input formats. The first test case was simply to convert each of the 8 formats into each of the 8 formats. I then added tests for anything else the program did.
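The all-pairs idea above can be sketched in a few lines. This is a hypothetical illustration, not the original tool's API: the format names and `convert()` are placeholders standing in for the real converter.

```python
# Hypothetical sketch: exercise every (source, target) format pair.
# The real package had 8 formats, giving an 8x8 = 64-test matrix.
FORMATS = ["fmt_a", "fmt_b", "fmt_c"]  # placeholder format names

def convert(data, src, dst):
    # Stand-in for the real converter: re-tag the data with its new format.
    return {"format": dst, "payload": data["payload"]}

def test_all_pairs():
    failures = []
    for src in FORMATS:
        for dst in FORMATS:
            sample = {"format": src, "payload": "sample record"}
            out = convert(sample, src, dst)
            if out["format"] != dst or out["payload"] != sample["payload"]:
                failures.append((src, dst))
    return failures  # empty list means every pair converted cleanly

print(test_all_pairs())
```

The point is that the test matrix writes itself from the format list: adding a ninth format automatically adds 17 new pair tests.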
That was a 500 KLOC VAX FORTRAN port to Unix on Intergraph, Sun, HP, IBM, SGI & DEC systems. In the first year of use there were fewer than a dozen user-submitted bug reports. The programs were instrumented to report usage to a company server in Dallas, and there were well over 250,000 program runs during that first year. Usage went up as we added programs and features, and user bug reports went to zero. After a merger, the package continued in use for another 6+ years before it all became obsolete, without any support *or* problems.
The primary requirement is designing for test. Metrics such as cyclomatic complexity (CC) may or may not be useful depending on the situation. A large case switch is not complex, no matter what some metric might say.
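To make the CC point concrete, here is a hypothetical dispatch function (the opcode names are invented for illustration). By cyclomatic-complexity counting it scores one point per branch, yet each arm is trivial and testable in isolation:

```python
# A big branch-per-case dispatch. CC counts every branch, so the metric
# grows linearly with the number of cases, but there is no tangled control
# flow: one input, one arm, one output.
def dispatch(op, x):
    if op == "inc":
        return x + 1
    elif op == "dec":
        return x - 1
    elif op == "neg":
        return -x
    elif op == "dbl":
        return 2 * x
    elif op == "sqr":
        return x * x
    else:
        raise ValueError(f"unknown op: {op}")

print(dispatch("dbl", 21))  # -> 42
```

Testing this is one case per arm; the "complexity" the metric reports never shows up as testing effort.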
For an HPC code I once wrote a several-hundred-line computed GOTO because it was the simplest and fastest solution. This was on a DEC LX164 in 1998, which was 3-5 times faster on floating point than any other processor on the market.
Truth is, I didn't hand-code the thing. I generated it using a short awk script. Testing showed that if I unrolled the loop it was embedded in 2x, I could get 10-15% more throughput. A number of parameters changed depending upon how I configured the program, most of which was written in C using MPICH, with only the floating-point code in FORTRAN. The loop was too complex for the compiler to unroll, which was the reason for writing a program to generate the code.
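The generator idea above can be sketched briefly. The original was an awk script emitting FORTRAN; this is a hypothetical Python stand-in that emits and then runs a 2x-unrolled accumulation loop, just to show the shape of the technique:

```python
# Hypothetical sketch: a short script generates the unrolled loop source
# rather than hand-writing it, mirroring the 2x unroll described above.
def gen_unrolled(n_terms, unroll=2):
    lines = ["s = 0.0"]
    lines.append(f"for i in range(0, {n_terms}, {unroll}):")
    for k in range(unroll):
        # Each unrolled copy handles one offset within the stride.
        lines.append(f"    s += a[i + {k}] * b[i + {k}]")
    return "\n".join(lines)

src = gen_unrolled(8, unroll=2)
# Execute the generated code against sample data to show it is valid.
env = {"a": [1.0] * 8, "b": [2.0] * 8}
exec(src, env)
print(env["s"])  # 8 terms of 1.0 * 2.0 -> 16.0
```

The win is that the unroll factor and term count are parameters of the generator, so retuning for a new configuration means regenerating, not re-editing, hundreds of lines.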
BTW, this was a code whose cluster runs lasted for days. As a practical matter, common practice is to break such jobs into 7-10 day runs, in addition to checkpointing, to allow restarts after a system failure, which is quite probable when you have several thousand machines working on the processing job.
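The checkpoint/restart pattern can be sketched minimally. This is an assumed scheme, not the original code: persist the loop index and partial state each step, so a killed run resumes where it stopped instead of from step zero.

```python
# Minimal checkpoint/restart sketch (assumed scheme, for illustration only).
import json
import os

CKPT = "checkpoint.json"

def run(total_steps):
    start, state = 0, 0
    if os.path.exists(CKPT):
        # Resume: reload the step index and partial state.
        with open(CKPT) as f:
            saved = json.load(f)
        start, state = saved["step"], saved["state"]
    for step in range(start, total_steps):
        state += step  # stand-in for one unit of real work
        # Write the checkpoint after every completed step.
        with open(CKPT, "w") as f:
            json.dump({"step": step + 1, "state": state}, f)
    return state

result = run(10)
os.remove(CKPT)  # clean up so the next fresh run starts from zero
print(result)    # sum of 0..9 -> 45
```

In a real long-running job the checkpoint interval would be minutes or hours, not every step, to keep I/O overhead down.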
Reg