Have
you ever done an electronics project just to see if it is possible to do that specific thing
better than anything already existing on the market? What was it? For what definition and value of "better", other than cheaper/smaller BOM/easier to manufacture?
In the programming world, one of my interests is finding ways to speed up certain types of simulations (high-performance computing stuff). It is not a commercially viable approach, because it is much easier to spend $10x on hardware than $x on wages. However, there is a fraction of programmers that do this constantly, just the same. See
git as an example. It was not a case of NIH as some believe; it was born more out of frustration with the lack of proper tools that could handle the Linux kernel code base, distributed over dozens or hundreds of developers.
(I don't mean NIH is not a thing with us code monkeys, because it is. This is, however, unrelated to NIH: this is about doing stuff better than before, for some definition of better that is not related to commercial value.)
Recently, I asked a question here about a similar situation with USB wall warts and LED lights. Essentially, I was wondering what is currently technically possible and feasible, ignoring what makes sense commercially. I noticed that all the answers basically seem to hint that a thing makes sense if and only if it makes commercial sense, and that if a thing does not make commercial sense, there is no reason to think about or discuss it. (At least that is the way I read the answers; but note that my English often fails me.) I am wondering whether that is a prevalent way of thinking in the electronics world, or whether there are designers and hobbyists who like to push the envelope just to see how far it stretches, ignoring any considerations of commercial viability.
Don't get me wrong: it is not a bad thing to consider only commercially viable stuff. It is a very rational way of looking at things. I'm just asking, in this particular question, how (and which) electronics designers and hobbyists "push the envelope", so to speak.
Note that the "maker" world is a different thing altogether, because makers are trying to solve some problem, or achieve some specific purpose, and not just to see if the thing can be done better. (You could say that git is then not a very good example. True. It was just the closest thing I could come up with, other than my own experiments and computational schemes for molecular dynamics simulations. I could show the scheme I devised for MCMC simulations using the Metropolis-Hastings algorithm, allowing efficient parallelization and simultaneous communication and computation without compromising overall detailed balance; but it is extremely dull for anyone not particularly interested in that, and even the simplest version uses 21/64 sub-steps for each cycle, with a sliding 3D window into the data, and with detailed balance retained over full cycles only. It is too deep into software engineering to fit into computational papers, and too much about a specific data structure and use case to interest computer scientists. So there is no hope of even publishing it. Yet it beats the shit out of current widely-used simulators in efficiency and speed. But, because the code is about two orders of magnitude more complex, it is not really viable even in the simulation world. For example, I have still not proven, mathematically, that detailed balance is truly retained; I can only show it numerically, in the statistical sense. So, objectively speaking, it's just me waving my hands around vigorously.)
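For the curious, the basic ingredient behind this kind of parallelized Metropolis scheme can be illustrated with the textbook checkerboard decomposition of a 2D Ising model. To be clear, this is not my scheme, just a toy sketch of the general idea of sub-steps over disjoint sub-lattices; the lattice size, temperature, and sweep count are arbitrary choices for illustration:

```python
import numpy as np

def checkerboard_sweep(spins, beta, rng):
    """One full Metropolis sweep of a 2D Ising lattice.

    Sites are split into two interleaved "checkerboard" sub-lattices.
    Within one sub-lattice no two sites are nearest neighbours, so all
    of its sites can be updated at once (vectorized here; in parallel
    on a cluster) without any update seeing a half-updated neighbour.
    Detailed balance then holds over the full sweep (both sub-steps),
    not within an individual sub-step.
    """
    ii, jj = np.indices(spins.shape)
    for color in (0, 1):  # "black" sub-step, then "white" sub-step
        mask = (ii + jj) % 2 == color
        # Sum of the four nearest neighbours, periodic boundaries.
        nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
               np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        # Energy change if a spin flips: dE = 2 * s_i * sum(neighbours).
        dE = 2.0 * spins * nbr
        # Metropolis rule: accept if dE <= 0, else with prob exp(-beta*dE).
        accept = rng.random(spins.shape) < np.exp(-beta * np.clip(dE, 0.0, None))
        spins[mask & accept] *= -1
    return spins

rng = np.random.default_rng(42)
spins = rng.choice([-1, 1], size=(32, 32))
for _ in range(100):
    checkerboard_sweep(spins, beta=1.0, rng=rng)

# Below the critical temperature, neighbouring spins align strongly.
corr = (spins * np.roll(spins, 1, 0)).mean()
print(f"nearest-neighbour correlation after 100 sweeps: {corr:.2f}")
```

My own scheme differs in the decomposition (many more sub-steps, a sliding 3D window), but the principle is the same: pick sub-steps whose sites do not interact, update them concurrently, and argue detailed balance only over the complete cycle.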