Your script simply extends the capabilities of the GUI. It doesn't change the principle.
It's not "my script", it's how EagleCAD, Wildfire, Maya, and a lot of tools do their job.
And I have never written that it has to change the principle.
But I did, kinda sorta: it is a different approach. That's the key point here.
Let's compare OpenSCAD to, say, Fusion 360. Fusion 360 lets you parametrically define the features of objects, and create the objects themselves parametrically. OpenSCAD generates objects and collections of objects using a simple language based on solid geometry; the language isn't even procedural, although it has loops and such. For individual features or sub-objects, you can create modules.
What's the difference, then?
In one, if you find your commands or script didn't do what you wanted, you delete and rewrite it.
In the other, you simply modify part of the description of the modeled system.
The same difference exists between SVG and other vector formats used on the web, like web fonts. SVG is verbose and human-readable, but it is also modifiable in real time by JavaScript on the web page; not so for web fonts.
The core difference I am trying to convey here is that in EagleCAD and the like, the scriptlets produce objects as if they had been created by hand in the GUI. If you rerun a scriptlet, you get another object. The scriptlets are essentially macros for human GUI actions, played back at amazing speed.
In OpenSCAD and NorthGuy's PCB program, the script actively describes the objects; by modifying the script, you modify the object.
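To make the distinction concrete, here is a minimal sketch in Python (all names here are hypothetical, not any tool's actual API): a macro-style function that leaves behind a new object every time it is replayed, versus a descriptive model where re-evaluating the edited description yields the modified object instead of a duplicate.

```python
# Macro style: each call performs GUI-like actions on a shared canvas.
# Re-running the "scriptlet" duplicates the object, just like replaying
# recorded clicks would.
canvas = []

def place_pad_macro(x, y):
    canvas.append({"type": "pad", "x": x, "y": y})  # like a recorded click

place_pad_macro(10, 20)
place_pad_macro(10, 20)   # replayed: now there are TWO pads on the canvas

# Descriptive style: the script *is* the model. Editing the description
# and re-evaluating it replaces the object rather than adding another.
def model(pad_x=10, pad_y=20):
    return [{"type": "pad", "x": pad_x, "y": pad_y}]

board = model()            # evaluate the description
board = model(pad_x=15)    # edit the description -> the object moves,
                           # it is not duplicated
```

The point of the sketch is only the ownership of state: the macro mutates an external canvas, while the descriptive model is re-derived from its source every time.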
Why isn't the latter implemented in GUI-based tools?
Because it is rarely possible to translate the human actions in the GUI back into the underlying script that describes the objects.
It is possible for 2D (demonstrably; Inkscape does it that way, and even allows you to modify the source by hand during editing), and it is possible for solid geometry in general, but the hand-written and the GUI-generated representations are usually very different, for a number of reasons.
(As an example, even Inkscape has a "Save as Optimized SVG" mode, which more closely resembles how humans and efficient scripts generate SVG code, but the end result lacks many of the niceties of the default GUI-generated objects, like rotation centres, and often layers. I personally tend to fine-tune that output even further. And it isn't hard to crash Inkscape by careless modification of the underlying XML code, either.)
This means that a tool which exposes an underlying domain-specific descriptive language for user editing, but also allows simultaneous modification of that language by hand and via the GUI, is going to have to make compromises that make no sense to users who predominantly stick to one approach or the other.
That does not mean one is superior to the other: it depends on how the tool is used. The difference is in the root principle, or paradigm.
You well know the idiom "if all you have is a hammer, all problems look like nails." As I've already told you, if you ask dozens of "house frame-builders" (who, in this analogy, predominantly use a hammer and a saw to make the wooden frame), you'll for sure get the answer that screwdrivers and industrial adhesives are meh, because a hammer does the job much faster and easier. OpenSCAD in particular solves a different problem than Fusion 360: it lets one describe the needed object via solid geometry, and it'll show you what it looks like; whereas Fusion 360 lets you design any object you can imagine, with the constraints you set.
While designing a PCB is seemingly a single problem, NorthGuy has already mentioned they wouldn't use their own tool for analog designs; they created it to simplify working with a specific type of digital circuit (roughly: lots of pins and simple traces). If you think about it, that kind of circuit actually makes sense to describe using a simple language.
A good example is a circuit I've been playing with recently: a simple Arduino-programmable display module accelerator. An ARM microcontroller in QFP64/LQFP64 package with 22 or so pins connected to a display module, and the rest of the I/O pins exposed as a pin header, with a crystal/resonator, bypass capacitors, and
maybe a 5V-to-3.3V LDO or DC-DC converter on the board.
There are
16 orientations for the microcontroller with respect to the display module flex connector pins: 4 aligned and 4 diagonal on the same side of the board, and 4 aligned and 4 diagonal on the other side of the board. The flex connector pins are basically the only thing fixed on the board, everything else is up to me.
To be honest, I'm very tempted to write a simple Python program to investigate those 16 orientations, to find out how many trace crossings each one involves, and whether there is a simple two-layer solution – i.e., to qualify each of the 16 orientations before I really start looking into how to draw the traces. That is easier for me than trying out all 16 in an EDA tool, or just picking one and working it through even if it is sub-optimal.
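As a rough illustration of what such a qualification script could look like (all pin positions, pitches, and dimensions below are made up for the sketch, not the actual board): enumerate the 8 rotations times 2 board sides, draw straight airwires from each MCU pin to its connector pin, and count the pairwise crossings for each orientation.

```python
import math
from itertools import combinations

def mcu_pins(rot_deg, mirrored, n=22, pitch=1.0, half=8.0):
    # Crude stand-in layout: n pins along one package edge, then
    # rotated and (for the other board side) mirrored.
    pts = [(-half + i * pitch, -half) for i in range(n)]
    a = math.radians(rot_deg)
    out = []
    for x, y in pts:
        if mirrored:                      # board flipped to the other side
            x = -x
        out.append((x * math.cos(a) - y * math.sin(a),
                    x * math.sin(a) + y * math.cos(a)))
    return out

def connector_pins(n=22, pitch=0.5, y=15.0):
    # The flex connector pins: the only fixed thing on the board.
    return [((i - n / 2) * pitch, y) for i in range(n)]

def segments_cross(s1, s2):
    # Proper segment intersection via orientation signs.
    (p1, p2), (p3, p4) = s1, s2
    def d(a, b, c):
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
    return (d(p3, p4, p1) * d(p3, p4, p2) < 0) and \
           (d(p1, p2, p3) * d(p1, p2, p4) < 0)

def crossings(rot_deg, mirrored):
    # One straight airwire per pin pair; count crossings between wires.
    wires = list(zip(mcu_pins(rot_deg, mirrored), connector_pins()))
    return sum(segments_cross(a, b) for a, b in combinations(wires, 2))

# Qualify all 16 orientations: 4 aligned + 4 diagonal rotations, x 2 sides.
results = {(r, m): crossings(r, m)
           for r in range(0, 360, 45) for m in (False, True)}
best = min(results, key=results.get)
print(f"best: rot={best[0]} mirrored={best[1]}, {results[best]} crossings")
```

The crossing count is only a proxy, of course – it says nothing about trace length or via count – but it is exactly the kind of cheap pre-qualification that is easier in a few lines of script than in a GUI.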
When I know which one has the fewest problems, I'll be quite happy to use any GUI EDA tool – I like EasyEDA – to design the actual board.
(Also, I
know that if someone has read this far, they have ideas on how to accomplish this using their own favourite tools. That is perfectly okay, but it is important to realize that having a favourite tool does not make it optimal. And just like there are *lots* of, say, woodworkers using only hand tools, it is perfectly okay to use the tools you like; but it isn't okay to use that as a basis for claiming there is no need for any other kinds of tools, or that you can do everything those other tools can do with your own favourites, because tool preference is a personal thing and varies from individual to individual. It is, however, perfectly okay to tell how you yourself like to do these things using your preferred tools, because that can give others new ideas without asserting a comparison between tools; then we all stand to gain.)