It seems to me that the multiplicity of numeric types will ultimately lead to a trainwreck.
One thing that I find problematic with Python is that variables are not declared, they just happen. Given a variable name, you have no idea what is being represented: it could be a string, a float, an int, perhaps a vector or a matrix, and even if you thought you knew the shape, that could change during execution. How far back do you have to look to find where it was last modified?
Yes, Python is dynamically typed: types are determined and verified at runtime. That has benefits but also non-trivial costs, as you've found. Interpreted languages often use dynamic typing for ease of use.
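A minimal sketch of the problem being described (the variable name `x` is of course made up):

```python
# The same name can be rebound to values of entirely different
# types and shapes; nothing in the source declares what 'x' is.
x = "42"                     # a string
print(type(x).__name__)      # str
x = 42                       # now an int
print(type(x).__name__)      # int
x = [[1, 2], [3, 4]]         # now a nested list (a "matrix")
print(len(x), len(x[0]))     # even the shape is only known at runtime
```

Nothing stops any of these rebindings; the reader has to trace backwards through the code to know what `x` holds at a given point.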
Same thing with MATLAB for that matter.
APL too, powerful but has its downsides.
One thing I like about Modern Fortran is 'implicit none'. With it, every variable has to be declared, and there's no more of the leading character implicitly determining a variable's type.
I also like the 'intent' attribute:
http://www.personal.psu.edu/jhm/f90/statements/intent.html
Yes, that "intent" attribute is potentially very useful indeed; with sensible use it can reduce subtle errors.
I'm not sure what to think about functions returning multiple values and the ability to ignore pieces of the return values. Returning two ndarrays and only keeping one seems bizarre. But so does using indentation as a syntactic element.
Some of these ideas emanate from functional language theory and mathematics; the "tuple", for example, comes from that world, and in such languages it features very naturally. C# now has tuples too, and they are pretty useful: prior to tuples, if one wanted to return several things one had to define a struct or class just to wrap those elements. I find them useful in exactly that situation, the need to return multiple, often disparate, things in ways that were not initially anticipated.
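The return-several-things-and-ignore-some idiom looks like this in Python (a small sketch; `min_max` is a made-up function):

```python
def min_max(values):
    """Return two results at once as a tuple; no wrapper struct or
    class is needed, which is the convenience being described."""
    return min(values), max(values)

lo, hi = min_max([3, 1, 4, 1, 5])   # unpack both results
_, hi_only = min_max([3, 1, 4])     # '_' conventionally discards a value
```

The underscore is only a convention, not syntax: it is an ordinary variable name that readers agree to treat as "ignored".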
I do like the idea of slicing arrays and being able to concatenate rows or columns.
Yes, I agree, there are some things that can be done regarding "slices" that incur very little runtime cost. Some of the strengths of Fortran were carried over into PL/I by the language designers, that too has some nifty ways of dealing with arrays, stuff not seen explicitly in C or C++.
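A rough sketch of the slicing idea using plain Python lists (NumPy's ndarray extends the same bracket syntax to true multi-dimensional slices, but the flavour is the same):

```python
row = [10, 20, 30, 40, 50]
print(row[1:4])      # a slice: [20, 30, 40]
print(row[::-1])     # reversed: [50, 40, 30, 20, 10]

matrix = [[1, 2], [3, 4]]
stacked = matrix + [[5, 6]]       # concatenate a new row
col0 = [r[0] for r in matrix]     # extract a "column" from the rows
```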
Although I grumble about the lack of declarations, I do like Python's ndarray.
Is white space going to be significant? At least in Fortran IV, it wasn't. The statement:
DO10I=1,4
could be the beginning of a DO loop or, had the comma been a period, an assignment of 1.4 to a real variable named DO10I; the parser can't tell which until it reaches the comma. I never tested that myself, but it was a side effect of the fact that Fortran ignored white space. Apparently white space is still ignored in fixed-format Modern Fortran, but in free form it is significant:
https://community.intel.com/t5/Intel-Fortran-Compiler/spaces-not-ignored-in-free-format/td-p/1112317
Embedded spaces (or underscores) in long numeric strings can be useful:
SpeedOfLight = 186_000 miles per second
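As it happens, Python (3.6 and later) adopted exactly this: underscores may be embedded in numeric literals as visual separators, and the parser simply ignores them:

```python
speed_of_light = 186_000          # miles per second; underscore is ignored
print(speed_of_light == 186000)   # True: same value either way
print(f"{1_000_000_000:,}")       # prints 1,000,000,000
```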
I've not looked much at all at Python myself. It seems it lets us create arrays whose rank isn't known until runtime, and I can see the convenience of that for some domains and in an interpreted language. Of course arrays are (or can be) just contiguous blocks of memory with a function that can convert n subscripts into an offset into the array, so in that sense they are illusory, just a "way" of perceiving how data is organized.
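That subscripts-to-offset function is tiny; here's a row-major sketch of the "illusion" (the function name and shapes are made up for illustration):

```python
def offset(subscripts, shape):
    """Map n subscripts into a flat offset within one contiguous
    block, row-major: off = ((i0*d1 + i1)*d2 + i2) * ... and so on."""
    off = 0
    for idx, dim in zip(subscripts, shape):
        off = off * dim + idx
    return off

# element [1][2] of a 3x4 array stored in one contiguous block:
print(offset((1, 2), (3, 4)))   # -> 6
```

The same function works for any rank, which is precisely why an interpreted language can defer the rank to runtime.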
Spaces? Yes, I have heard of Fortran's tolerance of that. I guess some of it goes back to the nature of the industry at the time: keyboards were scarce, and most people then (including me, even in 1982) wrote code onto coding sheets, which then had to be transcribed by a team of "punch" operators, and so on.
So it's likely Fortran tried to be flexible because of these limited input methods. The grammar I have in mind borrows from PL/I in several respects. PL/I was designed after a thorough analysis of Fortran, Cobol and Algol, taking some of the best ideas in those and developing a uniform grammar that captured these varied concepts.
Spaces can't be totally ignored; it all depends on what it takes to recognize a language token. Some tokens are simple enough never to be ambiguous, but others, like identifiers, are not: these can be arbitrary and of any length.
Like Fortran, though, PL/I (and the grammar I've been exploring) is free of reserved words: an identifier can be anything, even a keyword, and the grammar rules make that straightforward to resolve.
Here's an interesting historical article that sheds some little-known light on how Fortran, Cobol and Algol served as the basis for PL/I.
Note:
Still, it was the first language that contained decent string-handling capabilities, pointers, three types of storage allocation -- static, automatic (stack-based) and controlled (heap-based) -- exception handling, and rudimentary multitasking. While the idea of a preprocessor (a front-end macro generator) was never cleanly implemented (as PL/1 did not store "macro source" line numbers, and without them matching preprocessor and "real" statements was difficult), it too was innovative and was later inherited and expanded by C.
All in all, PL/1 was, and probably still is, one of the most innovative programming languages in existence, and some of its features are still not matched by the "new kids" on the programming-language block.
Pointers, ->, while/until, break, the /* */ comments, exceptions, multitasking in the language, a powerful preprocessor, the semicolon as statement terminator, and more all originated in PL/I, and some were carried over into C.