On Linux this only matters for distro packagers. If a regular user downloads a source package instead of a precompiled one from their distro's repository, any dependencies are just a "pip install xxx" away, and at that point they should know how to build the software anyway.
Well, I understand you don't care much about the packager and end-user experience. I'm not going to subject my package maintainers to the pain of maintaining Python dependencies for my software. And end users having to pip install anything causes them unnecessary pain, not to mention the problem of picking the right major Python version. I simply will not go down that road.
I am not asking you to - re-writing your terminal emulator in Python just for the sake of it would be obviously silly.
I am not sure how you determined that I "don't care about the packager or end user experience". Python packages and software are routinely packaged for distros with no issues. If the package has a proper build using distutils, it is no problem whatsoever. E.g. my Mageia 6 has over 1000 Python packages listed in the repository, including major software like GnuRadio, the entire Numpy/Scipy stack, etc. Packaging Python applications is no different from packaging C/C++ - you have to handle dependencies there as well.
And end users are not supposed to build from source (even if it is not difficult at all) - I think I have been rather explicit about that.
Yes, my focus is mostly GNU/Linux systems, and when I said cross-platform I meant, for example, Linux vs BSD vs Hurd vs odd GNU distributions, etc. However, adding Windows to the mix does not make it easier to distribute applications that depend on Python. On Windows you don't have package managers the way we have on Linux, so users end up sorting out dependencies themselves: installing the correct major version of Python to make things work, and after that installing whatever Python modules are missing. Normal Windows users can run into all sorts of trouble with these things, and some do. You can dismiss it if you like, but that is reality for you.
I would never choose such a solution for a professional application, but that is my opinion.
Whether I like it or not is beside the point (btw, I have been a Linux user since 1994). If you deal with software for business, you will have to deal with Windows too; that's just a fact of life. I have had to deal with various Unix systems over time, but have yet to see anyone using a Hurd. So it is cool that you are thinking about portability to it (seriously, your Unix/Linux TTY APIs are going to work on Hurd? Ehm ...)
Just this week I actually had to prepare some software for a Mac, even. Some clients even use that.
Re package management - pip & anaconda work just fine on Windows. Anyhow, you are missing the point again - for Windows you don't distribute your application as source code at all; you build a binary so the user doesn't need to build anything themselves. Yes, that is possible with Python. Then you have a normal, self-contained exe file.
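To sketch what I mean (PyInstaller is just one of several bundlers that can do this, and `app.py` is a hypothetical entry-point name):

```shell
# Build a single self-contained executable from a Python script.
# 'app.py' is a placeholder for your application's entry point.
pip install pyinstaller
pyinstaller --onefile app.py
# The result lands in dist/ (app.exe on Windows, app on Linux),
# with the interpreter and all dependencies bundled in.
```

The end user just runs the resulting file; no Python installation, no pip, no module hunting.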
I am not dismissing anything; my point is that you are making a portability argument while using an ancient build system that makes cross-platform portability (porting from one Linux to another Linux is not really cross-platform in my book) pretty much impossible. If you were using something like CMake it would be more believable.
Please notice that I said that _pure_ Python will always be much slower than C/C++. Yes, Python can make use of all sorts of precompiled support libraries/modules and become faster that way, but that, in my opinion, goes against the point of using a scripting language for writing an application.
What? That's a bit like saying that using the standard C library goes against the point of using the C language for writing an application ... I didn't know that writing everything from scratch is the only acceptable form.
One specific Numpy numerical-calculation benchmark is not very convincing, and the use case for Numpy is largely limited to scientific computing.
Right. So go tell that to folks like Google or Facebook, who are building most of the deep-learning stuff on top of this. Or to financial analysts building investment plans with tools like Numpy & Pandas. Or to people doing any sort of data mining (practically everyone today - Numpy is free and much faster than Matlab, which used to be the tool of choice).
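To make the speed point concrete, here is a tiny self-contained illustration (my own sketch, not any published benchmark) of why the vectorized call wins: the loop happens once in compiled code instead of once per element in the interpreter:

```python
import numpy as np

x = np.arange(100_000, dtype=np.int64)

# Pure-Python loop: one bytecode dispatch and one boxed object per element.
total_py = 0
for v in x:
    total_py += int(v)

# NumPy: a single call into a compiled C kernel over the whole buffer.
total_np = int(x.sum())

# Same answer either way; time the two yourself to see the gap.
print(total_py == total_np)  # True
```

The arithmetic is identical; the difference is purely where the per-element loop runs.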
There are numerous benchmarks available online showing that C/C++ is generally several times faster in most scenarios. Even with clever JIT technologies like Numba, Python is, in general, nowhere near as fast as C/C++. Sure, in some very specific benchmarks it can get close, but not in general use. In the end Python/Cython/Numba/Numpy etc. can't beat the fact that in C/C++ you have full control over cache lines and memory layout, and with these mechanisms you can achieve the best possible performance.
I'm the kind of developer who cares about performance in any context, and I'm not going to compromise my performance by writing my applications in Python and then depending on various Python modules to minimize the performance gap. That is my opinion, and in a professional context I feel even more strongly about this.
The problem is that what you are saying is only relevant if you are writing a close-to-the-metal application (btw, I do wonder how you get "full control of cache lines" - at best you can ask for contiguous memory allocation and a certain alignment). None of this is at all relevant for an application that spends 99.9% of its time waiting for a character to appear on a file descriptor - such as a terminal emulator. Or pretty much any application with a UI.
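For the terminal-emulator case, the "waiting on a file descriptor" part is just a blocking select()-style call; a minimal standard-library sketch (a pipe stands in for the pty master here):

```python
import os
import selectors

sel = selectors.DefaultSelector()
r, w = os.pipe()          # a pipe stands in for the pty master fd
sel.register(r, selectors.EVENT_READ)

os.write(w, b"hello")     # pretend the child process printed something

# This is where a terminal emulator spends 99.9% of its time: asleep
# in the kernel, burning zero CPU, until a byte shows up.
events = sel.select(timeout=1.0)
for key, _mask in events:
    data = os.read(key.fd, 1024)
    print(data)  # b'hello'

os.close(r)
os.close(w)
```

The language's raw loop speed never enters the picture; the process is blocked in the kernel the whole time.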
What matters a lot, though, is developer productivity, because it is directly related to how costly (or not) the project is for the company - the whole time-is-money thing. Once your job becomes writing complex algorithms involving a lot of math, networking, or anything else not covered by the standard C/C++ libraries, you will start to appreciate a good set of libraries and an expressive programming language (be it Python, Julia, Haskell, C# or whatever else).
Why do you think Java and C# became so popular? They have large runtimes, Java is terribly verbose, and both are horrible for anything system-level. However, both have enormous libraries of code available that make handling common tasks a breeze. Try doing any sensible (i.e. complex) networking in C/C++ without something like Boost, 0MQ, or ACE, then do the same in C# using only the standard libraries, and you will see what I am talking about.
I have nothing against C or C++ and I still write tons of code in them, but treating them as some holy grail that nothing ever comes close to is totally counterproductive. I could spend a week or two writing an embedded web server in C++ for an application. Or I could write 5 lines of Python, be done in 30 minutes, and move on to solving the actual problems the client pays me for.
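The "5 lines of Python" is barely an exaggeration - with only the standard library, an embedded web server serving the current directory is essentially this (here bound to an ephemeral port and exercised from a background thread so the snippet doesn't block):

```python
import threading
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Port 0 = let the OS pick a free port; the handler serves files from the cwd.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    status = resp.status

print(status)  # 200
server.shutdown()
```

In a real application you would just call `serve_forever()` on a fixed port; the thread and the self-request are only there to make the example complete.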
If I have learned anything over my career, it is that the important thing was not becoming an expert in a single programming language. A much more important skill is using the right tool for the job and keeping your eyes open to new things, instead of being stuck on incorrect assumptions because you saw something in the past and didn't like it.
Yet many Python applications you find still crash with the default Python stack trace - it's not pretty.
Yes, and a lot of C/C++ applications crash with a segmentation fault or a bus error. That is even less pretty, because nobody has any clue why. But perhaps it is more acceptable because users are used to applications just closing on them? That's really a silly point to make, IMO.