Author Topic: C# and other OOP languages  (Read 66341 times)


Offline SirNick

  • Frequent Contributor
  • **
  • Posts: 589
Re: C# and other OOP languages
« Reply #150 on: October 23, 2014, 11:09:18 pm »
I find it about as absurd to compare Python, Java, and C as it would be to compare a can opener, scissors, and chainsaw.

You are missing the point.

It is valid to compare those objects (languages), and particularly to compare aspects of those objects (languages), to understand the domain in which each object (language) is most appropriate.

OK, we're getting a little pedantic here.  But very well, I'll rephrase:

"I find it absurd to consider Python, Java, and C as competitors for the same space."

In the past C really was a good choice for a very wide range of projects, but recently that range has been significantly reduced by more modern tools.  Unfortunately there is still a lot of crap legacy stuff written in C,  still some dinosaurs that think C is the best tool for most problems, and other dinosaurs that don't understand the liberating power of newer tools.

Now it's my turn to be pedantic.  C still is a good choice for a very wide range of projects, but recently there are many alternatives available that may prove more convenient.  I take exception to the opinion that using C is the recourse of fossilized gray-beards who don't know better, though.  Scripting and VM-based languages have been around for a while now, and they have their place, but C gives you a very rich set of tools.  It's qualified for a great many purposes.

Depending on your comfort level, your attention to detail, the time and effort you can afford to dedicate to a project, the scope of that project, and the cost of errors, there may be no compelling reason not to use C where another language is equally capable (and possibly better suited.)  This is particularly the case in Unix-land, where there are libraries and utilities for everything.  Those tools often do ONE thing, and that one thing they do well.  So, the burdens presented by C as a language do not weigh heavily on the developer -- and the availability of a C compiler for just about everything with a CPU is a relevant factor as well.  Likewise with the ability of many languages to interface with compiled C code.

Often they are wrong about high optimisation: see the HP Dynamo project for why emulated C can be faster than -O2 and -O4 C running on the same processor. The techniques necessary to achieve that are applied automatically and invisibly to every program running in a standard HotSpot JVM. Key point: the C compiler has to be unnecessarily pessimistic because it can't prove there isn't any aliasing.

It's a neat trick, but you're talking about implementing a VM, with associated overhead, to relieve the processor from having to re-fetch data since it can't guarantee the copy in the register is still current.  I'm sure there are times where this is a benefit, but wouldn't it just as likely be a huge resource suck as well?  For that matter, hardware caching will likely make unoptimized reads a little less costly.  I'll grant you this -- in a performance-critical loop, it might be wise to avoid dereferencing pointers at all, and thereby afford the compiler better control over the instructions used.  (This does assume you have access to the source, and enough wherewithal to use the right language features.)

Still I don't see this as much a cause for condemnation of a language.  Some operations might take slightly longer than absolutely necessary.  Meanwhile, the fix is to virtualize the execution environment, analyze the code and look for patterns, and replace those patterns with more performant patterns?  Hm...  I'm not convinced this is the right approach in all but the corner-iest of cases.  (Straight-up cross-platform emulation being a glowing example.)
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20271
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: C# and other OOP languages
« Reply #151 on: October 23, 2014, 11:40:31 pm »
I find it about as absurd to compare Python, Java, and C as it would be to compare a can opener, scissors, and chainsaw.

You are missing the point.

It is valid to compare those objects (languages), and particularly to compare aspects of those objects (languages), to understand the domain in which each object (language) is most appropriate.

OK, we're getting a little pedantic here.  But very well, I'll rephrase:

"I find it absurd to consider Python, Java, and C as competitors for the same space."

And with that I completely agree.

Quote
In the past C really was a good choice for a very wide range of projects, but recently that range has been significantly reduced by more modern tools.  Unfortunately there is still a lot of crap legacy stuff written in C,  still some dinosaurs that think C is the best tool for most problems, and other dinosaurs that don't understand the liberating power of newer tools.

Now it's my turn to be pedantic.  C still is a good choice for a very wide range of projects, but recently there are many alternatives available that may prove more convenient.  I take exception to the opinion that using C is the recourse of fossilized gray-beards who don't know better, though. 

C is still an appropriate tool for many problems.

"Dinosaurs" don't recognise that, for some problems, there are now better tools than there were in the past. The naive and easily-beguiled-by-salesmen believe that all new tools are better than old tools - especially the scripting language du jour.

Quote
Scripting and VM-based languages have been around for a while now, and they have their place, but C gives you a very rich set of tools.  It's qualified for a great many purposes.

Agreed, and I will attack the use of scripting languages more often than I defend them - I'm not a fan. By and large I prefer a good language+tools with libraries appropriate to the current problem.

Quote
Depending on your comfort level, your attention to detail, the time and effort you can afford to dedicate to a project, the scope of that project, and the cost of errors, there may be no compelling reason not to use C where another language is equally capable (and possibly better suited.)  This is particularly the case in Unix-land, where there are libraries and utilities for everything.  Those tools often do ONE thing, and that one thing they do well.  So, the burdens presented by C as a language do not weigh heavily on the developer -- and the availability of a C compiler for just about everything with a CPU is a relevant factor as well.  Likewise with the ability of many languages to interface with compiled C code.

Agreed.

Quote
Often they are wrong about high optimisation: see the HP Dynamo project for why emulated C can be faster than -O2 and -O4 C running on the same processor. The techniques necessary to achieve that are applied automatically and invisibly to every program running in a standard HotSpot JVM. Key point: the C compiler has to be unnecessarily pessimistic because it can't prove there isn't any aliasing.

It's a neat trick, but you're talking about implementing a VM, with associated overhead, to relieve the processor from having to re-fetch data since it can't guarantee the copy in the register is still current.  I'm sure there are times where this is a benefit, but wouldn't it just as likely be a huge resource suck as well? 

You don't understand how HotSpot (and Dynamo) work.

The essential point is that they observe the runtime code+data patterns to see which subset of code pathways are actually executed "90%" of the time - and then HotSpot optimises the shit out of what actually happens (while leaving a fallback escape route for the other 10%).

In contrast, any C/C++ compiler has to make static compile-time guesses as to what might happen at runtime - and include pessimising code just in case aliasing happens to be occurring. And God help the user when, not if, the programmer uses too aggressive compiler optimisation flags.
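To make the aliasing point concrete, here's a minimal C sketch (my own illustration, not from anyone's post): C99's `restrict` qualifier is the escape hatch that lets the programmer hand the compiler the no-aliasing guarantee it cannot prove for itself.

```c
/* Without restrict, a store through dst could legally modify *src,
 * so the compiler must assume aliasing and re-load src[i] around
 * every store. With restrict, the programmer promises the pointers
 * don't overlap, and hoisting/vectorisation become legal. */
void scale(float *restrict dst, const float *restrict src, float k, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * k;
}
```

Whether this actually helps depends on the compiler and target, of course - it only removes the "pessimising code just in case" that static compilation otherwise has to emit.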

Quote
For that matter, hardware caching will likely make unoptimized reads a little less costly.  I'll grant you this -- in a performance-critical loop, it might be wise to avoid dereferencing pointers at all, and thereby afford the compiler better control over the instructions used.  (This does assume you have access to the source, and enough wherewithal to use the right language features.)

The L1/L2/L3 caches are notoriously badly used by any pointer+dereference based language such as C/C++ (in contrast to, say, Fortran). At least with Java and C# the VM has a chance to ensure frequently/infrequently accessed code and data is in/out of the cache. Far from perfect, but impossible in C/C++.

Quote
Still I don't see this as much a cause for condemnation of a language.  Some operations might take slightly longer than absolutely necessary. 

Sounds like a good argument in favour of Java/C# :)

Quote
Meanwhile, the fix is to virtualize the execution environment, analyze the code and look for patterns, and replace those patterns with more performant patterns?  Hm...  I'm not convinced this is the right approach in all but the corner-iest of cases.  (Straight-up cross-platform emulation being a glowing example.)

It certainly isn't the right approach in all cases. But it is, demonstrably, both good and good enough in a very wide range of cases. Embedded systems are probably the best counter-example, and also I suspect tablet-based systems.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline miguelvp

  • Super Contributor
  • ***
  • Posts: 5550
  • Country: us
Re: C# and other OOP languages
« Reply #152 on: October 24, 2014, 12:30:11 am »
Maybe you should tell NVidia to use Java instead, so they can beat Intel next year:

http://en.wikipedia.org/wiki/Xeon_Phi#Design
 

Offline vvanders

  • Regular Contributor
  • *
  • Posts: 124
Re: C# and other OOP languages
« Reply #153 on: October 24, 2014, 03:13:24 am »
...
Quote
For that matter, hardware caching will likely make unoptimized reads a little less costly.  I'll grant you this -- in a performance-critical loop, it might be wise to avoid dereferencing pointers at all, and thereby afford the compiler better control over the instructions used.  (This does assume you have access to the source, and enough wherewithal to use the right language features.)

The L1/L2/L3 caches are notoriously badly used by any pointer+dereference based language such as C/C++ (in contrast to, say, Fortran). At least with Java and C# the VM has a chance to ensure frequently/infrequently accessed code and data is in/out of the cache. Far from perfect, but impossible in C/C++.

...
You must not have a great grasp of C/C++ if you think it's impossible. C/C++ gives you *complete* control over your memory layout, which lets you deal with cache issues in a way that Java can't touch and C# can only get close to (with struct values). See http://research.scee.net/files/presentations/gcapaustralia09/Pitfalls_of_Object_Oriented_Programming_GCAP_09.pdf or http://channel9.msdn.com/Events/Build/2014/2-661 @ 12:15 for value types or 24:00 for where things really get interesting.

Ignore data layout and cache usage at your own peril :).
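A hedged sketch of the kind of layout control being described here (the struct names are my own invention): the same data arranged array-of-structs versus struct-of-arrays, where the latter lets a loop over one field use every byte of every cache line it fetches.

```c
/* Array-of-structs: one particle's fields sit together, so a loop
 * that reads only x also drags y, z and mass through the cache. */
struct particle_aos { float x, y, z, mass; };

/* Struct-of-arrays: each field is contiguous, so a loop over x
 * touches nothing but x values - every fetched line is fully used. */
struct particles_soa {
    float x[1024], y[1024], z[1024], mass[1024];
};

float sum_x(const struct particles_soa *p, int n)
{
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += p->x[i];  /* sequential, prefetcher-friendly access */
    return s;
}
```

In a managed language the runtime decides object layout; in C/C++ this choice is entirely the programmer's, for better or worse.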
 

Offline miguelvp

  • Super Contributor
  • ***
  • Posts: 5550
  • Country: us
Re: C# and other OOP languages
« Reply #154 on: October 24, 2014, 03:22:56 am »
To be fair, Java does better than C/C++ on multicore processors because it scales better without a lot of extra work on the C/C++ front. Most programs can probably use one of the cores, maybe a couple.

However C++11 does better than all of the above, and Erlang does even better than any of them, so when we have 1000 cores old C/C++ will be impossible to use if you want to exploit the full capabilities of the chip.

But if you have specialists and a lot of time for a speed competition, it's hard to beat dedicated handwritten code in the most appropriate language for the task.

Edit: but then again at the lower level C++/C or assembly is where it's at, even in highly concurrent functional languages that scale to factors unobtainable by Java.
« Last Edit: October 24, 2014, 03:25:41 am by miguelvp »
 

Offline vvanders

  • Regular Contributor
  • *
  • Posts: 124
Re: C# and other OOP languages
« Reply #155 on: October 24, 2014, 03:26:57 am »
I'm actually a huge fan of Rust, it's got tons of bits in common with Erlang. Compiles down to x86 and has some really awesome statically enforced (via RAII) resource ownership semantics.

Someone recently ported his raytracer to Rust from C++ and it was a fairly interesting read:
http://ruudvanasseldonk.com/2014/08/10/writing-a-path-tracer-in-rust-part-1

The language still has a ways to go before it's considered stable but it looks pretty interesting.
 

Offline SirNick

  • Frequent Contributor
  • **
  • Posts: 589
Re: C# and other OOP languages
« Reply #156 on: October 24, 2014, 03:29:31 am »
And with that I completely agree.
...
Agreed, and I will attack the use of scripting languages more often than I defend them - I'm not a fan.
...
Agreed.

What?  Hey, cut it out.  This is the Internet, man!  ;)

C is still an appropriate tool for many problems.

"Dinosaurs" don't recognise that, for some problems, there are now better tools than there were in the past. The naive and easily-beguiled-by-salesmen believe that all new tools are better than old tools - especially the scripting language du jour.

Yeah... well....  I...  agree with that, too.

You don't understand how HotSpot (and Dynamo) work.

That's likely true.  Are you saying these tools optimize the code at compile-time, by stepping through the instructions and observing how the program would normally run?  I was (am?) under the impression they stood between the executing hardware and the bytecode to watch trends and shuffle instructions as necessary.  Like a debugger.  That seems anything but fast, so I hope I'm wrong.

Also, for what it's worth ... Ulrich Drepper has a great paper on cache-aware programming.  This discussion made me go back and revisit a couple sections from it.  It's a dense read, and at the end of the day, it may only matter in time-critical or extremely long loops.  The rest of the time you can get away with letting the compiler optimize what it can, and accept your losses for the rest.  Anyway, the paper touches on compiler flags and attributes (some of which have been the subject of debate here WRT side-effects), and C99 features that resolve some of the issues -- although there's a footnote that says compiler support isn't quite there yet.  It's a few years old now, but I assume in the scheme of things (again... C99...) not much has changed.
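One concrete example of the sort of thing Drepper's paper covers (this sketch and the struct names are mine, not from the paper): member order alone changes how much cache a struct wastes on padding.

```c
#include <stddef.h>

/* Mixing small and 8-byte-aligned members forces padding
 * (sizes assume a typical LP64 ABI such as x86-64 Linux). */
struct wasteful {
    char   flag;   /* 1 byte, then 7 bytes of padding */
    double value;  /* needs 8-byte alignment */
    char   tag;    /* 1 byte, then 7 bytes of tail padding */
};                 /* commonly sizeof == 24 */

/* Ordering members largest-first removes most of the padding. */
struct compact {
    double value;
    char   flag;
    char   tag;
};                 /* commonly sizeof == 16 */
```

Fewer wasted bytes per object means more objects per cache line, which is exactly the kind of "free" win the paper is after.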

Hand-optimization is an alternative, which may or may not be wise -- given portability, resource constraints, the impact of improvements in compiler optimization down the road, etc etc etc...  It's a trade-off.  C's flexibility will let you do many things (for better or worse).  C# and Java have their baggage too, after all.  You're never going to stall a C program waiting for garbage collection, for example.
 

Offline vvanders

  • Regular Contributor
  • *
  • Posts: 124
Re: C# and other OOP languages
« Reply #157 on: October 24, 2014, 04:48:54 am »
There's really not a need for non-portable hand optimizations in the types of systems where cache misses hit you hard(dram based, 400+ cycle losses). In those cases you're only optimizing the 10% that's in the domain of the compiler. The rest of it, the way you organize and access data, that's where the real perf gains are to be made. I highly recommend watching the Herb Sutter talk I linked previously, he shows quite a few examples where even simple arrays over hashmap/etc end up multiple orders of magnitude faster due to locality of data and prefetching.
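The arrays-over-hashmap point generalises to any pointer-chasing structure; a quick C sketch of the two access patterns (illustrative, not taken from the talk):

```c
#include <stdlib.h>

/* Linked list: each step is a dependent load from an unpredictable
 * address, so a cache miss can cost full DRAM latency at every node. */
struct node { int value; struct node *next; };

int sum_list(const struct node *n)
{
    int s = 0;
    for (; n != NULL; n = n->next)
        s += n->value;
    return s;
}

/* Flat array: addresses are sequential and independent, so the
 * hardware prefetcher hides most of the latency. Same arithmetic,
 * very different memory behaviour. */
int sum_array(const int *a, int n)
{
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}
```

Both functions compute the same thing; the difference is purely in how the data reaches the core.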
 

Offline miguelvp

  • Super Contributor
  • ***
  • Posts: 5550
  • Country: us
Re: C# and other OOP languages
« Reply #158 on: October 24, 2014, 05:02:10 am »
Sure there is a need to hand-optimize if you are competing for the fastest supercomputer in the world.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20271
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: C# and other OOP languages
« Reply #159 on: October 24, 2014, 08:25:03 am »
Maybe you should tell NVidia to use Java instead, so they can beat Intel next year:
http://en.wikipedia.org/wiki/Xeon_Phi#Design
Now you're being silly. As I have very explicitly said, one tool doesn't fit all - use the right tool for the particular job at hand.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20271
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: C# and other OOP languages
« Reply #160 on: October 24, 2014, 08:36:30 am »
...
Quote
For that matter, hardware caching will likely make unoptimized reads a little less costly.  I'll grant you this -- in a performance-critical loop, it might be wise to avoid dereferencing pointers at all, and thereby afford the compiler better control over the instructions used.  (This does assume you have access to the source, and enough wherewithal to use the right language features.)

The L1/L2/L3 caches are notoriously badly used by any pointer+dereference based language such as C/C++ (in contrast to, say, Fortran). At least with Java and C# the VM has a chance to ensure frequently/infrequently accessed code and data is in/out of the cache. Far from perfect, but impossible in C/C++.

...
You must not have a great grasp of C/C++ if you think it's impossible. C/C++ gives you *complete* control over your memory layout, which lets you deal with cache issues in a way that Java can't touch and C# can only get close to (with struct values). See http://research.scee.net/files/presentations/gcapaustralia09/Pitfalls_of_Object_Oriented_Programming_GCAP_09.pdf or http://channel9.msdn.com/Events/Build/2014/2-661 @ 12:15 for value types or 24:00 for where things really get interesting.

Partly true for some static access patterns that a programmer can correctly guess before compilation. False for many common and pessimal access patterns, where there simply is no locality of reference. Such access patterns necessarily "bust the cache" (except in toy applications where the entire dataset is small enough to fit in the cache :) ).

Being free to move commonly accessed data structures around memory based on their actual measured run-time behaviour can be surprisingly beneficial. But certainly not a panacea.

Quote
Ignore data layout and cache usage at your own peril :).

Very true. Nowadays in terms of latency, cache = main memory, main memory = disk.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20271
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: C# and other OOP languages
« Reply #161 on: October 24, 2014, 09:00:07 am »
And with that I completely agree.
...
Agreed, and I will attack the use of scripting languages more often than I defend them - I'm not a fan.
...
Agreed.

What?  Hey, cut it out.  This is the Internet, man!  ;)

Mirror :) Sorry I'll try not to make that mistake again.

Quote

C is still an appropriate tool for many problems.

"Dinosaurs" don't recognise that, for some problems, there are now better tools than there were in the past. The naive and easily-beguiled-by-salesmen believe that all new tools are better than old tools - especially the scripting language du jour.

Yeah... well....  I...  agree with that, too.

You don't understand how HotSpot (and Dynamo) work.

That's likely true.  Are you saying these tools optimize the code at compile-time, by stepping through the instructions and observing how the program would normally run?  I was (am?) under the impression they stood between the executing hardware and the bytecode to watch trends and shuffle instructions as necessary.  Like a debugger.  That seems anything but fast, so I hope I'm wrong.

HotSpot is a runtime only tool, independent of the compiler. It operates in various stages, first noting which methods+operands are common and optimising them, then for the very common methods+operands it optimises inside methods and across multiple methods.

It is fun to watch the execution time of some Java applications.
Initially they can be slow as the code is lazily loaded into the VM (necessarily lazy since in general it can be loaded from remote machines).
Next they run at "normal" speed.
After a few thousand iterations, they suddenly speed up noticeably.

That behaviour should be understood by anyone doing or reading Java benchmarks - there are too many very misleading benchmarks around. You have to "warm up" the JVM each time the program is run! That isn't noticeable for long-running applications but can be an issue for soft-realtime (e.g. telecoms) applications. Microbenchmarks should be ignored completely.

Quote
Also, for what it's worth ... Ulrich Drepper has a great paper on cache-aware programming.  This discussion made me go back and revisit a couple sections from it.  It's a dense read, and at the end of the day, it may only matter in time-critical or extremely long loops.  The rest of the time you can get away with letting the compiler optimize what it can, and accept your losses for the rest.  Anyway, the paper touches on compiler flags and attributes (some of which have been the subject of debate here WRT side-effects), and C99 features that resolve some of the issues -- although there's a footnote that says compiler support isn't quite there yet.  It's a few years old now, but I assume in the scheme of things (again... C99...) not much has changed.

IIRC it took 6 years before the first full C99 compiler became available. I certainly remember triumphant announcements on usenet to that effect, but I had long lost interest in C++ by then. I have no idea how C++11 code will work with libraries compiled with 15..25 year old compilers and standards.

The compiler flags issue frightens me. Even if I managed to get it right for my code now, I doubt I could get it right for other people's/companies' code, and I doubt people using my code in 5 years will get it right - because they don't have time to understand how my code works internally.

As for accepting your losses on cache-busting code - yes, but let's have a go at getting the machine to nibble away at the problem.

Quote
Hand-optimization is an alternative, which may or may not be wise -- given portability, resource constraints, the impact of improvements in compiler optimization down the road, etc etc etc...  It's a trade-off.  C's flexibility will let you do many things (for better or worse).  C# and Java have their baggage too, after all.  You're never going to stall a C program waiting for garbage collection, for example.
Hand optimisation doesn't really cut it in a commercial organisation where the hardware changes significantly every year, but the code doesn't - in many cases the source code is no longer obtainable!
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20271
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: C# and other OOP languages
« Reply #162 on: October 24, 2014, 09:11:06 am »
There's really not a need for non-portable hand optimizations in the types of systems where cache misses hit you hard(dram based, 400+ cycle losses). In those cases you're only optimizing the 10% that's in the domain of the compiler. The rest of it, the way you organize and access data, that's where the real perf gains are to be made.

You clearly don't understand how large organisations (plural) develop and run large applications that must continue to run in the years and decades to come.

Quote
I highly recommend watching the Herb Sutter talk I linked previously, he shows quite a few examples where even simple arrays over hashmap/etc end up multiple orders of magnitude faster due to locality of data and prefetching.

Yes, I'm sure that's true for those microbenchmarks. If you want to go down that route, note that Fortran is inherently an array based language, and in these respects it is noticeably better than pointer-based languages such as C/C++/Java/C#. More snidely, it sounds as if Sutter has triumphantly re-invented Fortran!

Let me know when you have removed most pointer de-referencing from most C/C++/Java/C# programs, and I'll start to think Sutter has a point that is relevant to general C/C++/Java/C# programs.
 

Offline miguelvp

  • Super Contributor
  • ***
  • Posts: 5550
  • Country: us
Re: C# and other OOP languages
« Reply #163 on: October 24, 2014, 09:16:35 am »
Hand optimisation doesn't really cut it in a commercial organisation where the hardware changes significantly every year, but the code doesn't - in many cases the source code is no longer obtainable!

Unless you work on a locked down platform like say game consoles and keep improving the quality of the games every year on the same hardware.
 

Offline VK3DRB

  • Super Contributor
  • ***
  • Posts: 2261
  • Country: au
Re: C# and other OOP languages
« Reply #164 on: October 24, 2014, 09:41:33 am »
Very true. Nowadays in terms of latency, cache = main memory, main memory = disk.

SSDs are not disks.

Nowadays, the term software has expanded to encompass what we used to call firmware.
« Last Edit: October 24, 2014, 11:57:22 am by VK3DRB »
 

Offline HackedFridgeMagnet

  • Super Contributor
  • ***
  • Posts: 2034
  • Country: au
Re: C# and other OOP languages
« Reply #165 on: October 24, 2014, 11:19:32 am »
Very true. Nowadays in terms of latency, cache = main memory, main memory = disk.
To simplify, in terms of latency, cache = disk??! Not sure if that is what you mean.

 

Offline _Sin

  • Regular Contributor
  • *
  • Posts: 247
  • Country: gb
Re: C# and other OOP languages
« Reply #166 on: October 24, 2014, 11:35:19 am »
In terms of actual time taken, getting stuff from disk into registers (via memory) is faster than ever, so virtual memory and paging is practical (and in some circumstances, largely transparent).

But in terms of cycles taken, memory is actually slower than ever, which is why there are ever more layers of cache, and for real-time stuff, handling data properly to minimise fetching and maximise work-done-while-waiting is more important than ever.
Programmer with a soldering iron - fear me.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20271
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: C# and other OOP languages
« Reply #167 on: October 24, 2014, 01:04:49 pm »
Hand optimisation doesn't really cut it in a commercial organisation where the hardware changes significantly every year, but the code doesn't - in many cases the source code is no longer obtainable!

Unless you work on a locked down platform like say game consoles and keep improving the quality of the games every year on the same hardware.

A good valid counter-example to my point, even if it isn't an example that applies to many programmers/companies.
« Last Edit: October 24, 2014, 01:12:22 pm by tggzzz »
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20271
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: C# and other OOP languages
« Reply #168 on: October 24, 2014, 01:05:51 pm »
Very true. Nowadays in terms of latency, cache = main memory, main memory = disk.
To simplify, in terms of latency, cache = disk??! Not sure if that is what you mean.

Oh, in this case my "="  isn't transitive :)
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20271
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: C# and other OOP languages
« Reply #169 on: October 24, 2014, 01:09:36 pm »
In terms of actual time taken, getting stuff from disk into registers (via memory) is faster than ever, so virtual memory and paging is practical (and in some circumstances, largely transparent).

But in terms of cycles taken, memory is actually slower than ever, which is why there are ever more layers of cache, and for real-time stuff, handling data properly to minimise fetching and maximise work-done-while-waiting is more important than ever.

We agree, but I carefully distinguish between latency and bandwidth. Main memory bandwidth has increased significantly. Main memory latency has hardly decreased.

This puts more stress on caches, benefits Fortran-style array access patterns, hinders C/C++/Java/C# pointer-dereference style access patterns.
« Last Edit: October 24, 2014, 07:21:53 pm by tggzzz »
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20271
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: C# and other OOP languages
« Reply #170 on: October 24, 2014, 01:11:46 pm »
Very true. Nowadays in terms of latency, cache = main memory, main memory = disk.

SSD's are not disks.

Nowadays, the term software has expanded to encompass what we used to call firmware.

SSDs are indeed a halfway house, but that doesn't invalidate the (somewhat crude) analogy I was making.

I don't see the relevance of the software/firmware dichotomy to the analogy.
 

Offline miguelvp

  • Super Contributor
  • ***
  • Posts: 5550
  • Country: us
Re: C# and other OOP languages
« Reply #171 on: October 24, 2014, 02:35:30 pm »
Hand optimisation doesn't really cut it in a commercial organisation where the hardware changes significantly every year, but the code doesn't - in many cases the source code is no longer obtainable!

Unless you work on a locked down platform like say game consoles and keep improving the quality of the games every year on the same hardware.

A good valid counter-example to my point, even if it isn't an example that applies to many programmers/companies.

Ok, how about Rigol DS series firmware/software developers? Or any other product for that matter that receives updates on locked-down hardware. I'm pretty sure most programs run on non-changing hardware.
 

Online Howardlong

  • Super Contributor
  • ***
  • Posts: 5378
  • Country: gb
Re: C# and other OOP languages
« Reply #172 on: October 24, 2014, 10:14:45 pm »
If I may add my own experiences of VM-based languages, based on over two decades of developing and troubleshooting enterprise-level financial trading systems, the symptoms remain the same: nondeterministic response times, and a huge amount of configuration for every implementation just to make these systems work, not to mention the significant specialised resources needed to maintain them.

In the "bad" old days of client-server, things were simpler: it was usually fairly clear whether a problem lay with the desktop, the network, or the server side, and configuration was minimal. Since the onslaught of VM-based languages, and their even more distributed systems, configuring and maintaining them has become very time-consuming, like treading on eggshells. When it all goes completely doolally, restarting a complex system can take several tens of minutes while all the code re-JITs. JIT compilation is all very well, and I am sure hotspot-type analysis is fine and dandy, but with my troubleshooting hat on I'd far rather have a system that ran in a consistent way than one that thinks it knows better, causing a lot of head scratching.

An example of this automated optimisation has already been happening on the database side (my primary area of expertise) for over fifteen years now. When it makes the wrong decision, perhaps after randomly updating some optimizer statistics from a non-representative subset of data, it can cause major performance problems, and take an awfully long time to diagnose. But it has kept me gainfully employed as a result.

My point is that the cleverer you make your environment, in general the more complex and less deterministic it becomes, and to resolve those problems takes a more specialist set of individuals, some of whom were probably commoditised and likely don't exist anymore.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20271
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: C# and other OOP languages
« Reply #173 on: October 24, 2014, 10:54:42 pm »
If I may add my own experiences of VM-based languages, based on over two decades of developing and troubleshooting enterprise-level financial trading systems, the symptoms remain the same: nondeterministic response times, and a huge amount of configuration for every implementation just to make these systems work, not to mention the significant specialised resources needed to maintain them.

In the "bad" old days of client-server, things were simpler: it was usually fairly clear whether a problem lay with the desktop, the network, or the server side, and configuration was minimal. Since the onslaught of VM-based languages, and their even more distributed systems, configuring and maintaining them has become very time-consuming, like treading on eggshells. When it all goes completely doolally, restarting a complex system can take several tens of minutes while all the code re-JITs. JIT compilation is all very well, and I am sure hotspot-type analysis is fine and dandy, but with my troubleshooting hat on I'd far rather have a system that ran in a consistent way than one that thinks it knows better, causing a lot of head scratching.

An example of this automated optimisation has already been happening on the database side (my primary area of expertise) for over fifteen years now. When it makes the wrong decision, perhaps after randomly updating some optimizer statistics from a non-representative subset of data, it can cause major performance problems, and take an awfully long time to diagnose. But it has kept me gainfully employed as a result.

My point is that the cleverer you make your environment, in general the more complex and less deterministic it becomes, and to resolve those problems takes a more specialist set of individuals, some of whom were probably commoditised and likely don't exist anymore.

That is all thoughtful and valid, and I can't disagree with any of those points. My background in this area is in telecom systems rather than financial systems, but there are many similarities.

I would, however, modulate some of those statements with some other observations which are relevant to such large systems whether or not they contain VM-based components.

Firstly, such systems have become more complex w.r.t. the number of components, the interconnection/interdependence between components, and geographical area. (That's also true if you replace "component" with "company".) All of those considerations lead towards more fragile systems.

Secondly, the processors themselves have become more nondeterministic. Almost all the increase in processing power has relied on increasingly large L1/L2/L3 caches to mask the growing mismatch between processor speed and main memory speed. Such caches are inherently non-deterministic - and IMNSHO that precludes such processors being used for hard realtime operation. (They are fine for soft realtime systems provided the protocols have been designed to be delay-insensitive, which is the case for telecom systems.)
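A toy model shows why such caches defeat timing analysis. This is a hypothetical direct-mapped cache (64-byte lines, 64 sets - parameters invented for illustration, not any real CPU's geometry): the same number of accesses can cost anywhere from one miss to a miss per access, depending purely on the address pattern.

```c
#include <stddef.h>

#define LINE_BYTES 64
#define NUM_SETS   64

static long tags[NUM_SETS];

/* Simulate one byte access; return 1 on a miss, 0 on a hit. */
static int cache_access(long addr) {
    long line = addr / LINE_BYTES;
    int  set  = (int)(line % NUM_SETS);
    if (tags[set] == line)
        return 0;          /* hit: line already resident in its set */
    tags[set] = line;      /* miss: evict whatever was there */
    return 1;
}

/* Count misses for n accesses from address 0 with a fixed stride,
   starting from a cold cache. */
long count_misses(long n, long stride) {
    long misses = 0;
    for (int i = 0; i < NUM_SETS; i++)
        tags[i] = -1;      /* cold cache */
    for (long i = 0; i < n; i++)
        misses += cache_access(i * stride);
    return misses;
}
```

With this model, 64 sequential byte accesses cost one miss, while 64 accesses strided by 4096 bytes (line size x set count) all land in the same set and miss every time. Real caches add associativity, prefetchers, and sharing between cores, which makes the worst case even harder to bound - hence the hard-realtime objection.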

Thirdly, I have a strong suspicion that many of the components in large systems will exhibit degraded performance during startup, for the simple reason that they contain various explicit and implicit caches. In such cases, soft-start operational techniques are almost mandatory.

Now, although long ago I made a career decision to stay away from databases (other than specialised distributed in-memory key-value stores!), I'm pretty sure that databases are subject to these phenomena as well.

In conclusion, while needing to "warm up" VMs certainly doesn't help, in my limited experience it is no worse than the other phenomena I noted above. And the techniques already required to mitigate (not solve) one also mitigate the others.

The truth is rarely pure and never simple.
 

Offline SirNick

  • Frequent Contributor
  • **
  • Posts: 589
Re: C# and other OOP languages
« Reply #174 on: October 25, 2014, 01:03:16 am »
There's really no need for non-portable hand optimizations in the types of systems where cache misses hit you hard (DRAM-based, 400+ cycle losses). In those cases you're only optimizing the 10% that's in the domain of the compiler. The rest of it - the way you organize and access data - is where the real perf gains are to be made.

That's actually what I was referring to WRT "hand-optimizing".  Data sets and algorithms, not instructions.  Although, if the shoe fits...
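For instance (a hypothetical particle record, purely for illustration), the classic array-of-structs versus struct-of-arrays choice is exactly this kind of data-organization decision: both loops do the same arithmetic, but the SoA layout uses every byte of every cache line it fetches when only one field is needed.

```c
#include <stddef.h>

/* Array-of-structs: the four fields of each particle are adjacent,
   so a mass-only pass drags x, y, z through the cache too. */
struct particle { double x, y, z, mass; };

double total_mass_aos(const struct particle *p, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += p[i].mass;    /* 8 useful bytes out of every 32 fetched */
    return s;
}

/* Struct-of-arrays: all masses are contiguous, so a mass-only pass
   touches only the data it actually needs. */
struct particles { double *x, *y, *z, *mass; };

double total_mass_soa(const struct particles *p, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += p->mass[i];
    return s;
}
```

No compiler flag rescues the AoS version; the layout decision happens before the compiler ever sees the code.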

The compiler flags issue frightens me. Even if I managed to get it right for my code now, I doubt I could get it right for other people's/companies' code, and I doubt people using my code in 5 years will get it right - because they don't have time to understand how my code works internally.

Here's the way I see it:

Software always has bugs.  Optimization problems can be sneaky, for sure, but hopefully a good test suite will catch a lot of this before the code ever sees the light of day.  If not, then maybe they will be found in the wild, investigated, and resolved in whatever way is most appropriate.  Not ideal, perhaps, but realistically that's just part of the software release cycle.

If the end-user has access to the source, it's wise to set sensible build defaults and mention known issues somewhere in the docs (Makefile, header comments, readme). In some cases, the project or author's site will say "before filing a bug report, make sure you've compiled the software with these flags..."  In general, there seems to be a consensus among developers and users about what is considered "safe", and going beyond that is The Road Less Traveled.

If the source isn't available, then those things should be controlled internal to the build process anyhow.

I'm not solving this problem for anyone here, obviously -- it's all well understood. I only mean to describe my own perspective on the nature and scale of the problem.

One might say, "But why exacerbate the potential for bugs by continuing to use languages where this is a problem?"  That's a fair point, One.  However, this has to be weighed as one of the often-many potential drawbacks of any given tool.  Java has dependency issues like a mother, for example.  Root help you if you have the wrong JRE.  That will often lead to the same kind of "WTF -- it works fine on my other computer!" problems as optimization flags do.  .Net has its share of quirks as well.  This is not a dilemma unique to C.  :)

Maybe Fortran runtime environments are perfect though.  I dunno, never used it.

Very true. Nowadays in terms of latency, cache = main memory, main memory = disk.

SSDs are not disks.

I think there's some confusion here.  I took our consonant friend's remark to say, "cache is the new RAM, and RAM is the new disk", referring to performance expectations rather than the medium itself.

I'm pretty sure most programs run in non changing hardware.

If by "most", you mean "some ambiguous number I've pulled from my exhaust pipe"  ;)  Sorry, I couldn't resist.  No offense intended, I'm just skeptical of anyone's guess as to the nature of "most programs".  That seems as nebulous as saying "most webpages are HTML 4 Transitional" or something.  Good luck proving a sample is representative.
 

