… is that the question? … not really.
From time to time I dare to ask experts in the native and managed worlds technical questions, so as to better understand the performance differences between code originally written in a native language like C++ and “native images” of code written in a managed language like, as of today, C#.
Given the buzz around the resurgence of C++ with revision 11 of the standard, in one of my latest Q&A adventures I dared to ask Alexandre Mutel about the potential penalties, if any, of calling a wrapped native operation from C# once the assembly gets compiled ahead of time with NGen (or its Mono equivalent, AOT compilation). Like, say, the following:
[DllImport("somenative.dll")] // hypothetical library name; the attribute is required for extern P/Invoke methods
public static extern int SomeOperation(int h, string c, ref SomeStruct rStruct, uint type);
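For context, here is a fuller sketch of what such a wrapper involves. Everything below is an illustrative assumption of mine, not SharpDX code: the library name (somenative.dll), the struct fields, and the parameter meanings are placeholders. The point is that the P/Invoke-specific cost is the managed-to-native transition plus the marshalling of non-blittable arguments (here, the string):

```csharp
using System;
using System.Runtime.InteropServices;

// Illustrative struct; Sequential layout keeps the fields in declaration
// order so the native side sees the memory layout it expects.
[StructLayout(LayoutKind.Sequential)]
public struct SomeStruct
{
    public int Id;
    public float Value;
}

public static class NativeMethods
{
    // "somenative.dll" and SomeOperation are placeholders. The CLR binds
    // to the native export on the first call; NGen/AOT does not remove
    // the managed-to-native transition, it only removes JIT compilation.
    [DllImport("somenative.dll", CharSet = CharSet.Ansi)]
    public static extern int SomeOperation(int h, string c,
                                           ref SomeStruct rStruct, uint type);
}
```

Blittable arguments (the int, the uint, and the struct passed by ref) cross the boundary without copying; the string is marshalled on each call, which is where most per-call overhead, if any, would come from.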
[For those of you who still don’t know him, Alexandre Mutel is the creator of, inter alia, SharpDX: “a free and active open-source project that is delivering a full-featured Managed DirectX API”, which currently powers the DirectX side of projects like MonoGame and ANX, among others. It is, imvho, the perfect choice for those of us who don’t want to go back to C++ and who once embraced the old Managed DirectX solution that MSFT later called off in order to give birth to XNA a few months afterwards.]
I won’t dare claim that Alexandre posted this impressive article because of my email question (or my prior request for DirectMath support in SharpDX, for the sake of SIMD), but I must admit that it dispels any doubt I might have had in that regard and leads me to concur that .NET must die.
In his article, Alexandre mentions an interesting detail, or fact if you will, when speaking of managed languages:
… the performance level is indeed below a well written C++ application …
… and also that:
… the meaning of the “native” word has slightly shifted to be strongly and implicitly coupled with the word “performance”.
He also references two articles about the real benefits of better Jittering:
- “When will better JITs save managed code?”, by Herb Sutter, and
- “Can JITs be faster?”, by Miguel de Icaza.
And a finding on the Channel9 forums indicating that MSFT is hiring to create a single compiler to be used for both C++ and C#.
So, after reading all of the above-mentioned material, if you have reached a point in your programming life where you do put performance over safety, is the real question still whether you should go native?
Imvho, the question has now turned into “how”.
The fact that a native solution gives you the performance level you are looking for does not mean that C++ is your only option. Even with the additions found in C++11 (a few of which could arguably have stemmed from managed languages), it still has a cumbersome and unfriendly syntax.
What is more, it does not mean that you won’t be able to use a language like C# to get an optimized native application for whichever platform you need (even the Web).
If, in order to get native bits, we always had to stick to “low-level” languages, then we would never have moved from Assembler, or even raw binary notation, towards C and all of its offspring. The evolution of hardware and compilers eventually made C++ a better choice than ASM for performance-oriented apps, given that, over time, the penalty curve kept decreasing until it became irrelevant for native programmers.
Therefore, what if you could get rid of jitting (replacing it with a fully performance-oriented LLVM-based compiler) and still have an efficient GC for cases where manual memory (de)allocation is not needed?
Much as I hate Objective-C for its ugly syntax, its newest versions for the Mac (and lately, iOS) platforms offer LLVM-compiled native binaries with garbage collection (on the Mac) and, more recently, automatic reference counting (ARC).
And what about a friendlier language like “D” instead? The latest evidence leads me to believe that C-based languages are moving in its direction.
My point is that going native does not necessarily mean that all the memory management of your program must avoid a garbage collector for the sake of efficiency. Nor does it mean that you have to use languages with cumbersome or unfriendly syntax to get the most efficiency. It depends mainly on how compilers and memory-management technology evolve side by side to get the most out of the target platform, how unsafe you can go with a given language where and when needed, and how penalty-free calls to native operations in external binaries can be.
For instance, despite its limitations, you can do some unsafe programming in C# (fixed, stackalloc, etc.). The problem is that this feature is not allowed on all platforms (like WinPhone7), and on some platforms the set of operations is limited (e.g., stackalloc is not available in the Compact Framework for the Xbox 360).
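As a minimal sketch of what that unsafe subset looks like (the method and values are mine, purely for illustration; this must be compiled with the /unsafe switch):

```csharp
using System;

public static class UnsafeDemo
{
    // Sums an array through a pinned pointer: "fixed" pins the array so
    // the GC cannot move it while we walk it with raw pointer arithmetic.
    public static unsafe int SumPinned(int[] values)
    {
        int total = 0;
        fixed (int* p = values)
        {
            for (int i = 0; i < values.Length; i++)
                total += p[i];
        }
        return total;
    }

    public static unsafe void Main()
    {
        // "stackalloc" carves a small scratch buffer out of the stack,
        // with no GC allocation and no collection pressure.
        int* scratch = stackalloc int[4];
        for (int i = 0; i < 4; i++)
            scratch[i] = i + 1;

        Console.WriteLine(SumPinned(new[] { 1, 2, 3 })); // prints 6
        Console.WriteLine(scratch[3]);                   // prints 4
    }
}
```

Both constructs trade safety for control: the pinned array blocks the GC from compacting that region, and the stack buffer vanishes when the method returns, so neither pointer may be allowed to escape.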
And again, the D language seems to provide a friendly syntax (close to C#’s) while offering power similar to C++’s.
Personally, I feel quite comfortable with C#; let’s be real here for a moment: I won’t be creating a Halo-like game any time soon, but neither do I want to go back to C++, say, to consume the DirectX11 APIs. Having said that, I really hope C# evolves in a way that makes the arguments of “native” programmers trivial, and that the industry embraces it (as it once embraced C/C++ to minimize the use of ASM). Evidence suggests C# will evolve in this field, but as usual, time will tell …
To wrap it up: does going native imply that .NET should die so that a syntax-friendly language like C# can survive? …
Short answer: yes (or at least, .NET as we know it today). Long answer: read all of the links provided in this post and see for yourself ;)
My two cents,