I have prepared an account of the history of .Net and Java that’s intended to balance more fanciful post-mortem accounts. It reads thus: Sun created Java to cash in on the success of Visual Basic and to convince development managers that C++ coders are all slobbering toddlers playing with nail guns. Sun did grant C++ dispensation for “performance-sensitive applications”, a category that covered most of Sun’s software catalogue. Microsoft created .Net to keep Java from gaining traction and to put that cross-platform nonsense to rest once and for all. One OS, one run-time, many languages was the best way to go. C#, the Microsoft alternative to Java with the honesty to use “C” in its name, still kept the pencils and paper clips away from the inmates except, of course, for those developers working on performance-sensitive applications, a category that covered most of Microsoft’s software catalogue.
Java and .Net turned all existing native software into ticking time bombs, infinitely exploitable by shadowy figures, impossible to hand from the fired to the hired and rife with blue screens, kernel panics and divisions by zero. Developers scurried off for re-training, new languages, new tools, new books, new friends and new employers.
To the dismay of Sun, and the temporary frustration of Microsoft, C++ survived efforts to render it extinct. Microsoft’s frustration gave way to its self-preservation instinct when its own developers demanded that C++ be restored as a first-class language for in-house commercial projects. Bless the lot of them.
Where do things stand now? Fearless C, C++, and Objective-C developers have tools of their dreams that let them dig deeper than ever before into system, OS, and CPU internals. Optimisation, at which .Net and Java can only play, is hot as compilers, including the free GNU Compiler Collection, evolve from heuristic to automated empirical optimisation. Profile-guided development tools watch your application run and then re-tune it based on observed behaviour.
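To make the empirical-optimisation point concrete, here is a minimal profile-guided build recipe using GCC's real `-fprofile-generate` and `-fprofile-use` flags. The file name `app.c` and the workload file are illustrative placeholders, not from any particular project:

```shell
# Step 1: build with profiling instrumentation (GCC 3.4 or later).
gcc -O2 -fprofile-generate -o app app.c

# Step 2: exercise the instrumented binary on a representative workload;
# this writes .gcda profile data files alongside the objects.
./app typical-input.dat

# Step 3: rebuild, letting GCC re-tune inlining, branch layout and
# loop decisions from the recorded profile.
gcc -O2 -fprofile-use -o app app.c
```

The compiler is only as good as the training run: feed it workloads that resemble production, or the "observed behaviour" it optimises for will be the wrong behaviour.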
As for the reputed dangers of using unmanaged and unsafe code, responsibility for safety has been returned to its rightful place: the CPU, the OS and application frameworks. Users deserve protection from errant code, regardless of its origin.
Here’s a native code prediction that’s way under most people’s radar: we’ll see more use of assembly language. When developers dare to handcraft architecture-dependent code, the performance of an application, or a tweaked open-source OS, can take off. Mac users know how far a simple change can take you; a lot of applications you wouldn’t think of as maths-intensive go stratospheric when they’re enhanced for PowerPC’s AltiVec vector maths accelerator. Developers coding for new, controlled deployments can afford to set high requirements that include a 64-bit CPU, OS and drivers. And if you know you’re coding for Opteron and you’re ready to write to that architecture, baby, life is a highway.
I’m not preaching a wholesale move away from entrenched Java and .Net, nor do I attribute their success to skullduggery or ignorance. But it’s time for developers and IT buyers of software and development services to drop the presumption that Java’s and .Net’s training wheels are essential equipment. Java is no longer the only path to writing once and running everywhere, and .Net is no longer the only path to stable and secure Windows applications.