Cameron Purdy, vice president of development for Oracle’s application server group, gave a presentation showing cases where Java supplants C++ and, probably to even things out, also some cases where C++ supplants Java. The problem is not that I disagree with the former. It’s that they are explained the wrong way, and the author doesn’t even seem to know what “Java” really means — despite being a VP of a strongly Java-focused development group at one of the top software companies in the world.
1. Garbage collection
A lot has already been said about garbage collection, and it always comes either from Java fans (“every modern language must have it, otherwise it’s not modern”) or from C++ fans (“it leads to bad practices and causes unacceptable overhead”). To begin with, it has to be said that from a technical point of view GC has several disadvantages: it uses more memory for the same task, and it can add overhead by running additional work in the background, sometimes locking the whole program’s access to memory. But it also has advantages: it is more convenient for the user, who doesn’t have to worry about object ownership (not about “releasing the memory” — if you hear that GC solves the problem of “releasing the memory”, you are talking to an idiot); it is also faster than manual, heap-based memory allocation; and it can lead to less memory fragmentation and stronger object locality. That last point is an important advantage over shared_ptr in C++, which suffers from poor performance and does nothing for memory defragmentation.
But the things this guy says are such a pile of bullshit that it’s hard to believe how he became a VP:
Garbage collection (GC) is a form of automatic memory management. The garbage collector attempts to reclaim garbage, or memory, occupied by objects that are no longer in use by the program.
No. A programmer writing in a language that imposes GC as the only memory management doesn’t even “think” in terms of memory management. They just create objects that have some contents, and that’s all. For them there is simply no such thing as “memory”. GC was once aptly described as “a simulation of unlimited memory”. The important thing about GC is that it provides this virtually unlimited memory, not that it reclaims memory.
A significant portion of C++ code is dedicated to memory management.
False. Of course, if your program is stupidly written, it may be true, but it is not true that this is required, nor that GC meaningfully solves the problem. For example, a program that reads text from an input stream, processes it, and displays the result can be written in C++ using no explicit dynamic memory allocation at all. That is not possible in C or in Java. So, in this particular case, GC solves a “problem” that does not exist in C++.
Cross-component memory management does not exist in C++.
Really? How come? My function can return auto_ptr&lt;X&gt; (or, better, with C++11, unique_ptr&lt;X&gt;), and I can either assign the result to some other variable or ignore it — the object is taken care of either way. Memory allocated in one component can be deallocated in another. Unless you somehow “play with allocators”, of course — but if you do, you should be prepared to solve many more problems than this one. Normally you use a unified, universal memory allocator, and it works across components too. I have absolutely no clue what this guy is talking about.
Maybe he is talking about Windows and Visual C++, where the debug configuration is tied to traced memory allocation (which is indeed a stupid idea). When the components of your application are compiled in “mixed” configurations, memory allocated in one module indeed cannot be deallocated in another. But that is Visual C++’s problem, not C++’s.
Libraries and components are harder to build and have less natural APIs.
WTF? Libraries and components may be hard to build in C++, sure, but that has absolutely nothing to do with memory management. There are problems with modules, distribution, and platform specifics, and maybe some module uses its own specific memory management (in other words, C++ gives you more opportunities to write stupid code — but that still has nothing to do with memory management). And what is a “less natural API”? Is it natural that integers are passed by value and cannot modify the value passed, while arrays are passed by pointer and can be modified in place without restriction? Only if by “natural” you mean “Java”.
Garbage collection leads to faster time to market and lower bug count.
Unless you made a stupid design and now have to fight design bugs.
The garbage collector can simply be “a good tool for the things that need it”; from the implementation point of view it has advantages and disadvantages, and that’s fine. Put this way, of course, C++ can also use GC (provided by a library). There are several things, however, that have to be pointed out:
1. From the programmer’s point of view, objects in a GC-only environment have two clearly defined traits:
- A GC-ed object must be trivially destructible. You could even say that objects created in Java are never deleted. Some may argue that objects can have finalizers. But destruction means “reliable and timely release of a resource”, and finalization is nothing of the kind. Finalization is closer to wishful thinking about what other resource cleanup might be done if and when the memory is reclaimed. Since the memory isn’t guaranteed to actually be reclaimed, the finalizer isn’t guaranteed to be called either. That’s why, if you think about destruction the C++ way, a GC-allocated object must belong to a class that is effectively trivially destructible. It means, for example, that if your object refers to a file and you no longer need the file, you must disconnect the external file resource from the local object explicitly. And indeed that is how things work in Java — Java doesn’t close the file in a finalizer, does it?
- GC means object sharing by default. A pointer to an object can be passed to some other function and stored elsewhere, becoming a second reference to the same object that (unless it’s a weak reference) shares ownership with the first. Some idiots say that with GC you don’t have to worry about an object’s ownership. Of course you do. At the very least you need to think about whether your object really has to be owned by a particular reference variable, or whether that variable should only view the object rather than (co-)own it, and simply become null once the object has been reclaimed. In C++ this shared ownership is implemented with shared_ptr (which comes with weak_ptr for exactly this purpose), and despite its poor performance it’s good enough if you restrict its use to situations that explicitly require it. The important part is: object sharing. As you know, object sharing is the main purpose of “those really, really bad” global variables — something shared by everyone. So with GC-ed objects, every object is potentially co-owned. In practice, co-ownership is very rarely required, but you can only become convinced of that once you have experience with a language that also features other-than-GC memory management.
2. There are also two interesting consequences of the difference between GC and shared_ptr:
- The shared pointer concept still means timely and reliable resource release — for any kind of resource (not only memory) and any kind of object. The object is deleted when a pointer variable goes out of scope and it was the last owner. So even if we can’t say for certain where the object will be deleted, we can at least enumerate the places where it may happen. This way shared_ptr can also manage objects that aren’t trivially destructible; with GC you can’t even rely on the object ever being deleted, nor on which thread it would happen in.
- Object deletion is synchronous: when the object is deleted because its last owner went out of scope, every weak pointer to it is cleared immediately. That doesn’t change much when the weak_ptr user runs in a separate thread, but it matters within a single thread. The weak_ptr becomes null only because it would otherwise be a dangling pointer — and regardless, you should never just test this pointer for null; you shouldn’t use it at all unless you are certain the pointed-to object is still owned and alive. At least with shared_ptr this isn’t dangerous, because you can always be sure that if the pointer isn’t null, the object is still alive. In a GC environment, between the moment the last owner goes out of scope and the moment all weak references are cleared by object deletion, there is a window during which the object shouldn’t be referred to. This is how GC simply turns the problem of the dangling pointer into the problem of the “dangling object”.
3. GC is hailed as being resistant to object cycles, unlike refcount-based solutions (including shared_ptr). But research this topic well enough and you quickly come to the conclusion that cyclic ownership is something that should never occur in a well-designed system! In “the real world”, cyclic ownership occurs in basically one case: a company may own another company which in turn owns the first one — only as partial (and therefore also shared) ownership, of course. I don’t actually know how the law handles this; I rather suspect that governments all over the world don’t know how to handle it either, so they simply allow it without restriction (the real trouble starts when the ownership chain is much longer and only after walking some branches of the hierarchy do you discover you have come back to a company already visited). But companies are a special case — there is no analogue in, say, management hierarchies within a company. GC actually handles this situation with the rule “if A owns B and B owns A and no other object in the system owns either of them, then both are deleted”. In other words, cyclic ownership is treated as if there were no ownership at all (it’s like having two managers A and B in an organization, where A manages B, B manages A, and neither is managed by anyone else — can you imagine such a thing in reality, unless it’s some state-held company in a banana republic?). So, if you didn’t mean ownership, why did you use it? Shouldn’t one of the references have been weak? You don’t know which object is more important and which should own which? Then you most likely have no clue what your system is supposed to do — and you shouldn’t be participating in the development, or you should do your homework first.
For me, if cyclic ownership can arise, there should be some Lint-like tool that detects the situation and reports it as an error. And if GC were really about helping developers make good software, it would crash your program whenever it detected an ownership cycle.
So, summing up: these developer conveniences, hailed as better “time to market”, only lead to worse designs and greater tolerance for logic errors, instead of helping to clear them up. C++ code checked with a good Lint-like tool — and believe me, such a tool can reliably find all potential memory leaks, and even when something is only a potential leak, it’s still better to write it more explicitly — is much better in terms of time to market than Java code, where design errors are patched on the fly by the language runtime.
One more interesting thing about GC probably stems from the inability to understand the difference between the “Java programming language” and the “Java Virtual Machine”. It may be a little shocking, so hold on to your chair.
Well, Java DOES NOT FEATURE GARBAGE COLLECTION. Surprised? Yes, really. There is no such thing — at least not in the Java programming language. GC is thought of as if it were in the language, but it really isn’t. If you want to see GC specified by the language itself, look at functional languages, including ones as far apart as Lisp, OCaml and Haskell. Those languages feature garbage collection. The Java programming language does not.
Yes, I’m not kidding. Fine: the Java programming language does not provide any ability to delete an object, at least not as a built-in language feature. But neither does it guarantee garbage collection as part of the language. There are Java implementations (or extensions) that use deterministic or even explicit object deletion; this can even be done with an external library rather than a language extension. What the Java language requires is that a program be well formed even though objects are created and never explicitly disposed of. And Java is not tied to the JVM — gcj provides a native Java compiler using Boehm’s GC.
That’s because garbage collection is a feature of the JVM. Any language implemented for the JVM (including C++, if anyone ever does it) will take advantage of garbage collection, even if its natural way of managing objects is manual. In C++, for example, you would still be allowed to use delete on objects; it would just call the destructor and do nothing else. And yes, C++ is expected to work with GC — there is just no “standard GC” for it.
So the “GC-related” difference between Java and C++ is that the Java language provides no way to delete objects and expects the language runtime to take care of it, while in C++ you can use different memory management policies for your objects, with explicit deletion merely being the default.
2. The build process
Guys, come on… How much does it matter for a large project how many build machines you need and how powerful they must be? It might matter for a home-cooked project (maybe), but not in today’s environment. When merely launching the javac command sometimes takes more time than compiling the files, what’s the real difference from compiling C++? On today’s machines it mostly just occupies more cores. And it’s really funny to hear this from fans of a language that is already accused of performance problems, which are excused by saying “machines and processors are getting better and faster, so it shouldn’t matter much”.
That’s still not the funniest thing — this is:
Java has more approachable build tools like Ant and Maven, while C++’s Make and NMake are considered less so.
What? Ant is supposed to be a more advanced tool than Make? Come on…
First, feature-wise, Ant is at best as advanced as make. And what exactly is “more approachable” about Ant — the XML-based syntax, which most people find horrible? It’s not Ant itself that makes things better (let’s grant that they are). It’s the Java compiler.
Part of a Makefile’s role is to define the dependencies between definitions implemented in separate C++ files. The definitions live in header files, and the header files providing definitions used by another file are declared in the Makefile rules (gcc -MM can generate them automatically). In Java this is completely automated by not having explicit header files — the dependency problem is handed off to the Java compiler. You just pass all the source files that may depend on each other on a single compiler command line. You can imagine a tool that likewise takes all *.cpp files and produces the *.o files, handling the *.h files and the dependencies by itself. No such tool exists only because there are more advanced build tools for C++ that do this and much more.
The only thing Ant handles is associating the name of a target (possibly the default one) with the source files to be passed to the compiler. That’s all. It’s really no more advanced than a shell script containing the command “javac File1.java File2.java …”, maybe with some CLASSPATH. Make is no comparison to Ant — Ant cannot even be used to build C++ projects because, surprise, it doesn’t feature targets bound to physical files. That part is done precisely by the Java compiler.
Maven is different — it really is an advanced, high-level build tool. Listing Ant and Maven in one sentence as “two more approachable build tools” is just WorseThanFailure(tm), and I can’t express how stupid someone must be to say something like that. Maven, first, imposes a strict source file layout, and second, it manages external Java packages automatically. You just specify where your sources are and which packages you use — Maven takes care of the rest, including version management and upgrades.
But if you point that out, remember that the C++ world also has tools that provide advanced build systems: autotools, Boost.Jam, qmake and CMake, for example. A very important tool that solves the problem of consuming modules in C++ is pkg-config: you just add the package name to a list and you don’t have to worry about specific compile and linker flags (not every package provides a pkg-config entry, unfortunately, but it has quickly become a de facto standard). You still have to include the header in your sources, of course, but that has nothing to do with the build system. And, well, is the syntax awkward? Yes. But I really think XML syntax is even worse. I once wrote a make-like tool in Tcl that was meant to grow into a highly automated, high-level build system; I just had no resources to keep working on it. I mention it only to point out that the lack of a Maven-like system for C++ is probably not this language’s biggest problem.
3. Simplicity of Source Code and Artifacts
Yes, I admit, C++ still needs a lot of work here. But if you have seriously finished some Java projects, you know that the claim that Java is only *.java and *.class files can only be true if your background is, at best, homegrown or academic. Experienced people know that a real Java project involves many more kinds of files, such as:
- *.properties files
- *.xml files
- *.jar files
- *.war files
And they really are indispensable in serious Java projects. Additionally, some people argue that having all methods defined inside the class body makes the class unreadable. I don’t know whether I agree, but I want to point out that having everything in one file is not something unanimously considered an advantage. The Ada programming language, for example, also uses separate specification (“header”) files, despite not using an #include directive.
You can also quickly point out that header files are the only addition relative to Java. The rest of the mapping is simple: *.cc is like *.java, *.o is like *.class, *.so (*.dll) is like *.jar. And if you want to bring up executable files, don’t forget that in Java’s case you either have to create one manually, as a script that calls the java interpreter with an explicit class name whose main() to run, or you get exactly the same situation by compiling to native code with gcj.
4. Binary standard
And this is my favourite: the guy completely fails to understand the difference between the “Java programming language” and the “Java Virtual Machine”. The JVM is just a runtime platform that can run programs and for which programs can be compiled. And Java is not the only language in which you can write programs for the JVM platform.
There is a variety of languages implemented for the JVM — maybe not all existing ones, but at least an interesting choice: Scala, Groovy, JRuby (Ruby), Jython (Python) and Jacl (Tcl). The interesting thing about the last three is that they are normally scripting languages. You can write a program in Tcl that looks like a normal scripting-language program, in which you create Java objects, call their methods, and even create classes and interfaces — possible thanks to the reflection feature. As long as a language is implemented for the JVM, you can write your program in that language, not necessarily in Java. It also wouldn’t be a big deal to provide a kind of compiler that produces JVM assembly from Jacl source.
On the other hand, Java is also just a programming language, and it can be implemented for any platform — it isn’t fixed to the JVM. One compiler that compiles Java programs to native code is gcj (from the gcc collection). I have even heard that compiling Eclipse with it can produce much better-performing code than compiling with javac. The resulting code is, obviously, not binary compatible with code compiled by javac.
So, unfortunately for the author, Java as a programming language doesn’t have any binary standard. The only thing that has a binary standard is the JVM — but heck, that’s just a runtime platform, for God’s sake; what’s the deal with a binary standard? Every runtime platform must have something that can be considered a “binary standard”. What, you say it has the same form on every physical platform? Well, .NET has that too, and so do LLVM, Argante, even the Smalltalk VM, and OCaml’s binary representation. Guys, come on — what’s special about the JVM’s “binary standard”?
5. Dynamic linking
This was partially explained with C++’s module problem. Yes, C++ still uses the old C-based dynamic libraries, which simply means that if some feature is resolved by the compiler and cannot be put into a *.o file, it automatically cannot be handled by dynamic libraries either. But heck, what “DLL hell”? Looks like this is another of those guys who think C++ only runs on Windows. On POSIX systems there is really no such thing as “DLL hell”.
And again, this is specific to the JVM. Yes, unfortunately: Java programs compiled to native code suffer exactly the same problems as C++ does on native platforms.
Or maybe this is about dependencies and versioning. Oh dear… you really shouldn’t have brought up a problem like that. It’s one of the properties of Java’s “modules”, and one of the problems you quickly and roughly run into when you work with Maven. Imagine: you have a Java library, and your code uses a class this library provides. You write “import com.johndoe.bulb.*” in your code (about 90% of Java “coders” have no clue that this is the same as “using namespace com::johndoe::bulb” in C++ — that is, nothing more than name shortening). But your code uses some feature that an earlier version of the library doesn’t provide. Now: can you specify anything in your sources that would require a particular version to be used? Multiple versions, slotted versions, pinned versions — all this can be done, but only as long as Maven manages things for you. That only patches over a problem that arises in the programming language itself.
Do you want to see a perfect module management system? Look at Tcl. Its “package require” command specifies the package and, optionally, its minimum required version (creating packages before 8.5 was admittedly horrible, but 8.5 introduced a feature called “modules”: to make a package available, you just put a *.tm file in one of the search directories). In the Java language you just don’t specify the package (in the physical sense, not the Java sense — maybe it should be called a “jackage”, like “jinterfaces”?); instead, every package reachable through CLASSPATH is searched for the symbols used in the source file being compiled. In the default environment you simply cannot specify the version of the package you actually use. Worse, you can accidentally have multiple such packages on your system, and the first one found while searching the CLASSPATH is taken as the required one.
Why so many words about this? Mostly because this is exactly what “DLL hell” means, if you just take the time to look the term up. By the same token, Java has a “CLASSPATH hell”.
6. Portability

This is the funniest fun ever.
Comparing the portability of a programming language — one generally specified so that it can be implemented, with best effort, as the fastest language on each particular platform — with the portability of a one-platform language?
In the English of software development, the verb “port” means to take a source project and adjust it to the needs of a platform different from the one it was initially created for. For example, you may have a Linux application and “port” it to Windows or to QNX. So “portable”, in the software-development-specific meaning, means that an application written in the language can — at least potentially, preferably easily, ideally effortlessly — be ported elsewhere.
If you want to measure the true value of “portability”, the only way is to count how many runtime platforms can currently be considered important on the market and have any higher-level programming language implemented for them, and then check for how many of them the particular programming language has been implemented. If we say there are about 10 such platforms — say .NET, LLVM, Windows/x86, SunOS/SPARC, SGI, MacOS/PowerPC, JVM, Linux/ARM, Argante and VxWorks/PowerPC — then C++ is currently implemented for 8 of them, so its portability is 80%. For the same set of platforms, the Java programming language achieves 10%.
Worse, if you seriously compare a natively compiled language with a virtual machine, note that the JVM itself is not implemented on all of these platforms either. From that point of view — as long as I haven’t made a mistake — it only achieves 70% portability. And I haven’t even mentioned some limited platforms for which C++ is implemented with a restricted feature set — for example, ones where you cannot dynamically allocate memory (which doesn’t make the implementation non-standard; it just means every call to “new” results in std::bad_alloc and every call to “new(nothrow)” results in a null pointer).
Well, platform specifics, so many #ifdefs, etc. — guys, come on. Maybe that was true 10 years ago, or maybe it applies when very detailed platform specifics are needed for best performance or, say, specific lock-free code (CAS) — though in Java you usually don’t think about that, simply because if you use Java you don’t think about performance. I understand there are lots of #ifdefs in header files, but please: if you don’t like #ifdefs, just don’t look into the header files. They use these platform specifics precisely so that your application code performs well without having to use them itself. By the time of the C++11 standard, all compilers were essentially fully C++98 compliant, and no serious software producer uses a compiler that doesn’t support it.
You may say this portability means that in C++ you must run the complete test procedure on each prospective new platform (that is, always do some porting work), while Java bytecode runs and behaves the same everywhere, so you don’t have to test the program on every platform. If you really think so, you are more than childishly naive… Consider something as simple as the filesystem. Try to access a file given its path on POSIX systems and on Windows. It’s simply a “no way”, in both (standard) C++ and Java: a Windows-specific path won’t work on POSIX and vice versa. The only way to be portable is to take some base path from which the others are derived, then append path elements to compose the full path. Say we have a “base” variable holding the base directory — you can do it portably (yes!) in Tcl:
set dir [file join $base usr home system]
and in C++ with boost.filesystem:
dir = base/"usr"/"home"/"system";
but in Java you have to work out yourself how to compose the path — in a string. Ok, the system gives you a property with the path separator character of the current system, and you “only” have to worry about gluing the string together properly (Tclers and Boost users will still be ROTFLing). Just pray your application never runs on VAX VMS, where the file path “alpha/beta/gamma/mu.txt” looks like “[alpha.beta.gamma]mu.txt”. Do you still want to say something about Java’s portability?
7. Standard Type System
Facepalm. One of the most important things in the C++ standard is the standard type system. Sure, it doesn’t always come with a fixed size for every type, but who says that stops it from being a standard type system?
Actually, this slide doesn’t even talk about a standard type system — it talks about features of the standard library. Maybe C++ deserves an XML parser and a database connectivity library in its standard library, but heck, that would first require the standard to be modularized. It’s very hard to define the C++ standard that way, and hard to convince the standards committee to do it. And it can’t be done without modularization, because otherwise lots of C++ implementations would have to declare non-compliance — a C++ implementation for some tiny ARM system in a refrigerator does not feature database connectivity. And even if it did, a lame-legged dog wouldn’t use it.
Actually, there are lots of C++ libraries for XML parsing and database connectivity. Maybe simply nobody needed them as a standard. I’d even guess they are in Java because the language had to ship with some initial libraries, or it wouldn’t have attracted attention. Same with GUI. I really don’t see a problem here — and somehow hardly anyone writes GUI programs in Java anyway. If you think of Eclipse, don’t forget that SWT, which it uses for the GUI, is implemented entirely on the native platform (on Linux, for example, on top of Gtk). Why SWT, and not the “standard” Swing or AWT? Well, guess.
8. Reflection

Hello? Is anyone there? Is this reflector even lighting up? Ah, well…
Do you remember ever using reflection in your Java application? I mean a really serious Java project, where you write an application running in a web application server — no other kind of Java development is serious. So, did you? Of course not. Why? Simply because it’s a feature “for future use”. Some tools use it — Java Beans, for example. But user code, application development? Guys, come on: if this language required users to use reflection in their application code, they would quickly kick it in the ass and use another language.
Reflection is something that lets a language easily implement things like runtime support, application plugins, or more advanced frameworks. Compared to Java, C++ has a much wider range of uses in the software world, and not every one of them needs this feature. But again, where it is required, there are libraries for it — the Qt library, for example, features reflection, at least for QObject-derived objects, and there’s rarely a need for anything more. So, you need reflection? Use the right library. The fact that Java supports reflection by default doesn’t mean it’s such a great and wonderful language; it means the use of Java is limited to cases where reflection makes sense, or at least where it can sensibly be implemented.
For a usual Java programmer it is of practically no use.
Although performance is typically not considered one of the benefits Java has over C++, Purdy argues that garbage collection can make memory management much more efficient, thereby impacting performance.
Rather "thereby making the performance a little bit better than horrible". Of course GC improves the performance of memory management (this can even be easily shown with a GC for C++, especially if you compare it to shared_ptr). It doesn't come free of charge, though: the price is increased memory use, since some memory is always being held that has not yet been reclaimed. Unified memory management also helps Java a bit. But Java easily wastes this performance through the overhead built into java.lang.Object, from which every user class is implicitly derived, and which increases memory usage even further. And consider how high memory use degrades performance by itself! It may not hit the application directly, but it definitely hits the system, and that way it can hit the application from behind.
In addition, Java is multi-threaded while C++ does not support multi-threading.
Yes, at least if you mean "standard C++98" (this changed in C++11), but somehow that did not stop people from writing threaded applications in C++. So what's the deal? It's easy to say, suggestively, that C++ "does not support multithreading", but if you said "it's impossible to write multithreaded C++ programs" you'd be lying. So isn't it better to just shut up?
Do you want to see how threads can be supported by a language? Look at Ada. Do you want to see how multicore programming can work? Look at a Makefile. Yes, really. This simple task-automation tool (because that's what make fundamentally is) can run tasks in parallel, as long as they are independent. That is real support for threads. Java's thread support is just a thread library plus one little keyword that performs acquire/release for a whole method or block (provided only as a convenience, because Java has no RAII/RRID; in C++ this thing can be implemented in a library, too). I completely fail to see how this is any better than Boost.Threads, other than the fact that Boost.Threads was not in the standard library (in C++11 it is).
C++’s thread safe smart pointers are three times slower than Java references.
Well, it does look strange that he first says C++ does not support multithreading and then that C++ has something thread-safe… But really, only three times slower? They are ten times slower than plain C++ pointers under Boehm's Garbage Collector. And one more important thing: this guy is probably talking about shared_ptr, not unique_ptr, which has no reference counter and no synchronization at all, so it costs the same as a plain pointer.
I suspect the poor performance of shared_ptr comes from the fact that it bolts the reference counter onto the object as a separate allocation. std::make_shared should give better performance, because it can allocate one big piece of memory holding the refcount and the object together in a single block, which at least reduces the number of indirections. Still, compiler support may be required to optimize away unnecessary refcount updates.
And Java has HotSpot Java Virtual Machine (JVM), which features just-in-time (JIT) compilation for better performance.
For "a little bit better than horrible" performance, did I already say that? Ok, no kidding: yes, JIT really does improve performance; an application running on a long-running application server does not suffer any big performance problems compared to a similar application in C++. There are just three small problems:
- It doesn't change the fact that these Java apps still gorge on memory at a terrible pace.
- It has to be a really long run. A freshly started server performs really poorly, and it improves only as the server keeps running.
- C++ can take advantage of JIT compilation too, with the right toolchain. LLVM, the infrastructure behind the clang compiler, provides a JIT engine, and running code through it yields overwhelming performance. The important thing is that it operates on code that was already compile-time-optimized.
Well, if you are talking about the Java programming language, it does not provide any safety by itself. The safety is provided by the JVM, so it could be made available for C++ as well (as long as someone bothers with the implementation :).
As for the 5 reasons why Java still has ground to make up on C++, there's nothing for me to say, except to remind you that there are still many people claiming Java can be a very good language for real-time programming and that its performance will get better and better. The funniest thing is that I heard exactly the same 10 years ago. And Java's limitations are still with us.
Don't take the things on these slides too seriously. They do look like a good summary of the differences between Java and C++. But if you mention them when applying for a C++ job (when you apply for a Java job, you're unlikely to be asked), you may be taken for an amateur.
Java was supposed to rule the world of software development. It didn't. Not because it failed to overcome its limitations; it didn't make it because a market for C++ grew rapidly, and the users of consumer electronics, by now stuffed with software, had no intention of adjusting to Java.