… and there was never any attempt for that.
Depending on which ranking you look at, Java is either the most popular programming language in the world, or one of the most popular. No matter how trustworthy these rankings are, it’s undeniable that Java is big use and big business. And it has gained popularity very rapidly, considering how old it is, and especially how big its performance problems were at the very beginning.
It’s funny, however, that this popularity is attributed to various traits that Java… well, doesn’t have. It’s said that it’s simpler, that it’s more of a high-level language, that it’s a true object-oriented language, and that it’s more efficient for the software business (“time to market”). Actually all those explanations are bullshit, except the last one – but the last one is just a redirection, not an explanation.
As far as “simplicity” is concerned, you can mention, of course, that the strict split into builtin value types (int, float) and library-defined class types makes the language simpler. The default garbage-collected model of objects and references also makes it simpler. But if you take a closer look at Java 8 and compare it to C++11, you quickly come to the conclusion that you can already forget statements like “Java is simple”.
You have to realize that a “high-level language” is a language that uses high-level constructs reflecting the logical constructs in the human mind. The function-based nature of the C language, the class-based nature of Java, and the string-based nature of the Tcl language are all the same as the bytecode-command-based nature of assembly languages: it’s simply a low level. A low-level language isn’t necessarily an “assembly language” or “system language”. It’s a language that is based on one strict, simple “nature” that is used to implement everything.
So, when we call a language “high level”, what we really mean is that it should represent some basic logical concepts of data types (such as number and string) and of the execution environment (threads). Not necessarily unlimited integers, but at least the ability to easily create a data type that limits values to a required range (in contrast to having integers for everything, with the ability to use a value out of the given range). One of the best-known models of such a language is Ada. When it comes to a model of an “object-oriented” language, the most famous one is Smalltalk.
The problem with Java isn’t that it’s not “like these languages”. Java might have been a high-level language and a true object-oriented language while still having C++-derived syntax and possibly other traits borrowed from C++. The only reason why this didn’t happen was that the intent of Java’s creators was to make a better C++. And I didn’t say that they did it – at best they created a “better C”. And here are the explanations why.
1. True object oriented language
I’m ambivalent about whether saying that Smalltalk is the only true object-oriented language is a truism or an exaggeration. Nevertheless it can be a very good model of an object-oriented language and a source of information about what a true object-oriented language should have: it should rely on the object itself to perform a task and expect that the object does it its own way (that’s why it’s called a “method”).
In order to simplify adding common “methods”, the term “class” was introduced, and it’s used by the majority of OO languages – though not by all of them, which means it’s not an obligatory part of an OO language. The most important thing is just this: rely on objects. Hence the name. Of course, those OO languages that rely on classes have evolved lots of rules about using classes and managing software changes around them – but the central part of OO isn’t the class, it’s the object – however defined.
So, a true object oriented language is the language that:
- Relies on objects (not on classes)
- Doesn’t use any other entities than referring to objects
What does “relies on objects” mean? For example, no matter what the meaning of a particular thing in the program is, all operations are defined per object, including whether the object can do them or not. So, for example, in every place where some data is required, there can be an integer, a string, a file stream, a book, a procedure block (say, a lambda function) or even a “nil” – the only limitation is that not every operation will accept it. But that’s not because of the object’s type (there’s no such thing; in some rare cases it may be its class), but because the operation that a particular procedure would perform on the object is not supported by it (“duck typing”). For example, in Smalltalk when you do a+b and assign the result to c, the + method is called on the object designated by ‘a’ with one argument, ‘b’. It really doesn’t matter what they designate; it’s only required that the operation a+b can be performed on them – in particular, that + can be done to ‘a’ with ‘b’ (yes, Smalltalk supports defining operators as methods of objects, should that be any surprise).
The “class” in such a language is just another object, used as a delegate that provides the methods to be invoked when an object receives a particular “message”. But the fact that an object is of some class, or of a class derived from some class, should rarely be a checked condition. Maybe in some cases it makes sense to check whether an object understands some set of messages (a “protocol”). But usually you should just call the method and respond to the exception if the object doesn’t understand it.
Second, in a true object-oriented language everything should be an object, with no “exceptions”. In Smalltalk everything is an object – an integer, a string, even a method, even a class, and even “nil”, which is considered something that is “not an object” (say, a universal marker for a place where an object cannot be put – it has its own unique class). If not, we may still talk about a “true object-oriented flavor”, but not about a true object-oriented language.
Third, as a consequence of relying on what an object can do in response to a method call, the only typing in such a language should be dynamic typing. If you want to make use of anything that relies on objects, at least for that part of the program you should forget static types. So, whatever relies on static types is not object-oriented at all.
There exist various OO systems that can be considered “true object-oriented”, even if this concerns only part of the language. There’s Objective-C, for example, where the whole object system is a kind of “alien feature” applied as a patch to the C language, and there’s just one static type of reference-to-object, named “id”. A similar feature exists in Vala and C# – the “dynamic” keyword. You can use a variable of such a type, assign an object to it, and call a method – the call will be resolved at runtime. It’s not required that the method be known at the time the call instruction is compiled.
So, in Java there are entities that are not objects, it uses static types also for classes, and there’s no way to call a method on an object if the static type of the reference does not define it (not even as an alternative, as in Vala and C#). Theoretically you can do it using reflection (by searching through the object’s methods), but there’s no direct language syntax dedicated to that (and to some extent, some C++ libraries also feature reflection). So, the object system in Java isn’t “true object-oriented” – it’s C++-like.
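To make this concrete, here’s a sketch of what a runtime-resolved call looks like when forced through Java’s reflection API. The `send` helper and its naive overload resolution are my own illustration, not any standard API – it’s the closest plain Java gets to a dynamic “message send”:

```java
import java.lang.reflect.Method;

public class DynamicCall {
    // Hypothetical helper: resolve a method by name at runtime and invoke it,
    // the way a dynamic language resolves every call.
    static Object send(Object target, String name, Object... args) throws Exception {
        for (Method m : target.getClass().getMethods()) {
            if (!m.getName().equals(name) || m.getParameterCount() != args.length)
                continue;
            // naive overload resolution: every argument must fit its parameter
            Class<?>[] ps = m.getParameterTypes();
            boolean ok = true;
            for (int i = 0; i < ps.length; i++)
                if (!ps[i].isInstance(args[i])) { ok = false; break; }
            if (ok) return m.invoke(target, args);
        }
        throw new NoSuchMethodException(name);
    }

    public static void main(String[] args) throws Exception {
        Object s = "hello world";   // static type: Object
        // s.indexOf("world") would not compile - Object defines no indexOf.
        // The reflective send resolves it at runtime instead:
        System.out.println(send(s, "indexOf", "world")); // prints 6
    }
}
```

Note how much machinery is needed for something that is a single, implicit step in Smalltalk.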
The creators of the standard libraries in Java were likely completely unaware of this. The majority of APIs in Java libraries strongly rely on “OO features”, which in this language means being based on classes. Java has this OO feature as a “central feature”, something the whole API relies on. Such a thing makes sense in Smalltalk, or even Objective-C – but in Java, APIs like this are exactly as clumsy as in C++, due to weak OO features (MFC is one of the direst examples of this mistake). From the OO design point of view this is the most stupid language design decision ever made – but that has nothing to do with the business point of view.
The fact that a method can only be called when there’s a definition for it in the static type has important consequences. For example, in Java you can keep an object in a variable of type Object. But you can’t call a method named indexOf on it, can you? Of course not. The only way is to first cast the value to a reference of type String (say, that’s what you meant), and only then can you call the method. That’s because no method named indexOf is defined for Object.
This causes trouble, for example, when a framework gives you access to a stored object through some base class, but it may actually be an object of a class derived from it. Even though you know it’s an object of your class, you can call your new methods only after you cast it to your class. This is atypical for the Smalltalk way, but in C++ and Java it produces clumsy APIs.
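A minimal sketch of this situation, with hypothetical `Widget` classes standing in for any framework’s base and user types:

```java
import java.util.HashMap;
import java.util.Map;

public class FrameworkCast {
    static class Widget { String title() { return "base"; } }
    static class FancyWidget extends Widget { String glow() { return "glowing"; } }

    public static void main(String[] args) {
        // A framework-style store that only knows about the base class:
        Map<String, Widget> store = new HashMap<>();
        store.put("w", new FancyWidget());

        Widget w = store.get("w");
        // w.glow();                        // does not compile: Widget has no glow()
        String s = ((FancyWidget) w).glow(); // cast first, then call
        System.out.println(s);
    }
}
```

In Smalltalk the cast simply wouldn’t exist – you’d send `glow` and the object would either respond or raise doesNotUnderstand:.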
This fact also strongly influences the hierarchical structure of a design, and method naming. For example, if you want to call a method – one that will then be overridden by the user – in C++ (and Java), you have to have some class that defines it, and call the method through a pointer to that class. Then your class must derive from that class, because that’s the only way the call can effectively be redirected to your implementation. None of this is true for Smalltalk. In Smalltalk you just get the object and call the method; there’s no such thing as a “pointer to some type” in Smalltalk – just a variable that designates an object.
But, on the other hand, you cannot name your method just “open”, which – depending on the context – may be expected to open a file, a window, a gate of the garage, or whatever else. In C++, if you want to open a window, you get the window, which is known to be at least of a class derived from Window, so you know this method can only be an override of Window::open. File::open or Gate::open may exist simultaneously and none of them has anything to do with the others. In Smalltalk, if you had a method named “open”, and the code called “open” on an object, then any possible version of “open” (although the number of arguments matters – here it’s zero) is accepted, no matter what you meant in the particular case.
All these things only confirm what I’ve already stated: the object system in Java is exactly the same as the one in C++. And static types in an object-oriented system are a burden, not a helping feature.
So, of course, it’s sad that it took the Java creators some time to realize that an OO language featuring static types must have something like C++ templates. Java poses as a “real OO language”, mainly by making the API depend only on objects so that everything is done the OO way – but practically this has very little to do with OO itself. It’s always one of: doing it within the frame of the “class” term, creating some weird term to excuse doing something by only playing with objects, or just patching the language with some specific, unique feature that helps with one particular idiom.
If you ask a question like “what’s the damned reason for this language to have these jinterfaces” or “why must I make a whole new class just to pass code for execution to a function”, you usually get the answer “because this is an object-oriented language”. It’s exactly the same stupid bullshit you hear from some undereducated C++ fans, as if overloading and defining operators were “object-oriented” traits of that language.
Jinterface? Well, this is just something that the Java language understands as an interface – not what an interface in software development really is. The “normal” explanation of what an interface is: the set of, say, “ways” to use a particular type or set of types. If this were something like an “interface for a class”, at best it might be something that collects the methods (and their signatures) that a class should define (a “protocol”), which a class is said to “conform to” if it defines all of them – but not something the class explicitly declares. If something containing base definitions is explicitly declared to be part of a class’s definition, it’s a base class (although only from the static-type point of view – in Smalltalk you don’t have to declare anything to be able to call a method on an object). Java generally introduces several entities proving that its authors didn’t understand their correct meaning – like, for example, the “jackage”: something like a namespace, but in the Java world it’s called a “package”. Anyway, back to the point.
So, how much does that “jinterface” have to do with OO? From the OO perspective, it’s just an abstract class in which all methods are abstract (and even this definition is more C++-like than OO-like – in Smalltalk there’s no such thing as an “abstract method”; a method can be called without restrictions, and in the worst case it just redirects to doesNotUnderstand:). As classes are just a “helper feature” for OO, not a precondition, so is the jinterface. The fact that this “interface” plays almost the same role as a class in Java (it can be used as a type for references and provides definitions of methods, which is enough to logically treat it as a class) only confirms that it’s just a special kind of class (and it’s a class in the C++ sense). In practice, it’s only a way to overcome the limitations of classes, like the lack of multiple inheritance. In Smalltalk we could at best have something like the Objective-C protocol, that is, a set of methods all of which should be implemented in a class. But conformance means that all the methods are defined, not that a class explicitly declares it – older classes can be checked against newer protocols.
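To illustrate the equivalence, here’s a sketch with hypothetical `Openable` types: from the caller’s side, an interface and an all-abstract class behave identically, which is the sense in which a jinterface is just a special kind of class:

```java
// A Java "interface" seen from the C++ side: an abstract class in which
// every method is pure virtual. For a caller holding a reference, the
// two forms below are interchangeable.
interface Openable { String open(); }

abstract class AbstractOpenable { abstract String open(); }

class Door implements Openable {
    public String open() { return "door opens"; }
}

class Gate extends AbstractOpenable {
    String open() { return "gate opens"; }
}

public class InterfaceDemo {
    public static void main(String[] args) {
        Openable o = new Door();           // the interface used exactly like a class type
        AbstractOpenable g = new Gate();
        System.out.println(o.open() + " / " + g.open());
    }
}
```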
And what about listeners? If you think they are more object-oriented than lambda functions, as lately added to C++11 (and Java 8 as well), you’re completely wrong. In Smalltalk – and likewise in Objective-C – you can treat a block of code as an object and also call methods on it. This functions more or less like lambdas. So it looks like “lambdas” are much more OO than listeners. Java 8 has already admitted that by introducing lambdas. And listeners, in order to be usable, had to be armed with additional language features in Java: anonymous classes and their closures (a method created in an anonymous class automatically has access to the variables of the method in which the object was created). An anonymous class that derives from some explicit class, especially with this additional closure, is something completely unknown to all other OO languages. And it still has nothing to do with OO features. It’s just “a set of features to make the use of listeners easier”.
That’s not all. If lambdas had been in the language from the very beginning, maybe this could have been done with some special, unique method name. But now the creators have tried to make lambdas usable with existing APIs that use listeners – so they just “adapt” them to the required class. This had to be somehow composed with the existing class-based replacement for function pointers (actually a virtual method is nothing more than an index into a “virtual table” of function pointers) and with overloading – both traits borrowed from C++ and not existing in Smalltalk.
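The two forms side by side can be sketched as follows; the `ClickListener` interface and `register` function are hypothetical stand-ins for any listener-based API:

```java
public class ListenerDemo {
    // A listener-style interface as used before Java 8
    // (incidentally, exactly the "functional interface" shape lambdas adapt to):
    interface ClickListener { void onClick(String target); }

    static String log = "";

    static void register(ClickListener l) { l.onClick("button"); }

    public static void main(String[] args) {
        final String prefix = "clicked: ";

        // Pre-Java-8: an anonymous class with a closure over 'prefix'
        register(new ClickListener() {
            public void onClick(String target) { log += prefix + target; }
        });

        // Java 8: the same listener "adapted" from a lambda
        register(target -> { log += " / " + prefix + target; });

        System.out.println(log);
    }
}
```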
As a result, all OO traits in Java are:
- done C++ way and very far from Smalltalk
- armed with additional specific problem oriented features
- based on classes, not on objects
And all the things that “force using OO style” practically just force using class-based features.
And I repeat: don’t get me wrong. I’m not saying that Java is bad because it’s not like Smalltalk and is much closer to C++. It would even be funny to say that a language is bad because of that, as I am a C++ developer and a great fan of that language. So let it be obvious that what I mean is this: the biggest power of Java comes from the fact that it’s based on C++. What makes me laugh is the whole hypocrisy that tries to deny it.
2. High level language

The C language is accused of giving too many roles to integer numbers and things “alike”. This “alike” includes pointers. And these complaints also extend to C++. Actually, in this “high-level wannabe” C++ we have characters, which are integers; booleans, which are integers; bit-flag containers, which can also only be integers; pointers, which are also more or less integers; and of course integers themselves. I bet every experienced programmer who “feels” what “high-level language” really means knows that this was done in C because it was easier to implement in machine terms, not because it has anything to do with program logic. From a high-level language we should expect bit-flag containers that are just containers of bits, strings as value types no matter how many characters they have (including 0 or 1), booleans that are just two values of their own type, and pointers that cannot be “arithmeticized”. Of course, we don’t necessarily expect integers of unlimited range (“gmp” integers can be added optionally), but at least that integers are used only as integer numbers: we can do arithmetic on them, but nothing else.
So, of all these things, the only one Java has “achieved” over C++ is the lack of pointer arithmetic. All the rest of the stupid things are hilariously incorporated.
Ok, let’s even admit that in Java the boolean and char types are completely separated from the integer types. But how many people have noticed that it’s the “character type” itself that is characteristic of a low-level language – not the fact that it’s treated as an integer number?
Could we live without a char type? More than obviously – of course we could. If we had a language-builtin “string” type, which is a value type, which may be empty, may contain just one character, or may contain multiple characters – why would we need a char type? So what would str[i] return (or, say, the “at” method)? A string! A string with just one character. Just like Tcl does in its [string index $str $i] instruction – which is only a simplified version of [string range $str $i $i]. Moreover – because Tcl doesn’t have any “char” representation, it was a piece of cake to add UTF-8 support to the language, completely transparently to all existing code; it was just a matter of changing the implementation. In Java, meanwhile, you have the name “char” coming from C (and C++), in which it was an 8-bit integer, while in Java it’s 16-bit (ha! see how smart – they declare that it’s 16 bits, but not an integer :D). Of course, this doesn’t prevent the use of UTF encodings (Java’s String uses UTF-16 internally), but what do you expect to get when the character at a given position happens to be one that needs 32 bits? It’s impossible to return that character because it wouldn’t fit in a char value. So String has a method named charAt, which returns a char value that is either the character at the given position, or a surrogate, if the character cannot be represented by a char. This can be checked, of course, and if needed there is another method, codePointAt, which this time returns an int – the numerical value representing the character. The int type is declared to be 32-bit, which is enough to represent any Unicode character – but, well, not as a character. You can also get a string containing just one character, but heck, to get a one-character string from string s at position N, you have to do s.substring(N,N+1).
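Here is the surrogate problem in action, using a character outside the 16-bit range (the emoji U+1F600 as an arbitrary example):

```java
public class CharDemo {
    public static void main(String[] args) {
        // U+1F600 does not fit in a 16-bit char, so String stores it
        // as a surrogate pair - two char values:
        String s = "a\uD83D\uDE00b";           // 'a', U+1F600, 'b'
        System.out.println(s.length());        // 4 - counts chars, not characters
        System.out.println((int) s.charAt(1)); // 55357 - a surrogate, not a character
        System.out.println(s.codePointAt(1));  // 128512 - the real code point, as int

        // And to get "one character as a string" you really have to do:
        String one = s.substring(1, 1 + Character.charCount(s.codePointAt(1)));
        System.out.println(one.codePointCount(0, one.length())); // 1 character
    }
}
```

A Tcl-style string-only design would make all of this invisible to the programmer.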
Why does Java have this solution? You can look for excuses in its use of the UTF-16 representation internally, and there are some reasons for that – but it completely doesn’t matter and doesn’t explain why Java contains the charAt() method and the char type. This has nothing to do with converting to an array of bytes, because that should be treated as a “specific representation” into which you shouldn’t need to look (and in Java it is so). Why would you need just one character at a given position? If it’s to glue it into some other place – you can glue it as a string, too. If it’s to convert to bytes – you have a much better solution: “encode” the string. A string is heavier than a character? Smalltalk already found a solution for that – Java could have set the rightmost bits of the reference to mark that the value is the character itself, treated as a “string”. Anyway – there is completely no “business” reason to have a charAt() method that returns an (explicitly!) 16-bit char value. Except one – to resemble C++ as much as possible.
The String type is yet another flower. In a high-level language a string is also never an object – it’s a value. You can assign it to another variable, you can concatenate it and overwrite an existing value. There’s no such thing as a “null string”, for the same reason there can’t be a “null integer” or a “null colour”. And this is how std::string in C++ works.
Not in Java. In Java you have the same thing as in C, with just one slight exception: in the case of dynamically allocated strings, in Java you don’t have to free() them. A String is just a pointer to something; it can be null, and so it should be tested for “nullness” before doing any operation on it. Thanks to that you have lots of occasions to make mistakes, and the need to test a string for both nullness and emptiness. Not to mention comparisons – even in C++ you just do a == b. Fortunately in Java you don’t have to do a.compare(b)==0, and you can’t repeat the stupid C-derived “if ( !a.compare(b) )”, but a.equals(b) doesn’t look much better, if we treat Java as a high-level language.
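A small sketch of the resulting dance; the `name` field is a hypothetical “value not set yet”, which in Java inevitably means a null pointer:

```java
public class NullString {
    static String name = null; // "no value yet" - representable only as null

    public static void main(String[] args) {
        // The C-like ritual: test for null AND emptiness before any use
        boolean blank = (name == null) || name.isEmpty();
        System.out.println(blank);

        // Comparison must also be null-safe; name.equals("x") would throw
        // NullPointerException here, so the constant goes first:
        System.out.println("x".equals(name));
    }
}
```

With a value-type string, as in C++ or Tcl, the null half of every such check simply wouldn’t exist.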
Bit flags are an even funnier thing. The best thing I can imagine for a set of boolean flags is some container of bits: either a vector of boolean values, or a constant-size bit container with compile-time constant indexing. And this is even how C++ does it, with its vector<bool> and bitset. If you want a set of binary flags, use bitset. You can easily compare it with a mask, do shifts, selective bit replacements and so on. And you are not limited to widths that are multiples of 8.
So, this is exactly what I would expect from a high level language. Wanna flags? Take a dedicated type, bitset. Wanna number? Use integers.
Not in Java. The language itself features nothing like “bitset” (to be fair, the library has java.util.BitSet, but without operator overloading it cannot behave like one – let’s admit, in Java such a thing cannot be properly defined librarywise, so it could only have been done the usual Java way: as a builtin type), and in practice all bit-flag business is implemented just like in good old C – with integers. Java has all the bitwise operators, which are intended to work on a flag set, defined only for integer numbers, including the bit-shift operators; moreover, the right shift comes in two flavors – signed, where the leftmost bit is copied into itself, and unsigned, where the leftmost bit is set to 0. Is anyone using them? Of course, bit shifting is one of the operations done on integers at the machine level. But that can at best be used as a better-optimized division by 2 (shift right does the same as dividing by 2, but much faster). Effectively this exists to make algorithms maximally efficient. What is such a feature worth in a language in which performance doesn’t really matter? Moreover, Java still has optimizers (even though only as JIT), so this kind of optimization could still be done automatically. The only reason for having the &, |, ^, << and >> operators in C was to provide access to low-level assembly instructions. They may make sense in a high-level language, as long as you explicitly declare that a value is a set of boolean flags and you are doing an operation on the value with a mask. But not as “and, or, xor, shift” – rather as “set bits”, “clear bits”, “extract bits” and “slice the bitset” (shifting can be used to implement “slicing”).
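The contrast can be sketched like this – the C-style int constants on one side, and a dedicated container (here `EnumSet`, the closest idiomatic Java gets to a typed flag set) on the other; the `Perm` names are made up for the example:

```java
import java.util.EnumSet;

public class FlagsDemo {
    // The C way, as commonly seen in Java APIs: int constants and bitwise ops
    static final int READ = 1, WRITE = 2, EXEC = 4;

    enum Perm { READ, WRITE, EXEC }

    public static void main(String[] args) {
        int flags = READ | WRITE;                  // set bits
        System.out.println((flags & WRITE) != 0);  // extract a bit: true
        flags &= ~WRITE;                           // clear a bit
        System.out.println((flags & WRITE) != 0);  // false

        // A dedicated container says the same thing without masks:
        EnumSet<Perm> perms = EnumSet.of(Perm.READ, Perm.WRITE);
        perms.remove(Perm.WRITE);
        System.out.println(perms.contains(Perm.WRITE)); // false
    }
}
```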
A similar thing happens with indexOf in String. From a high-level language you’d expect that when indexOf informs you that the searched character was not found, it won’t just return -1, letting you keep doing arithmetic on it. You’d expect either an exception (a bad idea in this particular case), or some special value that leads to the result if found, and leads nowhere otherwise. A high-level language should afford the concept of “optional” values – and in Java their role could be perfectly played by the value wrappers. So, if indexOf returned Integer (not int), it could return null when nothing is found. You’d still have to check it, but at least blindly adding something to the result wouldn’t produce a stupid but plausible-looking integer – it would produce a NullPointerException instead.
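Here’s the classic way this bites, with a deliberately wrong search character:

```java
public class IndexOfPitfall {
    public static void main(String[] args) {
        String s = "hello.txt";
        // Forgetting to check for "not found" silently yields nonsense:
        int dot = s.indexOf(','); // wrong character - returns -1, not an error
        // -1 + 1 == 0, so this "works" and returns the whole string:
        System.out.println(s.substring(dot + 1));
    }
}
```

Had indexOf returned Integer, the `dot + 1` line would have thrown NullPointerException at the exact point of the mistake, instead of letting the bogus value propagate.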
And, finally, the integer numbers. The first weird thing is that, having given all the numeric types the same roles as in C, they still didn’t add the unsigned modifier. This changes the rules a lot (and it’s why Java also has the unsigned right-shift operator), and it looks ridiculous to have types named “byte”, “short”, “int” and “long” that are 8-, 16-, 32- and 64-bit types respectively. Probably in the future we’ll also have “quad”. Ok, I understand that there must be a type named “int”, and that it must be 32 bits. The freedom of definition for integer sizes in C and C++ never worked out in practice. Of course, there was a change between 16-bit and 32-bit systems, where the “int” type changed its size from being equal to short to being equal to long. But the practice after the introduction of 64-bit machines is that for C++ compilers “int” is still 32-bit and only “long” changed its size to 64 bits, while the C++11 standard introduced a new “long long” type to represent an integer wider than int or long (on 64-bit systems it’s actually the same as “long”). So, in practice, what’s the point of giving these integers so many different names? Their usefulness is practically none. The most used integer type is int; in some special situations there’s a 64-bit type (long in Java, long long in C++ – yes, I know, C++11, but long long existed long before as an extension). Types like “short” or “byte” are something you only see in libraries that interface to some C library. So, the only sensible set of integer types for a high-level language is: int, which is 4 bytes by default, then integers like int1, int2, int4 (== int) and int8, or even int16 – for cases where they are really needed. So why these funny names? The same reason: to be like C++.
The “byte” name was already something that used to appear as a user-defined type aliased to “unsigned char” (although in Java it’s still signed), and it was a good enough replacement for C++’s “char”, for which the better assignment would have been a UCS-2 character.
I agree that this set of names is just as stupid in C++. Of course. But this was already evident at the time Java was designed. C++ must keep them because it’s still being implemented for various platforms and still carries some C legacy. But even C++ has the int8_t, int16_t, int32_t and int64_t types (the last one defined as long long on 32-bit systems and long on 64-bit systems, causing problems with printf formats). Java’s designers could have done it like this, adding just a universal “int” equal to int32_t – especially as they intended it to work on only one platform. They would have done it, if their goal had been to make a high-level language. But they just wanted to make a better C++.
3. Pointers and null
What is NULL? It’s something that was introduced in the C language. If you think it has anything to do with Smalltalk’s nil, you’re completely wrong. There’s no such thing as “not a pointer” in Smalltalk. Well, you can say there are no pointers in Smalltalk (I prefer to say that all variables in Smalltalk can only be of a pointer-to-object type), but this is how it works there: this “not an object” is just a unique object that does not respond to any calls. You can still try to call it, though. This won’t result in any crash or any data destruction.
Some may say that’s obvious. Not exactly. When you have NULL in C, you should check a pointer against NULL before dereferencing it (or somehow be sure that it’s not NULL by other premises). In Smalltalk you can do such a check (for example, when your function allows nil to be passed in place of an object), but normally you don’t have to. You can always blindly try to call a method on an object – and it may fail because the object is nil, or because the object does not understand the method specification (I know it’s called a “selector”, but I’m trying not to use terminology specific to Smalltalk where Java and C++ have different names for the same things), or it may even fail because of some runtime condition – handling for all of these should be somehow planned. In C you have all of that too, but NULL is special – you shouldn’t try to dereference it, because that results in undefined behavior (at least on a POSIX system with virtual memory on, we know it results in termination on SIGSEGV).
So, Java just changed this undefined behavior into NullPointerException (if we agree that SIGSEGV, or something similar on Windows, is what you really get, rather than undefined behavior, this is just a cosmetic change). For example, to check whether a string designated as s is equal to “equal”, you do the following in various languages:
- In Smalltalk, you do s = “equal”
- In Tcl, you do $s == “equal”
- In C++, you do s == “equal”
- In C, you do s != NULL && 0 == strcmp(s, “equal” )
- In Java, you do s != null && s.equals( “equal” ). Or some hackers propose “equal”.equals(s)
So, compare the way to do this in Java with the rest of the languages, and you’ll see which of them is its closest equivalent. Incidentally, the equals() method takes Object as its argument, even though the intent is to compare with another string. Well, in C you can also pass a void* value as an argument to strcmp.
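The Java variants from the list above, in runnable form:

```java
import java.util.Objects;

public class EqualsDemo {
    public static void main(String[] args) {
        String s = null; // e.g. a lookup that returned nothing

        // s.equals("equal") would throw NullPointerException here,
        // so Java code ends up with one of these forms:
        System.out.println(s != null && s.equals("equal")); // false, no NPE
        System.out.println("equal".equals(s));              // the "hacker" variant
        System.out.println(Objects.equals(s, "equal"));     // since Java 7
    }
}
```

Note that even the library eventually had to grow a null-tolerant Objects.equals – a tacit admission that the C-like null string is a constant hazard.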
4. Reflection

Before stating whether the “reflection” feature makes Java high-level or not, you first have to realize what reflection is from the language-implementation point of view.
So, if someone has missed that part, let me remind you that both Java and Smalltalk are languages designed to work on only one platform, which is a virtual machine. That doesn’t mean you can’t find reflection in languages meant to be machine-compiled. It does mean, however, that when you have a virtual machine, you can plan it however you wish – when you have a physical platform, you usually have nothing, and the only way to provide any kind of “reflection” is some extra layer between the “train” (language) and the platform. Often at the expense of performance.
But this isn’t even the important part. The important thing is what advantage you gain from reflection (especially if you reconsider it in the frame of a high-level language). That’s why I have to remind you one more time that Smalltalk uses only dynamic typing, and the only “static type” in this language is the reference to an object. Because of that, reflection in Smalltalk is available almost incidentally – in this language it’s inevitable, in order to provide the dynamic type system. If we have a language with a static type system – as Java, C++ and even, say, Eiffel are – things change a bit. In these languages reflection doesn’t have the same usefulness as in Smalltalk, and I’d even say that reflection in such a language provides a much more limited advantage.
The only “usages” of reflection I have found so far for Java are Java Beans and the implementation of some scripting languages that deal directly with Java objects (Jython, Jacl). So, as you can see, things not connected to writing any software in the Java language at all!
Additionally, you have to pay attention to what really happens in this particular case. The java.lang.Object type is already a kind of "object orchestra". Reflection in Java is limited to exactly this: you don't have reflection for builtin value types (Java fans will say that it's still not possible to create standalone objects of these types; I prefer to say that it's rather because it's impossible to provide reflection for them). So java.lang.Object is simply the core class of the whole object system, and there's just one object system in Java. That's all. Reflection is provided for the standard Java object system (as part of the Java standard library), NOT for the Java language.
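To illustrate the point: reflection in Java hangs off the class system rooted at java.lang.Object. A minimal sketch of the standard reflection API at work:

```java
import java.lang.reflect.Method;

public class ReflectDemo {
    public static void main(String[] args) throws Exception {
        // Every object, whatever its class, answers getClass(),
        // because reflection is wired into the Object-rooted class system.
        Object o = "hello";
        Class<?> c = o.getClass();
        System.out.println(c.getName()); // java.lang.String

        // The class can then be inspected and its methods invoked dynamically:
        Method m = c.getMethod("length");
        System.out.println(m.invoke(o)); // 5
    }
}
```

Note that there is no corresponding API for a bare int: the primitive sits outside the reflected object system.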
Once we realize that, we can follow it up with the statement that in C++ you can use a variety of object systems, and the designer of such an object system might have provided some form of reflection. This is done in the case of Qt and Gtk+: reflection provided librarywise.
So now it should be clear why C++ doesn't feature reflection as a language: its runtime library doesn't provide for it, and only a very small part of the language depends on its language runtime (surprise!). These parts are exceptions and RTTI only.
If you want a comparison with a high-level language, here it is: Ada. Does Ada feature reflection? To some very limited extent, yes, but it's generally not much more than in C++. So, anyway, this feature does not make a language more or less high level.
Java features threads. Ha ha ha. Good joke.
The Java programming language provides just one thread-related feature in the language itself: the "synchronized" keyword. And it's only needed because this language does not feature RAII; with RAII, even this could have been defined librarywise. All the other stuff in this language, despite requiring some support from the language runtime, is defined librarywise anyway.
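A sketch of why the keyword has to exist: the librarywise alternative (java.util.concurrent's ReentrantLock) has no RAII to lean on, so releasing the lock must be spelled out by hand in a finally block.

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private int counter = 0;
    private final ReentrantLock lock = new ReentrantLock();

    // The language-level form: the keyword guarantees the monitor is
    // released on every exit path, including exceptions.
    public synchronized void incrementKeyword() {
        counter++;
    }

    // The librarywise form: with no RAII, the unlock must be written
    // manually in finally, or the lock leaks on an exception.
    public void incrementLibrary() {
        lock.lock();
        try {
            counter++;
        } finally {
            lock.unlock();
        }
    }

    public int value() { return counter; }
}
```

In C++, a lock guard object would do the unlock in its destructor, which is exactly the RAII mechanism Java lacks.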
Of course, being defined librarywise doesn't automatically mean that something is not a high-level construct. But it may mean that for some languages, especially when a language gives its libraries very little ability to define an API. This is how it is in C, and this is how it is in Java, because all APIs in Java must be defined in terms of classes. The only special construct, as I have mentioned, is the anonymous class, and this was the most advanced thing you could think of before Java introduced lambdas.
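A small illustrative sketch of both forms of that one special construct, passing a comparison behavior through a class-shaped API:

```java
import java.util.Arrays;
import java.util.Comparator;

public class ListenerDemo {
    public static void main(String[] args) {
        String[] words = { "bb", "a", "ccc" };

        // Before Java 8: an anonymous class, the one special construct
        // available for passing behavior into a class-based API.
        Arrays.sort(words, new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        });

        // Since Java 8: the same thing as a lambda.
        Arrays.sort(words, (a, b) -> Integer.compare(a.length(), b.length()));

        System.out.println(Arrays.toString(words)); // [a, bb, ccc]
    }
}
```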
I won't evaluate it. Just look at examples of using the Thread class, as well as some high-level concurrency tools like the Future interface. So, the same question as ever: what would you like to see in a high-level language as an implementation of concurrency?
I would like to see something like:
- An ability to define several procedures in place that will be executed in parallel.
- An implementation of futures and promises that can look in my code exactly as if I hadn't used any special tool at all: I just call functions as usual, read a value, or assign it somewhere else.
- A system of running parallel tasks that can pass messages to each other, where the language interface provides me with a clear view of how this is running.
- Maybe some additional logical parallelism features, like coroutines.
For example, I'd like my procedure to look exactly the same, maybe with some slight marker, regardless of whether I'm making a normal function call or a request-response cycle during which my procedure waits (when a timeout occurs, this "function call" results in an exception). The same regardless of whether my value comes from a usual variable or from a promise.
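For contrast, a sketch of what this request-response-with-timeout shape looks like in today's Java: the submission, the waiting, and the timeout are all explicit library plumbing rather than an ordinary-looking call.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class FutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // The "request": submitted explicitly, yielding a Future object
        // instead of looking like a plain function call.
        Future<Integer> answer = pool.submit(() -> 6 * 7);

        try {
            // The "response": read explicitly, with the timeout surfacing
            // as an exception. Nothing about this reads like a normal
            // variable access.
            System.out.println(answer.get(1, TimeUnit.SECONDS)); // 42
        } catch (TimeoutException e) {
            System.out.println("no answer in time");
        } finally {
            pool.shutdown();
        }
    }
}
```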
Why is this important? Because, first, threads are simply low-level system tools, and second, it should be the tool's problem to spread the execution across multiple threads; I, as a programmer, should only worry about whether the task gets done. Execution, splitting, joining, synchronization: all these things should be handled by the language system. I should only define a procedure; the language system should worry about parallelizing it.
So, what do we have in Java? Even though this language has lots of features that are purposewise language support, this one doesn't have any dedicated language support. Future is just an interface, Thread is just a class; if you want to do anything with them, you create or obtain an object of the class and call its methods. You can more or less achieve a "procedure split" lookalike using the listener idiom (let's name it so; Java is such a special language that every idiom in it must have a dedicated name).
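A minimal sketch of that object interface in action: creating, splitting, and joining are all just method calls on a Thread object, with the parallel procedure smuggled in through the listener idiom.

```java
public class ThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        // Thread is just a class: to run anything in parallel you create
        // an object, start it, and join it by calling methods, with no
        // dedicated language construct anywhere in sight.
        Thread worker = new Thread(new Runnable() {
            @Override
            public void run() {
                System.out.println("running in: "
                        + Thread.currentThread().getName());
            }
        });

        worker.start(); // splitting: a method call
        worker.join();  // joining: another method call
    }
}
```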
Many things can be tailored to the object interface (using a class, creating objects, calling methods, and never deleting an object), but there are many exceptions. I have already mentioned String as one of them. Thread is another such exception, because it's not simply "an object": it's something that comprises a part of the language system, and the thread object is just a reflection of it. And not the best representation of it, either. For such things the object-based interface is awkward and looks, well, very low level. Because it spells out "how to use some low-level tools to achieve the result" instead of "what the programmer's intent is when writing this code".
How much does this interface differ from, for example, the POSIX thread interface for the C language? Only in that in Java you don't deal with memory management. But not having to deal with memory management is way too little to be called a high-level language.
Look: I'm not criticizing Java. I'm not saying that Java is a bad language or anything like that. Or that Java should not be used because it's not a high-level language. I'm just stating a fact: Java is not a high-level language, and no matter how many things the JCP packs into this language in the future, it will never come even close to the meaning of "high level language".
I haven't written about many other things, like exceptions (and why the throw-new word pair in Java is like sinister-plot in the English language), weak references, or the structure of classes. You can probably find much more. The things I have mentioned are enough to confirm the main statement of this article.
On the other hand, let's note that in many other languages that also pose as high level, you can find design flaws that mean they are not, or not fully, or that their "highlevelness" is compromised. For example, in the Haskell language a string is represented as... a list of characters (the list being the basic, language-supported container). You just get characters and operate on them as a list. This way you again have a string represented as an array (ok, a list) of characters. I understand that the language needs some way to iterate over each character in a string, but Tcl can do it, too: just do [split $s ""] and you'll get a list of strings (not characters!), each being a one-character string. That's not the same as being able to iterate over a string through the list interface while accessing chars. These single characters are still strings, while in Haskell, just as in Java and C++, you have an array of characters.
Pay attention also that "proving in practice" is only valuable in commercial software development: academics may like various languages, but in that use programming languages make no money. And the practice is that in commercial development it's still the C language that enjoys the greatest trust (no, I'm not talking about use, I'm talking exactly about trust! Yes, that's sad!), and Java is trusted too, because it's much like C. C++ is taking over some of that ground, but for a particular reason: let's use C++ to get access to some high-level constructs and this way make our work easier and faster, and when a high-level construct fails, we can always fall back to low-level C-like code. In a high-level language you just have nowhere to fall back to.
Maybe then, despite the declarations, "we don't need no stinkin' high level language". Maybe people like low-level languages more than high-level ones. Maybe high-level language concepts just don't "speak" to people. I personally admit they didn't speak to me in the beginning, either. Before learning any languages for today's computers, I had used only some BASIC and then assembly language, so I was more used to low-level concepts than to high-level ones. And I still haven't found any language that can be called high level and is acceptable enough. I prefer C++ not because it is in any way high level, but because it provides the ability to be high level and to develop high-level constructs. So it may be that high-level concepts are still not mature enough for a good and widely acceptable high-level language to be created.
But the Java designers' attempts were far from any approach like this. Java is as it is: designed to be very similar to C++, designed to recall the low-level things from C and C++, designed to do everything just one way, designed to provide the user with a flail-simple way of coding for encoding complex things. All the concepts that could be called high level, known even at the time when the idea for this language came up, were ignored. Of course, Java is rid of manual memory management, low-level memory access, and treating every value as an integer. But this was just cleaning up some of the most dangerous features. There are still lots of low-level features in Java, and they don't even have high-level replacements, as some of them do in C++.
Is Java still a business success? Then its "lowlevelness" was part of that success. Sad, but true.