Sugar Tax

I have been searching for a good “shorthand” for “don’t pay for what you don’t use” for some time. This term came in handy, especially since it carries so many meanings in the world.

Sugar Tax album cover
The “Sugar Tax” is explained on the OMD’s home website: The title Sugar Tax refers to everything sweet having a price, particularly in relationships. The actual track Sugar Tax, ironically, doesn’t appear on the album due to it being unfinished prior to the release of the album. (Well, yes, it has been released later, on a “Pandora’s Box” single – you can also listen to it on Youtube.)

I named my first “book” (let’s call it that) about C++ “C++ without cholesterol”, referring to something light, free of heavy burdens. Today, however, armed with a bit better knowledge (for example, that the products advertised as “cholesterol free” were margarines, that is, hardened vegetable fat – so instead of fattening cholesterol we get carcinogenic hardening by-products), I think that focusing on sugar when talking about fattening, penalties, burdens and so on is much more appropriate.

Sugar Tax is an interesting topic in general – some time ago, when searching for this phrase on YouTube, I found a clip from some American TV show (it might’ve been this one), where one of the politicians mentioned that a Sugar Tax was being considered in the US. It would have been a “giant leap for mankind”, at least the American part of it, in the fight against obesity, but unfortunately it was eventually dropped. You can guess why. Or, if you can’t, just watch “Food, Inc.”.

Anyway, let’s end this digression. This term has an interesting meaning in the world of programming, too.

The sugar (or, say, a spoon of honey)

Bjarne Stroustrup knows very well what a Sugar Tax is. This knowledge led to the creation of the C++ language. The reality at the time when Stroustrup was writing his Ph.D. thesis was that you had a choice among various programming languages, which usually fell into the following groups:

  • Low-level languages: usually assembler, or something with horrible syntax not far removed from assembly, operating at the machine level (Fortran, Algol, BCPL, later C)
  • Functional languages: really logical and… not matching the thinking habits of the majority of programmers (to this day): Lisp, ML, maybe others – I don’t know which of them existed then
  • High-level imperative languages (Simula, Smalltalk, Eiffel, Ada): they usually provided a very useful tool, but required the programmer to pay with lots of patience, and sometimes money

The problem at that time was that low-level languages needed relatively simple compilers and mapped easily to assembly, so they were usually easily available – but to achieve strictly logical structures you had to write a lot, and preferably comment a lot, if you didn’t want to lose the real meaning of your code right after writing it.

This problem was partially solved by high-level languages. They provided a developer with various high-level facilities and, well… what is today called “syntactic sugar”. However, for this “syntactic sugar” you usually had to pay an “implementation tax”. Compilers of these languages were usually very slow (remember that computers were really slow at the time); moreover, many of them ran under some kind of virtual machine (Smalltalk, Simula). Not only did it take a long time to compile anything, but there was also usually a big runtime penalty.

How, then, did it happen that anyone used them? It varied from language to language, but the general rule was always the same: computers are getting better and better, faster, cheaper, so we don’t have to be so strict about performance and size; instead, let’s give the programmer a good and useful tool so that they can finish their task quickly. This rule hasn’t disappeared to this day – moreover, I would even say that it didn’t start yesterday, but much, much earlier. This approach has remained practically unchanged since those times.

For example: the Garbage Collector? GC has been known for over 50 years (it appeared with Lisp around 1960), and the last major modernizations of GC algorithms and implementations happened maybe 20 years ago. Speaking about GC as something that “a modern programming language must have” is very, very funny.

The same goes for dynamism. Does anyone think that dynamism in languages (like, for example, Python or Ruby) is something invented in recent years? Dynamism, including self-recompilation at runtime, was present in Lisp from its earliest implementations in the early 1960s. It was later used in many other languages (notably Smalltalk). Dynamism isn’t modern at all – quite the opposite, it’s very old-school. Remember that the most primitive way to implement high-level statements on a machine is to interpret them at runtime. It is a much more complicated thing to translate these high-level statements into instructions of the execution machine. And this is exactly what compilers do.

You might think, then, that JIT compilation should be the next level of modernization. Well, JIT compilation goes back to very old solutions in Lisp, and it was later implemented in various dynamic languages (notably Self), even though today Java and C# are the most widely known for JIT compilation (JIT compilation is also possible for C++ – see LLVM).

If we want to speak about something “modern” – that is, developed and implemented recently, and of course widely accepted – it is first of all static type checking, static analysis, early (pre-runtime) checking. Scripting languages aside, this is what Java, C# and C++ all provide. That’s why Java has been recognized as a good replacement for Smalltalk, although the main difference between these two languages, apart from the syntax, is that Java uses static types.

My goal when writing my book was to show that C++ is exceptional in that it falls into neither of these two groups – high-level languages and low-level languages – and, in particular, that it doesn’t combine their disadvantages. Combining only their advantages seemed impossible, and it was partially achieved a different way: by putting performance first, as low-level languages do, while adding some advantages found so far only in those “high-level languages”.

The principle “don’t pay for what you don’t use” can easily be shorthanded, for C++, to saying that this language has a minimal “sugar tax”. Although some may argue that it also doesn’t have much of the “sugar”. This way it practically becomes a new representative of the “low-level languages”, though with a great deal of knowledge to assimilate in order to become productive.

The tax (or, say, a spoon of tar)

It’s not true that all high-level languages are so slow. Many optimizations have been developed, usually based on JIT compilation, that can increase performance significantly. However, that doesn’t mean the use of languages like Java or Python comes without penalty.

Even though these high-level solutions (not only languages, as it’s hard to treat things like QML as programming languages) are researched for the best achievable performance, there usually isn’t much you can do. Java is a very good example: you can try to emulate a value type with a class of immutable objects, but that doesn’t change the fact that every object can potentially become a mutex and therefore carries members inherited from java.lang.Object, like the “wait” method. It means that every object occupies more memory than its functionality justifies; additionally, this language works with GC, which – in general – requires a buffer of unused memory for objects that are temporarily garbage not yet collected.

The name “sugar” for this thing has yet another meaning. For example, I am one of the few lucky people who physically cannot get fat. I can eat enormous amounts of sweet stuff and it leaves no fat in my body. I don’t know what happens to all the sugar I eat, but since it can’t be “used” by my organism, I assume it just dumps it. The result is that when I buy any food, it’s often sweet, because food companies put lots of sugar into everything – yet I make no use of this sugar, because there is too much of it for my organism to use (if my organism didn’t dump it, it would turn it into fat tissue). This is a good example of paying for what I don’t use.

But sugar is sweet, sugar makes food a pleasure to eat, and this factor increases the number of people who like that food – despite the fact that most of them get fat because of it. It’s the same with software: programmers like to use it, and you can find more programmers willing to use such a language – however, the software that comes out of their hands (as one of my colleagues recently said about Firefox) has lots of small bits of fat dispersed throughout its whole body.

The penalty of laziness

Ever heard that Java is a high-level language? Which fool told you that?

You want to know what a real high-level language is? Look at languages like Eiffel, Ada or Simula. To some extent, when using some specific features, C++ is also a high-level language (especially with the features of the new standard, otherwise known as C++0x), but compared to Eiffel or Ada, C++ is only a high-level wannabe.

Languages like Java or C#, or even Python, are not high-level languages at all. Ok, to some extent Python may be considered a high-level language, but only when putting certain specific features first – features that are practically unused. These languages are really low-level languages, because they just define a machine with very strict rules and provide a language that allows you to operate within the rules of that machine. In other words, these languages comprise just a simple mapping to an execution machine. Tcl is also a very low-level language, although it doesn’t even pretend to be a high-level one.

What do high-level languages have? High-level statements – that is, statements that map to logical terms and sit far above the machine definitions. Does Java have at least one such construct? Well, the Java language has only one thing that does not map directly to the definition of the execution machine: gluing strings with operator + in one instruction is translated into the use of StringBuilder objects. Ok, maybe also generics, which exist only in the source code (they are all erased to the Object class), and the nested and in-place-derived classes, which need some tricks to accomplish (they are later extensions to Java, and that’s the reason). All other things are just a direct translation to the machine. Threads, too, are used in a strictly technical way. High-level threads should be defined directly in the language, or there should be some high-level construct that matches your task, not technical details to accomplish it. Java has some of these (futures and promises), but the limitations of its interface-definition abilities ensure that the same thing can be done in C with a similar result.

So, what is the “sugar”, that is, the thing that the majority of programmers love in languages most?

The creators of well-featured high-level languages believed that what people want from a programming language is syntactic sugar that supports expressing logical statements. The popularity of Java, never overtaken by C#, and the range of language features in practical use in Python, prove that support for high-level logical constructs is absolutely not what people want.

The popularity of Java, in which specifying an action to perform may only be done with an in-place-derived class:

  x.addListener( new SomethingListener() { void onSomething() { /* YOUR ACTION */ } } );

while in C# it is:

  x.addListener( new SomethingListener( /* YOUR ACTION */ ) );

and in C++0x it is:

  x.addListener( [](){ /* YOUR ACTION */ } );

and in which the ‘delegates’ (as in C#) proposed by Microsoft were blocked until the eventual court battle, still doesn’t tell the whole truth. Of course, Java or C programmers are not only Koreans, eager to prove that a Korean runner with tied legs will beat a randomly running European (the funniest thing is that many Koreans achieve this!). After all, Java is very restrictive about global variables – you cannot just make a normal global variable in Java (although smart programmers have already overcome this limitation by using the Singleton pattern, lauded as an advanced level of programming). There are also programmers who like it for other reasons.

This is the truth – Java + design patterns is exactly what people want. First, they get a language that doesn’t have “pointers” in the C++ sense (that is, a higher level of addressing) – pointers in this language are still used, but their use is mutually exclusive with variables. Second, they don’t need to worry about object ownership (actually, at best they just haven’t yet been convinced that they should worry, but no matter). And finally, they get a language in which everything should be done in just one way – even the program’s design may only be object-oriented.

So, did the features of C# work to the users’ disadvantage? It depends on which users. For those programming in the domain where C# is typically used (and competes with Java), they surely did. And it even started with Microsoft’s delegates, which were added to J++ – an important predecessor of C#, let’s say. This just made the language more complicated, so there was more to learn in order to use it. It means that Sun did a really good job by not allowing Microsoft to add delegates to Java, and by preventing any other new features from being added as well.

Would lambdas in Java hurt it in a similar way? Actually, they won’t – but that’s only good luck. It’s because they won’t influence the existing libraries. In the existing libraries (and in Java there are lots of them) you’ll still have to add callback actions using in-place-derived classes (or even not in-place ones). There’s just no way to make any use of this feature in an existing library that uses the standard Java way, that is, providing an object with an overridden method. If you are using an existing library, you still have to use the old way. You can create a new library and require that callbacks be passed as lambdas – but then no one will use it. The problem is that there is no possible translation nor interoperation with the existing solution, that is, the in-place-derived class. You can also allow both ways (by overloading), so your library will maybe be used, but 90% of users will still use the old way of passing callbacks. Effectively, adding lambdas to Java will be exactly as reasonable as adding list comprehensions to Python was – just a super-duper language feature that even a lame-legged dog won’t use (*a Polish saying, although a bit abused :)).

Adding lambdas to Java is not the same as adding lambdas to C++. In C++, for example, the idea of callbacks (first used by the STL) is usually accomplished with function objects – that is, just anything on which operator () can be called. Lambdas use exactly the same mechanism, so you can use lambdas in C++ together with a library written 10 years ago that just relies on objects with operator () (maybe via boost::function or std::tr1::function). In Java it won’t work this way, because the only way in Java to pass some “procedure to execute later” is to pass an object that has a method with a predefined name (each library uses its own various predefined names, and lambdas would just introduce yet another one of their own). Even though a lambda would be an object of some class, with its execution exposed via some method name, a function that takes a callback object always needs an object with a method of one specific name (moreover, of one specific class – and every library defines its own).
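A minimal sketch of this interoperability (all names here are made up for illustration): a template “library” function written with no knowledge of lambdas accepts an old-style functor and a C++11 lambda equally well, because both are simply things on which operator () can be called.

```cpp
#include <vector>

// Hypothetical library function written the pre-C++11 way: it accepts
// anything callable with an int -- a function pointer, a hand-written
// functor, or (today) a lambda.
template <class Callback>
int sum_if(const std::vector<int>& v, Callback keep) {
    int total = 0;
    for (int x : v)
        if (keep(x))
            total += x;
    return total;
}

// A pre-C++11 "function object": a class with operator ().
struct IsEven {
    bool operator()(int x) const { return x % 2 == 0; }
};

int demo() {
    std::vector<int> v{1, 2, 3, 4, 5, 6};
    int a = sum_if(v, IsEven{});                          // old-style functor
    int b = sum_if(v, [](int x) { return x % 2 == 0; });  // C++11 lambda
    return a == b ? a : -1;  // both paths sum the even numbers
}
```

The same property makes boost::function / std::function able to store either form behind one type, since both expose the one thing that matters: a callable operator ().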

Contrary to appearances, Java has lots of traits in common with the C language. I have already mentioned elsewhere that, for example, both these languages have a “string” type that can be both empty and null (and some libraries, like Qt, sometimes repeat this stupid design). Similarly, the only way to pass arguments to a function in both these languages is by value. Both are also quite easy to learn, although not equally easy to use. The C language is even a bit superior to Java: it allows for type aliases using typedef.

Sugar VAT

For some uses this is a slightly better term: as mentioned above, when I buy sweet food containing sugar that isn’t absorbed by my organism, I practically pay for what I don’t use. VAT is something a bit different – it’s a tax paid by the “end user”. So you can guess why the use of such languages is so easily tolerated: it’s because the programmers aren’t the users of this software. In that case it would merely be an income tax. But when the product is for some non-programmer end user, the end user pays this tax, so it is a VAT. Of course, today Java has a much smaller sugar tax than it used to, so it became possible to create tools for programmers in this language (Eclipse, NetBeans, IntelliJ IDEA). But this isn’t only about Java – it’s also about languages used for web pages, like PHP.

Very specific kinds of software are produced with a high Sugar VAT. And it’s not an invention of the present day, either. Many products for this kind of market have been made in Smalltalk, and Smalltalk has quite a big Sugar Tax. The effect is that when this Sugar Tax is paid as VAT in a particular situation, software producers are very keen to use such languages, because it’s not them who will pay the tax.

Of course, an important factor here is that software produced with these languages has lower production costs. So, for systems that have lots of specific cases to manage and rely on well-known and well-defined middleware layers (in particular, systems that need to provide an interface for operating on data kept in a database), the main factor in selecting a language definitely isn’t performance. That’s because the most performance-critical thing in the whole system isn’t the interface, but the database engine (and, in web applications, the network capacity). It practically means that the interface can be programmed not only in Java, but also in Tk, Python, Perl… well, no. Not Perl. Too much security risk.

Unfortunately, this pays off only for users that have lots of money to waste. Many people, though, prefer to know what they are paying for, and surely prefer to pay only for things that bring them value.

The content of sugar in sugar

JIT compilers can wipe out many performance penalties, as their main purpose is to shorten the path through the execution points, achieving the same final result with fewer instructions. But there is not much they can do about memory consumption – practically nothing, in fact. Well, let’s say that at best they are able to rearrange memory access requests to enhance locality, so that the real memory access penalties can be decreased. But if an application uses an enormous amount of memory, better memory localization won’t help much anyway.

The claim that GC may provide better performance than manual memory management, which is the reality today in C and C++, is largely overstated. It doesn’t really matter which performance penalties of manual memory management are avoided by GC. Yes, GC will allocate memory faster than the standard C++ allocator (because it takes less time to find a suitable block). Yes, GC can improve locality, because it is allowed to move an object to a different place in memory, and this way it can even decrease memory fragmentation. But does the speedup in allocation compensate for the performance penalty of running the mark-and-sweep cycle? What would a comparison show between the memory lost to fragmentation and the memory overhead of not-yet-recycled objects? I have never seen any measurements of these things, while the hails for GC and its obvious outperforming of manual memory management are heard very often.

And that’s still not all, because the above is only the theory – I haven’t yet started on the practice. And the practice with GC-based languages is that they definitely make more use of dynamically allocated memory than C++ does (unlike C – that language uses more dynamic allocation than C++, which is also one of the reasons why its supposedly better performance than C++ is purely mythical). C++ makes wide use of stack allocation, which is really fast – to allocate memory on the stack you need just one simple assembly instruction. Allocating dynamic memory carries a big penalty in any case – you need to find an appropriate block of free memory, or carve out a new one, and register this block in the memory allocation table. It’s orders of magnitude slower. Proponents of GC counter that stack allocation leads to worse memory locality (which isn’t quite true, because newer processors get additional cache support for the stack), but this performance penalty is negligible, given that for types of small size the cost of creating objects would dominate the cost of operating on them. And if someone would like to claim that in the best JIT-based solutions the allocation and deallocation of one small object is as fast as using the stack, one thing stops me from believing it: IBM’s Java compiler was hailed for providing a really great optimization that can, in some cases, unwind an object used only inside a method into local variables. Also, there must be a reason why C# has structs and “stackalloc”. And note that Java and C# are considered the best existing JIT-compiled solutions today.
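To make the stack-vs-heap point concrete, here is a toy sketch of mine (the names are made up, and a real comparison would of course need a proper benchmark): both functions compute the same sum, but the first creates each Point in the current stack frame – a mere stack-pointer adjustment – while the second goes through the general-purpose allocator on every iteration.

```cpp
#include <memory>

struct Point { double x, y; };

// Stack allocation: the Point lives in the current frame; "allocating"
// it involves no allocator at all.
double stack_sum(int n) {
    double s = 0;
    for (int i = 0; i < n; ++i) {
        Point p{double(i), double(i)};
        s += p.x + p.y;
    }
    return s;
}

// Heap allocation: every iteration asks the general-purpose allocator
// for a block and then releases it -- far more work per object.
double heap_sum(int n) {
    double s = 0;
    for (int i = 0; i < n; ++i) {
        std::unique_ptr<Point> p(new Point{double(i), double(i)});
        s += p->x + p->y;
    }
    return s;
}
```

Both return the same value; only the cost per iteration differs.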

Additionally, in a language with manual memory management you can change the memory manager for either a specific type or a specific portion of code. This means you can use a specialized allocator, that is, an allocator designed to work with objects of exactly one type (or even just of the same size and base class). Allocating such an object is still faster than with GC, and deallocation is practically as fast. It does partially share one disadvantage of GC: it must keep a pool of unused memory allocated for the process. But as the name says, it is specialized – it is intended for objects whose number stays roughly stable while they are frequently deallocated and allocated again. If the number of objects doesn’t change during the system’s run, this disadvantage is insignificant. Whereas when GC is used for the whole process, non-exclusively, for all objects, the amount of unused memory may be significant. So GC is a kind of sugar – not because you don’t have to worry about releasing memory, but because you don’t have to select the best-matching allocation algorithm for a particular purpose and data type; you always use the universal one.
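A minimal sketch of such a specialized allocator (all names hypothetical, single-threaded, and the pool is deliberately never returned to the system – exactly the disadvantage mentioned above): a free list of fixed-size slots for exactly one type, where allocation pops a slot and deallocation pushes it back – a couple of pointer moves, with no search through a general-purpose heap.

```cpp
#include <cstddef>
#include <vector>

// A toy type with its own class-scoped allocator: a free list of
// recycled slots, each exactly sizeof(Particle) bytes.
class Particle {
public:
    double x = 0, y = 0;

    static void* operator new(std::size_t) {
        if (free_list) {                     // reuse a recycled slot
            void* slot = free_list;
            free_list = *static_cast<void**>(free_list);
            return slot;
        }
        pool.push_back(new char[sizeof(Particle)]);  // grow the pool
        return pool.back();
    }
    static void operator delete(void* p) {
        *static_cast<void**>(p) = free_list; // thread the slot onto the list
        free_list = p;
    }

private:
    static void* free_list;
    static std::vector<char*> pool;          // kept allocated for reuse
};

void* Particle::free_list = nullptr;
std::vector<char*> Particle::pool;
```

Deleting a Particle and allocating a new one hands back the very same slot, which is why such allocators shine for objects that are constantly destroyed and recreated in similar numbers.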

Simplicity of a language is another kind of sugar – one that need not actually contain much sugar. For example, operator overloading is one of the features of C++ (although lots of languages have it – C#, Ada, Eiffel, Smalltalk, Haskell, and probably many others) that is evaluated very badly, as something that decreases the readability and comprehensibility of source code. In Java, for example, when you see “a + b”, you know it may only be adding two numbers or gluing two strings – never some crazy function call for which some crazy programmer added a crazy interface. The consequence is that if you create your own value type, or such a type is provided by some library, the only way to operate on it is something like this:

Ethereal x = a.get( t.number() + y.number() ).update( z.number() + x.number() );

Which can be encoded in C++ as

Ethereal x = a[t+y]->update( z+x );
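For comparison, here is the kind of value type that a reasonable overload serves well (a toy sketch of mine – the Ethereal type above is hypothetical anyway): with overloaded operators, arithmetic on the type reads as arithmetic.

```cpp
// A toy 2-D vector value type -- not from any real library.
struct Vec2 {
    double x, y;
};

Vec2 operator+(Vec2 a, Vec2 b) { return {a.x + b.x, a.y + b.y}; }
Vec2 operator*(double k, Vec2 v) { return {k * v.x, k * v.y}; }

// With overloading, the formula is written the way it is spoken:
Vec2 midpoint(Vec2 a, Vec2 b) { return 0.5 * (a + b); }
// The Java-style spelling would be something like: a.add(b).scale(0.5)
```

The expression 0.5 * (a + b) carries the mathematical meaning directly, which is exactly the readability argument made below.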

Yes, operators are something that may have different meanings depending on what takes part in the instruction (on the types, in particular). In some cases (like standalone operators defined in a namespace) it may even depend on the context (that is, on whether any “using namespace” was declared in the enclosing block). For some people, like me, a reasonable use of operator overloading increases the readability of the code. However, for the majority of programmers it’s most important that the general meaning of an instruction always be the same, regardless of what participates in the expression (that is, that the meaning be contextless). In Java, for example, when you look at the following instruction:

   System.out.println( "haha" );

you know that:

  • System and out may be classes or variables; if variables, then fields or local variables – never anything else
  • println is a method of the class designated by out (out may be that class itself, or a reference keeping an object of that class)
  • this instruction designates calling a normal method on the object designated by ‘out’, or a static method of the class designated by System.out

This last statement is also very important, because there is a big difference between calling a method and, say, calling a constructor: when a constructor is called, the ‘new’ keyword always stands before the call expression.

The same instruction in C++ may not designate exactly the same thing. It’s even simpler with the “System.out.println” part, as these names could never designate a class name – it must always be an object. However, println may be either a method of the object System.out, or a field of it – a field of a class that defines operator().
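A sketch of that ambiguity with made-up types: both calls below are spelled identically, but one resolves to a member function and the other to operator () of a member object.

```cpp
#include <iostream>
#include <string>

// println as an ordinary member function.
struct OutMethod {
    void println(const std::string& s) { std::cout << s << '\n'; }
};

// println as a field whose class defines operator ().
struct Printer {
    int calls = 0;
    void operator()(const std::string& s) { ++calls; std::cout << s << '\n'; }
};
struct OutField {
    Printer println;
};

void demo() {
    OutMethod a;
    OutField b;
    a.println("haha");  // a method call
    b.println("haha");  // operator() on the 'println' field -- same syntax
}
```

The call site gives no hint which of the two is happening; only the type definitions decide.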

Similarly, if you create such an instruction in C language:

x->y->perform( a );

then there are several things you can be sure of:

  • x and y are of pointer-to-structure types (so the -> operator simply dereferences the pointers)
  • the “perform” field in y is a pointer to a function
  • if the function pointer type of “perform” declares that it takes one argument of type ‘int’, and the compiler did not issue a warning about incompatible types, then ‘a’ is definitely of some integer type (int, short, long, char)
  • conversely, if the type of ‘a’ is int, then the pointer is to a function that takes one argument of an integer type (int, short, long, char) or even a pointer type (although the most recent compilers would warn in this case)
  • the ignored return value – assuming the call does not return a pointer to some allocated object that would be leaked this way – has no additional meaning in the program

You can’t be sure of any of that in C++. In C++ the above instruction may have many different variants:

  • x and y may be smart pointers with an overloaded -> operator (which need not do a simple dereference)
  • perform may be either a method of the class that y points to, or a field of y – either a pointer to a function, or an object of some class that overloads the () operator
  • assuming perform is a method, it may be the only such method with one argument, or a method with default arguments starting at least from the second one, or one of several overloaded methods with this name (fortunately, “perform” can’t be defined simultaneously as a field of a class with an operator() taking two arguments and as a method taking one argument, even though overload resolution for this case would still work in theory)
  • assuming the argument of the call to “perform” (whatever perform is) is of type ‘int’, the expression ‘a’ may be of any integer type, or of any class that defines a conversion operator to type ‘int’
  • and conversely, even assuming the type of a is ‘int’, the parameter type of “perform” may be int or any other integer type (also char and bool), or any class that has a non-explicit constructor taking one argument of type int
  • and additionally, this call might return a value of some class type which, being ignored, becomes a temporary object that undergoes immediate destruction – which involves calling its destructor, and the destructor may perform some additional action
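To illustrate the first two bullets with a hypothetical example (types invented for the sketch): a smart pointer whose operator -> is not a plain dereference – it counts accesses before handing back the raw pointer – yet the call site looks exactly like the C version above.

```cpp
// A smart pointer that counts how many times it is dereferenced.
template <class T>
class TracingPtr {
public:
    explicit TracingPtr(T* p) : ptr(p) {}
    T* operator->() { ++accesses; return ptr; }  // not a "simple dereference"
    int access_count() const { return accesses; }
private:
    T* ptr;
    int accesses = 0;
};

struct Worker {
    int done = 0;
    void perform(int n) { done += n; }
};

// Usage: given TracingPtr<Worker> w(...), the call w->perform(3)
// reads like plain pointer use, but goes through the overloaded
// operator -> first.
```

The logical reading stays the same – dereference, then call – which is the point made in the next paragraph.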

From the perspective of the majority of programmers, the cases described above for C++ are reason enough never to use a language that features operator overloading – and, moreover, a language that also features destructors and automatic implicit conversions. C++ has all of these features, and Java has none of them.

But from the perspective of a real professional the matter is much simpler: how do you read this instruction? Of course, you read it as: from the object designated by ‘x’ extract the ‘y’ member, from the object so designated extract the ‘perform’ member, and call that member passing ‘a’ as the argument. Despite all the possible variants of what this statement really means in C++, this explanation is always the same. Of course, crazy programmers may assign various crazy meanings to what the -> or () operators do, that’s true. But for a professional it doesn’t mean anything, because crazy programmers don’t take part in software production – if only because they are very easy to detect, and they are then given the choice to either stop doing crazy things or leave the team and the organization. Anyway, for a professional C++ programmer the number of possible real things done by this instruction doesn’t really matter. What is significant is its logical meaning. For example, if we see the -> operator, it means that some member of the thing designated by x is being dereferenced. It really doesn’t matter if -> is an overloaded operator that maybe does complicated things behind the scenes. It still means the same: dereference. The same with (): it just performs a call. It doesn’t matter whether it’s a function, a method, a pointer to a function, or an object with an overloaded () operator. If () is used, it means “call”, no matter what is being executed.

The straightforward difference is that in the case of Java, the language guarantees that the statement you are reading will always have the same meaning. In any language that features operator overloading, and especially in C++, the language doesn’t guarantee much – it only guarantees that there exists a valid definition for an operator that makes this statement valid, whether built-in or user-defined. In specific cases this may mean that despite having learned the language, you still have to learn again – the rules of a particular library, in this case. Of course, I’m describing the general feeling of the majority of people – publicly available libraries actually make very little use of operator overloading; usually they overload the () operator for function calls and [] for some kind of “indexing”. The only case of really excessive operator overloading I know of is Boost.Spirit.

Does that mean that “simple” languages are languages that aren’t used in software production organizations? Definitely not, of course. But they are definitely used in very specific organizations: either “headless” organizations (like bazaar-style headless programming), or commercial organizations in which the performance of the software doesn’t matter, so they don’t care about employing programmers who can be trusted (not to be crazy programmers). This is one of the reasons why Java and PHP rule in software that generally relies on storage in a database and a presentation layer on the web, and also why Java and C are the most popular languages in open-source software. It’s usually cheaper for an organization to employ people who don’t have to be vetted for trustworthiness, as you can easily let them make software and be sure it won’t run crazy simply by giving them a tool with strict limitations.

And this is the sugar in the case of these languages. And yes, it has to be paid for – by the end user.

But this solution cannot be used in an organization that has to produce software with strict size and performance requirements. Even granting that computers are getting faster and better and have more and more memory – because maybe computers do, but the requirements for features in the software increase, too.

The sweetness of sugar

Although C++ has become one of the most popular languages in, say, non-web software, it has lots of opponents, who seem to be more visible than the people who like it. People who value performance and want full control over the program usually say that in C++ you don’t have full control over every part of the software (although I have always said that whenever such a case happens in C++ and can be opposed by a similar case in C where you do have full control, it’s only a result of people’s indolence). People who value high-level languages say that the sugar in C++ isn’t sweet enough – that the features C++ supports aren’t “logical” enough.

The real reason is that practically no language “supports real logic”. As I have already pointed out in another article, logic is logic – it’s very fuzzy and therefore impossible to define strictly. It follows that there can’t exist a language that better supports “the logic”, or even some specific logic. There can only exist languages containing high-level statements that support many various logical ways of programming. The uniqueness of C++ is that it has lots of abilities for creating various kinds of APIs, which means it’s easier than in any other language to create an API that allows the user to express their logical statements in the terms needed by the module. But of course it probably doesn’t support the logical structure of any other, say, object-oriented language, because it simply isn’t just an “object-oriented language” (it’s a multi-paradigm language). No wonder, then, that people who were using other high-level languages will never get used to using C++. But people interested in doing software engineering should never listen to what they have to say about C++.

The complexity of C++, on the other hand, may be an advantage for one type of programmer (those who desire the ability to describe logical statements in a clear way) and simultaneously a disadvantage for another (those who want the language to have simple rules). You cannot make a language that fits both. Another question, though, is whether both types of programmer are equally suitable for software production.

The matter of taste

Well, some languages are sweet because of sugar, some others because of an artificial sweetener. For me an artificial sweetener was just a piece of, say, chemistry meant to fool my taste – well, it never did. Artificial sweeteners have always been something obfuscated to me; they never tasted even remotely like sugar. When I once made the mistake of buying a “cola light”, I quickly learned the names of acesulfame, aspartame and saccharin, and that I should read the list of ingredients before I buy.

Maybe there are some people who can’t distinguish an artificial sweetener from sugar. In the same way, there are people who cannot distinguish between the features that make a language useful and the features that make a language easier to learn.

Don’t get me wrong – I’m not saying that C++ should be used in every part of software production. I understand that for C++ there can be a lack of appropriate libraries (no one has ever thought of using C++ in a given domain), or that using shared ownership with no defined place of object deletion can increase the speed of making software. I’m not trying to complain that so many programmers are dumb because they prefer Java over C++. In practice, Java is not a competitor for C++, the same as it isn’t a competitor for Python. I would just like to point out that there is a strict connection between the greater popularity of Java over C# and of C over C++. I’m not sure, but it also seems to me that this is connected to the shift from Smalltalk to Java.

Remember that many years ago a programmer had to be a really smart guy. Such a programmer had to learn very illogical-looking rules and solve tough problems. But as the requirements of software increase and a greater number of programmers is needed, the number of really smart programmers cannot be increased as easily as the number of all programmers, including those who cannot understand pointers (Joel Spolsky says that it’s about 3/4 of all IT students). That’s why today there are lots of people who need not be smart guys – they just have to write software the best way they can.

So, I’m not saying that I have something against languages like Java or C#, despite their amount of sugar tax. I just think that it makes more sense to use C++ than C, the same as it makes more sense to use C# than Java (though probably not from the perspective of some kinds of software business). For the same reason, it makes more sense to eat things that are either not sweet at all or at best naturally sweet (fruits) than candies or carbonated beverages stuffed with aspartame. But life can only teach us that there will always be these two different kinds of people: those who care and those who don’t. There may always be an economical explanation for making money from both of them.

The fattened and the jogging

Although I am starting to think that if such a thing happens to the programming world, it means that programming has simply become not challenging enough. What does that mean? Well, it may be something like what a soldier faces when the war has finished. On the other hand, if one is well paid for a small challenge, will they be paid more for a big one?

There is one thing that’s challenging in software development. Lately the speed of processors has been approaching its physical limits, something like the speed of light. They just can’t get much faster anymore. However, they may get cheaper. It means that in the near future you can’t count on a faster processor, but you can count on more cores – even a hundred or more. In such an environment, programming must be different – you no longer define a sequence of instructions; instead, you define several instructions with dependencies between them, something like what the “make” tool needs in a Makefile today. For such an environment, none of today’s most popular languages may fit. Thread libraries are ridiculous when you are encouraged to create 10 threads in your simple program. Even futures and promises can’t help you much if they have to comprise 90% of your instructions.
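A minimal sketch of that make-like style using std::async and futures (the step names are invented): the final step cannot start until both of its inputs are ready, so the dependency, not the textual order, drives execution.

```cpp
#include <cassert>
#include <future>

// Invented steps of a tiny dependency graph, make-style:
//   c depends on a and b; a and b are independent of each other.
int step_a() { return 2; }
int step_b() { return 3; }
int step_c(int a, int b) { return a * b; }

int run_graph() {
    // a and b may run in parallel, possibly on separate cores.
    auto fa = std::async(std::launch::async, step_a);
    auto fb = std::async(std::launch::async, step_b);
    // c runs only once both inputs are ready - the dependency edges.
    return step_c(fa.get(), fb.get());
}
```

Note how, exactly as the paragraph complains, the dependency plumbing already rivals the three one-line steps in size – which is why the existing tools scale poorly to hundreds of such tasks.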

Will there be any economical excuse for such software? Well, this is at least as probable as the energy sources being exhausted, some Armageddon happening, or the world simply collapsing under the burden of permanent crisis. For today the best rule would be: don’t try to look into the future, because there will be no promise.

