My blogging will be sporadic to nonexistent for a while, as my friend Rob Landley and I are concentrating heavily on writing a paper together. The working (and probably final) title is “Why C++ is Not Our Favorite Programming Language”. It begins:
C++ is an overcomplexity generator. It was designed to solve what
turned out to be the wrong problems; as a result, it lives in an
unhappy valley between two utility peaks in language-design space,
with neither the austere elegance of C nor the expressiveness and
capability of modern interpreted languages. The layers, patches, and
added features designed to lift it out of that valley have failed to
do so, resulting in a language that is bloated, obfuscated, unwieldy,
rigid, and brittle. Programs written in C++ tend to inherit all
these qualities.

In the remainder of this paper we will develop this charge into
a detailed critique of C++ and the style it encourages. While we
do not intend to insult the designers of C++, we will not make
excuses for them either. They repeatedly made design choices that
were well-intentioned, understandable in context, and wrong. We
think it is long past time for the rest of us to stop suffering
for those mistakes.
Yes, we are attempting to harpoon the Great White Whale of modern programming languages. I’m announcing this here to give my commenters the opportunity to contribute. If you know of a particularly good critical analysis of C++, or technically detailed horror story around it, please cite. Superb apologetics for the language would also be interesting.
The paper is developing primarily from a software-engineering perspective rather than out of formal language theory. I’m particularly looking for empirical studies on the importance of manual memory management as a defect attractor (I have the Prechelt paper from the year 2000). I’m also interested in any empirical studies comparing the productivity impact of nominative vs. structural vs. duck typing.
After about 3 days of work our draft is over 600 lines of clean narrative text in asciidoc. It’s going well.
The best apologetic has to be that it can be fast at a higher level of abstraction than almost any competition it had in the past five-through-fifteen years. The abstractions may be funky or may be poor, but a lot of that is based on making performant binaries. (Most of the rest of course being C-compatibility issues, which I agree is dubious, as many other languages have managed to bind to C without actually trying to be C.)
(I am aware of the various shortcomings of the shootout, it’s just an example.)
Note my klunky phrase “five-through-fifteen”. I say that because in the past few years, a lot of stiff competition has emerged and is continuing to emerge in that space. Java is leading the charge, but some surprising other entries are starting to pop up in that area. (I’m particularly counting things that people “actually” use; sorry, O’Caml!)
I sort of think it falls in the class of “inevitable failure that we have to pass through to learn the correct path”.
Mind you, I feel about the same way as you do about the language; I’m just inclined to cut the designers some slack, because I don’t see how to get from 1980 to here without something like C++ teaching us what we really want. That’s not to say the time to toss it isn’t here. This really reinforces that point: we’ve learned what we can, one of the things we’ve learned is that there are better ways to do that stuff, and it’s time to stop torturing the poor language.
There’s already a rant by that title: http://www.demiurgestudios.com/blog/2007/10/why-c-is-not-my-favorite-programming.html
If you haven’t already, you ought to read up on the D programming language and/or speak with its designers. Fixing C++ is pretty much the raison d’etre of the language, so you’re likely to find some useful insight.
>There’s already a rant by that title:
We hope to achieve something rather above the level of a rant. Actually, I think we already have.
Moise understands that C++ sucks, but he’s enumerating symptoms rather than dissecting causes. That sort of venting is not completely useless — sometimes we can get an empirical datum from it we didn’t have before – but we are trying to look deeper, into the roots of the bad design decisions to show how they interact and what the consequences are at large scales.
>If you haven’t already, you ought to read up on the D programming language
I have, a little. It seems to have taken off in a deeply idiosyncratic direction. Does it have any actual users?
I hope you will cross-reference your complaints about C++ to e.g. bjarne’s various defenses of probably the same issues.
There’s a bit of wisdom floating around in Lisp circles — I first heard it from Paul Graham but I don’t know if it originated from him — that the existence of design patterns is a symptom of brokenness. The idea here is that wherever well-written code exhibits anything recognizable as a pattern, there’s a missing language abstraction that would allow that pattern to be reified into a function call or macro expansion. Dedicated Lisp evangelists will then go on to argue that CL’s macro facilities can cure all such ills. It sounds like what you’re exploring is closely related to this: what is it about C++ that causes its design patterns to emerge, and what are the consequences to maintainability?
However, the design patterns catalogued by the Gang of Four are mostly not the ones that make C++ unpleasant. The real offenders are the sort of thing alluded to by Greenspun’s Tenth Rule. Unicode handling, reference counting (if not outright use of Boehm or similar), serialization, and multiple-precision arithmetic are some common examples. Most 21st-century languages have these things built in. Poorly-written C++ code deals with these things ad hoc, oozes with memory leaks, and chokes on multi-byte characters. Well-written C++ code deals with them systematically, but imposes a great deal of ongoing mental overhead in following the system, as well as a frightful initial learning curve. An inveterate C++ programmer who looks at C++ XPCOM code for the first time is sure to feel like a deer in the headlights, and it’s no fun even once you’ve gotten accustomed to it.
> I have, a little. It seems to have taken off in a deeply idiosyncratic direction. Does it have any actual users?
I couldn’t say. I’ve only ever read some of the design papers; never actually used it for anything. It seems cleaner than and preferable to other members of its language family like C++ and ObjC, although a couple features like lazy evaluation seem out of place and bolted-on. Anyway, I don’t expect to ever use it: if I’m choosing a high-level language and my choice isn’t forced upon me by existing code, I’m almost certain to choose either Haskell or Python. Making language design improvements while using C++ as a starting point isn’t something that particularly interests me.
There’s really not a C++ “Way”, in the sense that there’s a C Way, a Lisp Way, a Ruby Way…
To me, the very best programming languages facilitate “flow” (in the psychological sense coined by Csíkszentmihályi) — after using them enough to become an expert, mapping the problem space onto the code space becomes so natural as to be almost unconscious much of the time. It doesn’t feel like I’m writing code. It feels like I’m *solving the problem*.
I’ve had that kind of flow experience with C, Lisp, Ruby, and a few others (I think I got there with FORTH, but I haven’t used it for years. I also think it’s possible that I’ll get there with Erlang, but it’s too soon to tell).
I’ve never gotten into that kind of mental state with C++, despite having used it fairly regularly since the early 90s. I can write C++ just fine, but it seems like the warts of the language are always poking out and getting in the way. It’s like running with a rock in your shoe.
Java is much better than C++ in that respect, but it still doesn’t match C, Lisp, or Ruby.
Maybe part of it is syntax overhead? C, Lisp, and Ruby are all fairly minimal (it’d be hard to get less syntax than Lisp, although I guess you could argue that FORTH wins there :-)). Certainly these three languages are quite different in other respects. What else do they have in common that C++ doesn’t have?
I’m sorry if this seems fuzzy (in fact, I know it’s fuzzy), but I don’t think I’m the only one who’s experienced this sort of thing.
When I first encountered C++, I remember thinking the cout syntax was painfully stupid. But no, everyone uses C++. It must be good. Sure, it replaces any non-trivial printf() call with an elaborate array of incoherent things, some of which are state changes and some of which are data, but… you know, this is really a hot new language. Surely that’s an improvement.
No. No, it isn’t.
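To make the complaint concrete, here is a minimal sketch of the kind of call in question. The manipulators are the “state changes”, x is the data, and setprecision() and the fixed flag quietly persist on the stream after the statement:

    #include <cstdio>
    #include <iomanip>
    #include <iostream>

    int main() {
        double x = 3.14159;

        // One format string carries the width, the precision, and the newline:
        std::printf("%8.3f\n", x);

        // The stream version interleaves the data with manipulators; worse,
        // setprecision() and the fixed flag persist as stream state afterward.
        std::cout << std::setw(8) << std::fixed << std::setprecision(3)
                  << x << std::endl;
        return 0;
    }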
My experience of C++ has stayed pretty much unchanged. The de-magicking of (void *) is still dumb. Yes, I believe I do understand the reasons for it. No, I do not think they are sufficient. The entire point of writing in a C-like language is that you are doing so because you genuinely think you know what you are doing. If you don’t think that, you should not be writing in any language which has pointers in the first place. The argument that C++ can be used as a “better C” falls down on the simple fact that it is a *worse* C.
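For reference, a minimal example of the (void *) de-magicking in question: idiomatic C that C++ rejects until you put the cast back in by hand.

    #include <cstdlib>

    int main() {
        // In C, "int *p = malloc(sizeof *p);" is legal, because void *
        // converts implicitly to any object pointer type. C++ removed that
        // conversion, so the same line is a compile error and the cast must
        // be spelled out:
        int *p = static_cast<int *>(std::malloc(sizeof *p));
        std::free(p);
        return 0;
    }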
I’m sure some will see me as a reactionary fuddy-duddy or something like that. Yeah, fair enough. Never mind that I love both Lua and Ruby — not exactly old-fashioned low-level languages. Do I object to operator overloading *in and of itself*? No. I have written Ruby programs which changed the behavior of the Fixnum class. What I object to is operator overloading in a language which is closely built around the assumption that you know what a native machine word looks like and you have plans about how to arrange them in memory to suit your own nefarious schemes. You can have either of these, but attempts to combine them have transcended mere disaster. I am occasionally confronted with Open Source software which is written in C++; consistently, it turns out to be buggy and unmaintained. Why? Probably because C++ is, in practice, unmaintainable.
The C++ spec is too large for anyone to understand. The notion of a “reasonable subset” evaporates when you realize that no two people agree on what that reasonable subset would be.
Actually, though, that’s not quite right. We do have widespread agreement on the reasonable, safe, subset. It’s called C.
I’m highly confident the two of you will avoid simply blaming the tool, and look forward to reading this paper.
ESR, don’t forget Chapter 10 of the Unix Haters Handbook. It provides some very good criticisms: no design, poor abstraction, pseudo-OO, etc.
>I hope you will cross-reference your complaints about C++ to e.g. bjarne’s various defenses of probably the same issues.
Stroustrup’s apologia(s), are they webbed anywhere?
>I’ve had that kind of flow experience with C, Lisp, Ruby, and a few others […] I’m sorry if this seems fuzzy (in fact, I know it’s fuzzy), but I don’t think I’m the only one who’s experienced this sort of thing.
Certainly not. I know exactly what you’re talking about; my flow languages are Lisp, C, and Python. I’m fairly sure Ruby would become a flow language for me if I actually learned it, I just haven’t yet because it doesn’t seem to me like a bigger win than Python.
And I find your “it seems like the warts of the [C++] language are always poking out and getting in the way. It’s like running with a rock in your shoe.” both funny and perfectly descriptive.
I don’t think syntax is a major driver, though it can contribute. Some languages simply make you fight them more than others; I think this has less to do with syntax than the fundamental abstractions in them.
Yossi Kreinin’s C++ Frequently Questioned Answers:
“There is no reason to use C++ for new projects. However, there are existing projects in C++ which might be worth working on. Weighting the positive aspects of such a project against the sad fact that C++ is involved is a matter of personal judgment.”
>Yossi Kreinin’s C++ Frequently Questioned Answers:
On our list of documents to be thoroughly mined. We’re already using one of his horrible examples.
>ESR, don’t forget Chapter 10 of the Unix Haters Handbook.
You know, I’d forgotten how good that chapter was (UHH is a very uneven book).
Here’s something almost nobody knows: I was the first technical reviewer MIT Press picked to vet the UHH manuscript, way back when. I worked hard at trying to persuade the authors to tone down the spleen level in favor of making a stronger technical case, but didn’t have much success. They wanted to rant, and by Ghod they were gonna rant, and no mere reviewer was gonna stop ’em.
There’s a study, published in 1999, called “Comparing Observed Bug and Productivity Rates for Java and C++”, which might be of interest, though it only presents empirical measurements of bug rates in C++ and Java, without a more fine-grained breakdown of what caused the bugs in terms of language features.
>“There is no reason to use C++ for new projects. However, there are existing projects in C++ which might be worth working on. Weighting the positive aspects of such a project against the sad fact that C++ is involved is a matter of personal judgment.”
Hearing it from C++ advocates makes it even better. I know my critique is anecdotal, but I’d like to bury C++’s use as a teaching language once and for all. Some years ago, I decided I would learn C++ as my first “real” language, because that’s what “serious” programmers used. Big mistake, as esr points out in his Hacker HOWTO. I purchased one of those “Teach yourself C++ in 24 Hours” books. I got lost on “Hello, world!” (“What is this ‘>>’ thing, and what’s with the colons?”) I managed to get the example programs running, but I couldn’t model what was going on in my head, something that Eric and others claim is important in order to be an effective programmer. I wonder how many wizards C++ has ruined.
I do not count myself as a programmer; Eric vaguely disputes this based on what I do in Excel.
My three exposures to programming languages/environments were QBASIC, C++ and a feeble thrashing about with Perl.
So, to answer your question about how many wizards C++ has ruined, I can argue that I never bothered to go further in programming due to feeling utterly lost with C++ and Perl.
I agree with this article. Unfortunately, I still quite often find myself required to make use of C++ against my will. This is because I program audio and haptic systems, and sometimes embedded systems. These are real time systems. This means they have special requirements from a language: predictable timing, and deterministic memory management. And fast. (Fast isn’t implied by real time, but it is often a co-requisite.) It seems that this always results in the choice between C++ and C. These days I have pretty much always started using C. However, quite often the libraries I need to use are written in C++, so I have no choice but to comply. In any case, I find it terribly unfortunate, but it seems that C++ is the only object-oriented language with “modern” features that is real time deterministic.
Pretty much every language I know about that is more interesting and more fun to program in is a dynamic language; they run in VMs or interpreters, or have a JIT compiler. This means they usually make use of garbage collection and automatically allocate memory on the heap for you. This doesn’t cut it for embedded code, DSP routines, etc. I really wish more effort was being put into modern languages for real-time systems, because I’m tired of having to use C++. Sigh.
There are some cool audio languages like FAUST which compile down to C, and a few other high level languages which also compile to native code, but I have no idea of their memory models and don’t trust them in these environments. Still, I’d like to take some time to benchmark them properly for these purposes at some point.
esr, for once (? more than that actually, but I digress), we agree.
10 years ago I had this for a .signature:
C++ is like jamming a helicopter inside a Miata and expecting some
sort of improvement. — Drew Olbrich
With luck, you’ll take down perl next.
I prefer this version of the same sentiment: “C++: an octopus made by nailing extra legs to a dog”
C++ is a complex beast but it beats using plain C anytime you want to produce native code.
Daniel: AFAICT, the idea that design patterns are signs of weakness in programming languages was first articulated explicitly by Mark Dominus, but Peter Norvig got most of the way there in his presentation Design Patterns in Dynamic Languages. You have to be slightly careful with quantifiers: Dominus’ claim is that (for all languages X)[(exists pattern Y in X) => (exists weakness in X)], whereas Ralph Johnson (whose response appears to have vanished from the Web) misunderstood him and addressed the claim that [(exists language X)(exists pattern Y in X)] => (for all languages Z)[(exists weakness in Z)].
The thing that really convinced me that C++ was a language whose time had passed was reading Scott Meyers’ books Effective C++, More Effective C++ and Effective STL, which could be seen as a long list of gotchas and workarounds that must be borne in mind when writing C++. Any language that required you to deal with this nonsense on a daily basis was clearly wasting its users’ time.
There was a good thread on Reddit a while ago, containing many examples of large programs written in dynamic languages – possibly of interest to you as a counter to the claim that C++ becomes necessary above a certain level of complexity.
To be honest, Eric, I suspect C++ is on its way out anyway: the corporate drones have moved on to Java or C#, and the open-source movement mostly uses C or the abovementioned interpreted languages. The last holdouts are the scientific community (who are notoriously conservative and uninterested in programming for its own sake) and the games industry (who are in the unusual position that all of their code needs to run fast). But even the games industry is starting to crack under the weight of the complexity of modern games: Tim Sweeney of Epic Games gave a POPL talk a few years back about how they’re starting to look towards the ML family. Once they go, the glamour will be gone, and there will be many fewer people like David who think that “real programmers use C++”.
My guess is that C++ is going to find its market share eaten up from one end by the scripting languages, and from the other end by the modern compiled functional languages. But anything you can do to hasten its demise is fine by me :-)
Wake up, it’s 2008. std::tr1::shared_ptr<> exists, and is readily available.
You should at least check out the Stepanov Papers. He wrote the STL, and considers C++ the only language where it is possible for him to implement his ideas. He says he still has a million complaints (which might interest you too), but it at least has something going for it. Knuth writes in C, but with a powerful custom macro system that might do what C++ templates do.
http://www.stepanovpapers.com
Back in March “Software Engineering Radio” interviewed Kevlin Henney, a C++ advocate. I’m no fan of C++ but I thought he defended the language quite well.
The podcast is still available here:
http://www.se-radio.net/podcast/2008-03/episode-91-kevlin-henney-c
>> There is no reason to use C++ for new projects.
So, what would you use instead for developing standalone application, where speed is critical?
Eric, if you want to start a language war, best bring it to Reddit where the Lisp and Haskell fetishism run strong. I’m sure you’ll find plenty of supporters there on this issue (unlike just about every other issue you blog about).
C++ is virtually the only contender in its niche. Its niche happens to be a rather big one: any time you have a complex, extremely high-performance application, C++ should be the first thing you reach for. No other language combines raw speed with high levels of abstraction.
Despite being the only contender in its niche, C++, like Ada, gets slagged off all the time by soi-disant “hackers” who think they know what they’re talking about, but don’t. It’s a poor carpenter who blames his tools, and C++ is a precision tool which evolved the way it did for very justifiable reasons, as a cursory glance over Stroustrup’s or Stepanov’s writings would suggest. Oddly enough, C++’s templates were inspired by Ada generics, but are considerably more powerful. (C++ has the only general term-rewriting type system in mainstream languages, strictly more powerful than Haskell or ML type inference.)
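For readers who have not seen it, here is a minimal sketch of what that term-rewriting claim means in practice: template instantiation can be driven as a compile-time rewrite system, here computing a factorial with a recursive template and a specialization as the base case.

    // The compiler "rewrites" Factorial<4>::value to 24 during instantiation;
    // no factorial code runs at all when the program executes.
    template <unsigned N>
    struct Factorial {
        static const unsigned value = N * Factorial<N - 1>::value;
    };

    template <>
    struct Factorial<0> {   // base case terminates the recursion
        static const unsigned value = 1;
    };

    int main() {
        return Factorial<4>::value == 24 ? 0 : 1;  // evaluated at compile time
    }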
I think that the author is biased against C++. I am eager to see his biased result. Really.
It is a fact that C++ is overly complex. It is complex enough that some of the readers of this article have posted that it is an OOP language when, in truth, it is not. C++ is a genuine multiparadigm language, and as such it has features that make OOP easier; but it is NOT an OOP “language” in the crappy tradition of “everything is an object”. Personally, I have seen the majority of issues come from C++ programmer wannabes with a total lack of perspective or intuition about complex things like templates. Yes, templates, because if you are not in this for templates as _well_, then you are wasting your time and almost any other language will be more fruitful for you.
Does C++ have problems? Yes, it does. We can talk about ABI issues, we can talk about how the standard leaves undefined behaviour in some rather dark corners, implemented differently from vendor to vendor, and so forth. But if all you have nowadays is a braindead Python crowd, then it comes as no surprise that it is the same people who bash at it. C programmers understand why C++ is important. C programmer wannabes don’t actually know C, so their opinion does not matter.
Does C++ suck? Yes it does, to the same degree as any other language trying to do the same things. But you cannot simply be against something without also being for something else. To date, there is no counterpart to C++. And C++ is still evolving.
Have you checked the amount of scientific literature, within computer science and beyond, with solutions implemented in C++?
You can bash at it as much as you like. Remember this though:
C++ is the programming language analogue of the Incredible Hulk. The “madder” the Hulk gets, the stronger he gets.
Now, have fun making “void main()” programs.
Stepanov on C++:
“Unfortunately, at that point C++ was not at all usable. It took me
several years to discover the reasons why I could not use inheritance.
But I got really impressed with the underlying C machine and with the
way Bjarne added function and operator overloading, and with the notions
of copy constructor and destructor. …
And then Bjarne added templates. I was quite astonished with what I
could do with them. I could express very complex relations between
concepts, I could carry a lot of information with types. Aren’t C++
templates just glorified macros? I do not think so. I think that they
are very powerful type generating mechanisms. The ability to nest
templates together with partial specialization allowed me to express
complex relations between types.
Is C++ perfect for generic programming? No. … There
are hundreds of little glitches. But it is much better than anything
else I know of for expressing what I have been trying to express. ”
http://groups.google.hr/group/comp.lang.functional/msg/d7394ad977b3a396
The rest of his posts on that thread are worthy as well.
I always find articles whipping C++ interesting to read, because C++ is my language of choice for new systems. I don’t find that the “warts” get in the way; I do feel like I’m actually solving the problem that I’m writing an application for. I find other statically typed languages can be quite painful in comparison. In C, Java, C# and Delphi I often feel like “this would be so much easier in C++”.
C++ takes a bit (lot?) of getting used to. However, you can often use the good techniques you learn from other languages in C++ to good effect. Modern C++ code is increasingly generic and functional in its approach, and templates provide for “duck typing”. With the new C++0x standard there’s quite a few changes to improve the syntax and support new features, such as lambda functions, “pure” (constexpr) functions, regular expressions and so forth.
The major downside to C++ is that the syntax is quite complex, which makes it hard to parse, so there are very few tools for manipulating C++ source (e.g. refactoring tools). However, even these are becoming more readily available.
>Once they go, the glamour will be gone, and there will be many fewer people like David who think that “real programmers use C++”.
Unix and scripting languages have thankfully disabused me of that notion. :-)
As for performance, I think that Moore’s Law will eventually erode whatever advantage C/C++ has in that department. As Tim Sweeney points out in the aforementioned talk, most of the delays and cost overruns in modern games are caused by memory-allocation bugs. A game developer who used Pygame could run circles around EA (as long as they didn’t screw up and use restrictive DRM. Spore, anyone?)
To see why you shouldn’t use C++, read the book “Exceptional C++” by Herb Sutter, one of C++’s gurus. It inadvertently demonstrates just how hard it is to write correct C++ code.
Or you could try implementing your own container class (like map, vector, etc.) for the STL. Take something trivial like a binary tree with forward and reverse iterators and see if you can implement it in less than 300 lines of hideously complex template code.
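To give a flavor of the challenge, here is a declarations-only sketch, nowhere near a working container: before a single line of tree logic is written, an STL-compatible bidirectional iterator already owes the library this much interface, and all of it has to be repeated for const_iterator and wrapped again for reverse_iterator.

    #include <cstddef>
    #include <iterator>

    template <typename T>
    class tree_iterator {
    public:
        // The five typedefs the STL machinery expects of every iterator:
        typedef std::bidirectional_iterator_tag iterator_category;
        typedef T                               value_type;
        typedef std::ptrdiff_t                  difference_type;
        typedef T*                              pointer;
        typedef T&                              reference;

        reference operator*() const;
        pointer operator->() const;
        tree_iterator& operator++();     // step to the in-order successor
        tree_iterator operator++(int);
        tree_iterator& operator--();     // step to the in-order predecessor
        tree_iterator operator--(int);
        bool operator==(const tree_iterator& other) const;
        bool operator!=(const tree_iterator& other) const;
    };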
To see just how unportable C++ and how poor its compilers are, look at the all the hoops the Boost library has to jump through.
I agree with Jeff and Irregularity. C++ is the only reasonable choice when it comes to writing speed- and memory-efficient code while still having some sort of abstraction.
Uh David, I don’t think Pygame supports OpenGL. It seems to be mainly for 2d games. I do not think anyone will be running circles around EA using Python anytime soon.
I hope the paper will not try to compare C++ with languages that do not compete in the same market. As I understand it, there are only a few languages competing with C++, and they are not as widely used as C++ is. I believe that for a language to become a viable choice for a project, it’s important to consider how big its user base is, and how much standardization effort committees have put into specifying the language. IMO, a one-man language project, while it may implement a few good ideas, can’t really compete with international standards.
>So, what would you use instead for developing standalone application, where speed is critical?
Me? Python, with calls to wrapped C where (and only where) profiling reveals bottlenecks.
Others might prefer a different scripting language. Python has a bit of a functional advantage for this kind of mixed code because its extension/embedding facilities are clean, effective and well-documented…unlike, notably, Perl’s. But Tcl would be competitive here and I have the impression Lua is as well.
>Uh David, I don’t think Pygame supports OpenGL. It seems to be mainly for 2d games. I do not think anyone will be running circles around EA using Python anytime soon.
Pygame actually does support OpenGL, but I agree that games using it might suffer a performance hit, although they claim to generate optimized C code. Besides, I don’t fetishize 3D the way most modern game developers do these days.
I wonder if the problems of C++ might be caused by its lack of an interactive interpreter. Lisp and Python, Eric’s pet languages, both have them, and it seems that the immediate feedback lets programmers model problems in their heads more easily than with compilers.
> > I hope you will cross-reference your complaints about C++ to e.g. bjarne’s various defenses of probably the same issues.
> Stroustrup’s apologia(s), are they webbed anywhere?
Aside from the book “Design And Evolution of C++” you can find his “how it happened” stories at http://www.research.att.com/~bs/C++.html (“C++ Design and History” section). Many of his papers are available at http://www.research.att.com/~bs/papers.html . I would recommend:
* http://www.research.att.com/~bs/MIT-TR-original.pdf
* http://www.research.att.com/~bs/DnE2005.pdf
* http://www.research.att.com/~bs/abstraction-and-machine.pdf
* http://www.ddj.com/cpp/184401555;jsessionid=HL3L3MZ1N0BZMQSNDLPCKHSCJUNN2JVN?pgno=2
* http://www.research.att.com/~bs/new_learning.pdf
From what I can see, PyGame does not support OpenGL, and you need the PyOpenGL bindings in order to get OpenGL support. Now, I do not know how well the two packages interface; however, searching the site does not turn up much in the way of documentation, so my point still stands. If you want to have a truly useful game development product, it is helpful to integrate as much as possible. This is what Delta3D is doing.
>I wonder if the problems of C++ might be caused by its lack of an interactive interpreter.
I don’t think so. I’m not going to try to describe all the fundamental problems we think we see in a blog comment, but here’s a hint: GC > OO.
A critique of C++ should address the flawed ‘multiparadigm approach’ of the language ( http://www.research.att.com/~bs/bs_faq.html#multiparadigm ). In fact, nobody (not even Coplien) has yet clearly described the major ‘paradigms’ C++ supports, let alone their interplay resulting in one clear programming style. As mentioned above, there is no general C++ way to program, no C++ programming style. The language subset and style are often determined by the framework used.
Another point: The number of genuine C++ libraries (not C libraries) is amazingly small for a language of such distribution. What’s the cause of this?
>I don’t think so. I’m not going to try to describe all the fundamental problems we think we see in a blog comment, but here’s a hint: GC > OO.
Hmmm, interactive interpreters seem to be a side effect of automatic memory management. Garbage collection lets programmers spend their time actually designing programs! Okay, Eric, I get it. ;-)
Each language excels in a particular area. Writing an ERP application in C++ would be just as foolish as trying to process the LHC’s terabyte data streams with Python or Lisp. When it comes to efficiently providing memory management abstractions C++ is unbeatable. In C++ an object with non-virtual methods can occupy as little space as a single byte; in Java 12 bytes is the minimum. When handling large data sets this frugality can make the difference between success and failure.
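To make the size claim checkable, here is a minimal sketch; the exact numbers are implementation-dependent, but typical desktop compilers report 1 for the first class and 8 or 16 for the second.

    #include <iostream>

    struct Tiny {
        char c;
        void touch() { c = 0; }          // non-virtual methods cost nothing per object
    };

    struct WithVtable {
        char c;
        virtual void touch() { c = 0; }  // virtual => a hidden vtable pointer per object
    };

    int main() {
        std::cout << sizeof(Tiny) << "\n";        // typically 1
        std::cout << sizeof(WithVtable) << "\n";  // typically 8 or 16 (vptr + padding)
        return 0;
    }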
Processing the complete Linux kernel with the CScout refactoring browser (a program written in C++) required 4GB of RAM. The abstractions needed for implementing CScout would be extremely difficult to express in a simpler language like C. On the other hand, implementing CScout in a language like Ruby, Haskell, or Java would mean I would not be able to afford to buy the required memory.
Sure, C++ comes with many warts. But it is the sharpest tool around, and for some tasks the only one that can cut it.
I guess this says it all: “I made up the term object-oriented…and I can tell you, I did not have C++ in mind.” – Alan Kay
C++ does not teach anything.
When I learned BASIC, I learned program flow. When I learned Pascal, I learned functions and structure. When I learned Ruby, I understood OOP. And so on. The pattern here: when the implementation of one particular programming feature is as clean as, say, a list in Lisp, the feature shows itself for what it really is. In C++ and Java you first have to understand the concepts and then figure out how you can bend the language to use them. But once you understand the issues, why would you then want to use them with C++?
“Life is too long to be good at C++.”
Attributed to Erik Naggum
“aestheticles: n. The little-known source of aesthetic reactions. If your whole body feels like going into a fetal position or otherwise double over from the pain of experiencing something exceptionally ugly and inelegant, such as C++, it’s because your aestheticles got creamed.”
Erik Naggum
“I may be biased, but I tend to find a much lower tendency among female programmers to be dishonest about their skills, and thus do not say they know C++ when they are smart enough to realize that that would be a lie for all but perhaps 5 people on this planet.”
Erik Naggum
> “would be just as foolish as trying to process the LHC’s terabyte data streams with Python or Lisp.”
That’s why they have CINT with ROOT (http://root.cern.ch): it’s a C++ interpreter with reflection/introspection (and other dynamic features) which can later turn into real C++. E.g. you can selectively have certain source code compiled, and the rest interpreted. This helps them cut down iteration times when prototyping or adding new features, while switching back to real compiled C++ when speed is needed.
I work in the video game industry, and one of the biggest pains is that, out of say 1500+ C++ files, I cannot selectively run the ones I’m actively developing under some kind of interpreter (much like CINT/ROOT) while the rest run as real compiled C++. Once I’m done and submit to P4, etc., other people would be using my code as compiled too, but theirs (if they want) interpreted.
What’s the biggest pain? Iteration times. Even with cool tools like Xoreax IncrediBuild, or whatever else is on the planet, you still have the inevitable compile/link/run cycle, which takes 1-2 minutes at best, and you always have to start from the beginning (start the game level, try to get back to the same state, or try to reuse the save games, though they might not work yet). I have a good colleague who taught me this simple trick: always make your gameplay constants actual GLOBAL variables, because then you can change them at run time; instead of recompiling, you can quickly tweak them until you are happy with them (mostly when dealing with gameplay).
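A minimal sketch of that trick, with made-up tuning values: leave the “constants” as mutable globals, and a debugger or in-game console can change them while the game runs, with no rebuild.

    // Hypothetical tuning values, deliberately left as mutable globals so they
    // can be poked at run time instead of forcing a compile/link/run cycle.
    float g_player_speed = 4.5f;
    float g_jump_impulse = 12.0f;

    void update_player(float& x, float dt) {
        // Read the global every frame rather than baking the value into the code.
        x += g_player_speed * dt;
    }

    int main() {
        float x = 0.0f;
        update_player(x, 0.016f);  // one frame at 60Hz
        return 0;
    }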
Of course the video game business uses lots of scripting languages, and almost all of them, if not all, are garbage-collected: partly because, as esr said, GC > OO, but also because you don’t want to put the technicalities of manual allocation/deallocation, buffer overruns, out-of-bounds array indices, etc. in the hands of the creative game designer using that scripting language.
Simple as that.
I wish the game industry would consider CERN’s ROOT/CINT technology and make use of it in its development environments.
http://root.cern.ch
> In C++ an object with non-virtual methods can occupy as little space as a single byte; in Java 12 bytes is the minimum.
And in practice, Sun’s JVM adds quite a bit: “In Sun’s HotSpot JVM, object storage is aligned to the nearest 64-bit boundary. On top of this, every object has a 2-word header in memory. The JVM’s word size is usually the platform’s native pointer size” ( http://teddziuba.com/2008/02/the-road-to-hell-is-64-bits-wi.html ). That adds up quickly. The article I linked to talks about an object with two data fields — one int and one double — taking up 192 bits (including the padding) in memory. And the program using this monster object needed to put lots of these in memory at a time.
> When handling large data sets this frugality can make the difference between success and failure.
Yep.
I keep seeing people directly assume that C cannot be used to do “real” programming work. This hasn’t been my experience. The pain I’ve suffered writing C++ is much greater than the pain inherent in building up the C code to allow me to solve the same problem. This is doubly true for anything expected to target multiple generations of a given platform. I keep running into cases where I’ll download a cool-looking (despite being c++) project, drop it on my modern debian, osx, or cygwin box, follow the readme for compiling, and run into large numbers of “the compiler doesn’t accept this style code any more” errors. In some cases, the projects are only a couple of years old!
Maybe it’s just that I’m not motivated to work with c++, but I’ve actually had less pain working with ancient C (including K&R) source on modern systems than I’ve had with much younger C++ code.
In my view, toolchains that rapidly lose compatibility with existing code are dealbreakers. C has had a few such issues, but C++ seems to repeatedly change in ways that break previously compliant code in difficult-to-fix ways.
C is a perfectly rational choice for some tasks, such as systems software and embedded systems. However, once one deals with higher level abstractions than bits and bytes the allure of the C++ STL containers, iterators, and algorithms can become irresistible. I recently wrote some code to add Wikipedia links to arbitrary web pages. I wrote the bit twiddling code for the memory-mapped Patricia tree data structure in C. However, when the time came to handle HTTP and HTML I chose C++; robustly handling header maps and strings in C would be unnecessary pain.
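As a hedged illustration of the difference (this is not the actual Wikipedia-linking code): an HTTP-style header map is a few self-evident lines with the STL, where C would need a hand-rolled hash table and manual string management.

    #include <iostream>
    #include <map>
    #include <string>

    int main() {
        std::map<std::string, std::string> headers;
        headers["Content-Type"] = "text/html";
        headers["Host"] = "example.org";   // keys and values manage their own memory

        std::map<std::string, std::string>::const_iterator it = headers.find("Host");
        if (it != headers.end())
            std::cout << "Host: " << it->second << "\n";
        return 0;
    }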
I agree that C++ code is brittle in the face of compiler upgrades, and I’ve often encountered problems similar to those that Robert describes. This has to do with programming at the edge of the available technology. We want to use all those new exciting C++ features before they are ironed out and standardized. This suggests that they offer real value, but the downside is the potential for code breakage in the next release of the compiler. Also, C is a relatively small language, and it’s easy to know what’s part of the language and what is a non-portable extension. C++ is so large and complex that the only realistic arbiter of the code’s legality is the (constantly evolving) compiler. In many cases when my code that used to compile was flagged by a newer compiler version, I realized that the code was always incorrect, but earlier versions of the compiler were wrongly accepting it.
Errr, have any of you heard of ObjectCenter? For at least a decade and a half it’s supplied an interpretive environment for C++ (CodeCenter for C). You can load object code as needed for performance, and run what you want interpreted, and of course use a REPL as needed:
http://www.ics.com/products/centerline/objectcenter/
In 1991 I used it to rewrite the engine of a high speed (120 pages/minute) Kodak scanner controller in three weeks. I didn’t have time for formal unit tests so I just tested on the fly in the interpreter as I wrote code. This was such a productive environment that within hours of receiving the lowest level part of the SCSI system (the driver that talks to the host adaptor), the first time I hit go on the scanner everything worked … until I ran out of file descriptors (forgot my close() for the output files :-).
Closest thing to a Lisp Machine for C/C++ I’ve ever found, and it saved me on a couple of short deadline projects at different companies. Don’t know the current state of the product but it’s worth checking out.
As for my opinion of C++, I consider it to be a supremely dangerous language to program in but I think it has its niches (this said by someone who worked for LMI and is a “Scheme Forever!” type :-).
E.g. VLSI CAD and verification is a major use, and that area’s demands for performance and huge datasets don’t make LISP an obvious win. Check out the thoughts of Jiri Soukup in his _Taming C++_ book (which has plenty of generally good and often hard earned advice, like make your class hierarchies DAGs) and at http://www.codefarms.com/
– Harold
While I can feel some of the pain people go through, I disagree that C++ is in a valley between C and scripting languages. I write tools to process large data sets in NLP, and for such purposes there is practically no substitute except for C. Once you process very large corpora, how tightly you can pack and process data starts to count. The common reply is that Moore’s law will outdo C++, but in fact people will just come up with larger data sets. If you buy that brand new cluster of Linux machines with four times the processing power of the previous cluster, people expect you to process four times as much data in the same time, not to rewrite your application in Python. Of course, C is also an option, and is used just as much, but for me personally C++ is more productive thanks to the abstractions provided by the STL. So, I see a lot of C/C++ hooked up with Prolog, Perl, or Python code.
I see potential problems for C++ coming from another direction that is not really mentioned in any of the comments: parallel computing. We all know that we have pretty much hit the ceiling with respect to clock speeds and common optimization techniques (branch prediction, reordering, etc.). To be able to jam more power into CPUs we have gone multicore, and the progress there is not going to end soon (we’ll probably have octacore CPUs in desktops within a few years). Some other widely used languages have started to prepare well for this change, such as Java, which now provides some thread-safe containers and a promising task-based threading API (e.g. see the work done by the Functional Java project). Even if there will be standardized generic containers with thread safety in C++ and good ways to exploit parallelism, I worry that the relative complexity of C++ combined with the complexity of parallelism in stateful languages will make the language too difficult for most programmers to use for most practical purposes.
BTW, I am not an expert in parallel programming, just someone who will have to deal with it in the future as well ;).
Scheme tends to be my poison of choice. The two big Scheme-to-C compilers, Gambit and Chicken, both have really excellent C FFI’s. For embedding there’s TinyScheme (currently used by GIMP, supplanted SIOD which wasn’t really a Scheme).
N.B.: What I said above about Scheme is contingent on having way more CPU and memory resources than what’s required for the application. The best garbage collectors require more than twice as much memory as manual allocation, and C++’s smart pointers cover most of the use cases for a GC. (The rest can be taken up with reference counting and cycle detection.)
Let’s suppose that we want to design and implement the programming language with the following requirements:
1. Full binary compatibility with C, in other words, no need for any special binding between the language and C libraries.
2. No performance overhead outside of the developer control, i.e. no garbage collection
3. Support for the object-oriented programming
I would say that any language that fits these requirements would not be substantially better than C++, so the learning curve for the new language would not be justified by any gains.
It is completely unfair to compare C++ with C#/Java/Python, as these languages complement C/C++, but can’t completely replace them.
Hmmm, would the criticisms of C++ be applicable to Objective-C as well? (Yes, I am on a Mac.)
David, I am on a Mac as well, and the times I have looked at Objective-C all sent me screaming back to C/C++/Python. God damn that language is a fugly hack! It seems to be C with a Smalltalk-style object/messaging system jury-rigged on. From reading the sample code on the Wikipedia page, it looks as ugly as I remember it.
C++’s crowning achievement in obfuscation is its static binding rules – the combination of overload resolution, template instantiation and specialization, and built-in and user-defined implicit conversions. Quite unfortunately, C++ FAQ Lite barely scratches the surface in this department, and therefore so does C++ FQA Lite. I wish I had a pointer to a good supply of examples in that area.
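One hedged contribution to that pile of examples, a classic of the genre rather than anything from the FAQ or FQA: in overload resolution a standard conversion outranks a user-defined one, with silently lossy results.

    #include <iostream>

    struct Meters {
        Meters(double v) : v(v) {}  // implicit conversion from double
        double v;
    };

    void print(Meters m) { std::cout << "Meters: " << m.v << "\n"; }
    void print(int i)    { std::cout << "int: " << i << "\n"; }

    int main() {
        // double -> int is a standard conversion; double -> Meters is
        // user-defined. The standard conversion ranks higher, so this call
        // silently truncates 3.5 and prints "int: 3".
        print(3.5);
        return 0;
    }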
Regarding empirical studies – measuring the build time of several large C++ projects vs, say, C and Java projects of similar size in LOC can yield beautiful results. The memory consumption of each compiler process is also interesting because it makes parallelizing the build more costly – more RAM per server, and likely events of workstation suffocation due to swapping. Code generated by scripts is as bad an offender as code instantiated from templates – I’ve seen g++ and Green Hills C++ consume way above 1G of RAM on such input.
C++ was named in 1983, and it’s been an ANSI standard since 1998. Is there a projected point at which that excuse will run out? I think it’s just barely possible that the problem lies in the language itself.
Dimiter: I used to work at a C++ shop where we had compile-and-link times of around twenty minutes, despite distributing compilations around the network. Total productivity killer; by the time your compilation’s finished, you’ve completely forgotten what you were working on. The codebase was less than 100Kloc, too. Now, you might say that that was because the physical and logical layout of our code (which I didn’t design) was stupid, and you’d be right, but that’s just the point: this is an opportunity to get stuff wrong which C++ presents you with and most languages don’t.
>Regarding empirical studies – measuring the build time of several large C++ projects vs, say, C and Java projects of similar size in LOC can yield beautiful results.
Can you point us at a study we could cite? In the draft paper, we compared compilation times for the Linux kernel (as a large C project which both Rob and I have done work on) and Battle For Wesnoth (as a mid-size C++ project I’ve worked on). Normalizing for SLOC we observed a 13:1 ratio in compiled source lines per second.
Objective-C has its own set of faults and foibles, and its own set of advantages as well: a small runtime and a few syntax extensions provide not just an object system, but an object system suitable for software componentry without the messiness of systems like COM. For the most part loading a class from a DLL and sending messages to instances of the class Just Works(tm), provided the objects understand the messages you send. This really isn’t possible with C++.
On the other side of the coin there’s a performance penalty: there can be no one-byte objects in Objective-C and method invocations take about four times as long as function calls, leading Objective-C programmers to use hacks like caching the pointer to the method implementation and calling that.
Also, Apple completely screwed the pooch with Objective-C 2.0, which introduces bogosities like properties for Objective-C objects. If you have an object of type NSFoo called foo with properties bar and baz, they are accessed with foo.bar and foo.baz. That’s right: the dot operator is used on pointers. Ugly, ugly, ugly. OTOH, the closures and GC in Objective-C 2.0 tickle the Lisp-head in me.
Oh, crud, I forgot lack of namespacing in Objective-C. Oh well. :)
> here’s a hint: GC > OO.
Your ideas are not new, they are decades old.
There were these guys at Sun, nearly 20 years ago, who saw the future of C++, and decided to do something about it.
The (end) result was Java, but Java isn’t explicitly the marriage of Mesa and C that wnj sought.
http://www.cafeaulait.org/javafaq.html#xtocid190291
OBTW, how is that “64-bit will revolutionize computing” / linspire thing working out?
Jim Thompson, and the pisser is that once the focus moved to Java, a much more worthy Sun language project (Self) kind of dried up and went the way of the dodo. Of course who knows how Self would have been screwed had it been aggressively commercialized the way Java was. Then again, a glimpse of such an alternate universe may be found in JavaScript, a very commercial prototype-based object language which, at the end of the day, ain’t half bad (though it’s abused a lot).
Despite some driver suckingness (more a problem under 64-bit XP than Vista) there is absolutely no reason to believe that Windows will not be the dominant operating system of 64-bit land.
Unfortunately I don’t know a study to cite. For what it’s worth, here’s the one thing I can suggest – try /Modern C++ code/ (you know, with std::tr1::shared_ptr used for a broken emulation of GC, etc.); 13:1 seems a bit tame. I can suggest LLVM vs GCC, and Boost.Python bindings vs native CPython bindings (a .cpp file using Boost.Python can compile for minutes; I think you’ll get a ratio between 100:1 and 1000:1). If no source code examples of the latter kind are readily available, I can try to hack something up. I think that examples of “modern” style are important because they show that things aren’t just bad, they actually deteriorate, and they do so based on recommendations of the top experts in the field.
Many papers and articles have been published on this subject, and the problem with most of them is that they fail to take into account that C++ must have some merits too, because there must be some reason that, e.g., all major browsers are written in C++. I don’t think a whole industry is composed of idiots, or something. I think you should ask at least 50 C++ programmers from various fields why they use it, and choose one popular field to analyze more deeply: I suggest browsers.
Probably the reason you will find is the speed and the readily available C libraries, combined with a bit more abstraction.
What I don’t get is – why do dynamic languages have to be interpreted? It isn’t particularly hard to generate compilable C/C++ with Yacc/Flex/Bison. And then you get the speed and you can link against any C/C++ library. And you can distribute a single executable with a couple of libraries, which is very useful if you want to make it easily installable. Looks like the best win/win scenario to me, why don’t Python/Perl/Ruby/etc. do this?
Grendelkhan,
I sincerely hope that there will be a time in the future when conducting this kind of “research” will be about as shameful and subject to ridicule in the scientific community as consulting Tarot cards… yet I see this sort of thing in the newspapers every week.
I mean the following approach: take a given phenomenon regarding two or more groups of people (number of bugs), demonstrate how strongly it correlates with a single factor (programming language use), but WITHOUT controlling for all other conceivable factors: programmer quality, hardness of tasks, how good the testing is (how many bugs are discovered at all), and so on. The point is, it tells absolutely nothing,
yet it appears to demonstrate some sort of a causal link between the two. It might very well be that one or the other attracts better or worse programmers, is used to solve easier or harder tasks, or is used by companies who spend little or a lot on testing.
Short answer — they don’t.
Longer answer — by virtue of their very dynamicity, code written in a dynamic language makes decisions at run time which in C are usually made at compile time. Things like “Should this value be treated as an integer or a string?” and “What methods can be invoked on this object?” and so forth. The result is necessarily a performance penalty, although these days compilers for such languages are quite good at optimizing, and can frequently come to within an order of magnitude or so of the equivalent C code.
If you’re looking for performance, Python already does this (google PyPy); Ruby is trying to catch up. Smalltalk and Lisp have been doing this for decades. Some implementations of Scheme compile to C, as you mention.
If you’re looking for an easy foreign-function interface (FFI), the part that must interface with the C world, again by their very nature function invocations work differently in dynamic languages than in C so you need a way to bridge that gap. Chicken (a Scheme implementation) can parse header files for you; Gambit (another Scheme implementation) provides a nearly-as-easy FFI; for other languages there is SWIG. If you’re using C++, for Python there is boost::python. It makes creating Python wrappers for C++ objects nearly trivial; but there may be a nasty downside in terms of compilation time.
Shenpen: what Jeff said :-)
But really, the question you ought to be asking is “why need interpreted languages be so much slower than compiled languages?” A VM or JIT, which operates at run-time, has much more information with which to optimise your code than does a compiler, which operates at compile-time. Steve Yegge has written some good stuff on this idea, and there has been some promising work done – the one everyone cites is the Strongtalk optimising VM for Smalltalk.
The performance advantages that are supposed to come out of this oft-cited factoid remain largely theoretical. C++ remains at the top of the heap when it comes to performance from an HLL, and in general the earlier you bind stuff, the faster your code will be. (Thanks to templates, C++ can even early-bind polymorphic method calls.)
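A minimal sketch of that early-binding point: with a function template, the “polymorphic” call is resolved at compile time, once per instantiation, where the compiler is free to inline it; no vtable lookup happens at run time.

    #include <iostream>

    template <typename Shape>
    double doubled_area(const Shape& s) {
        return 2.0 * s.area();  // bound at compile time for each Shape
    }

    struct Square { double side; double area() const { return side * side; } };
    struct Circle { double r;    double area() const { return 3.14159 * r * r; } };

    int main() {
        Square sq = { 3.0 };
        Circle c = { 1.0 };
        std::cout << doubled_area(sq) << "\n";  // 18
        std::cout << doubled_area(c) << "\n";   // ~6.28
        return 0;
    }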
Are you familiar with Ian Joyner’s “A Critique of C++”?
>Are you familiar with Ian Joyner’s “A Critique of C++”?
I wasn’t. Thanks for the reference.
C++ is Serious Business. Since you, esr, don’t have a full beard, you don’t have the authority to talk about stuff that is Serious Business. You can’t even comprehend C (which is Serious Business, too; the fetchmail project still suffers from your screwups), so why do you think you have the intellectual capacity to bring up any qualified criticism against C++?
> In the draft paper, we compared compilation times for the Linux kernel (as a large C project which both Rob and I have done work on) and Battle For Wesnoth (as a mid-size C++ project I’ve worked on). Normalizing for SLOC we observed a 13:1 ratio in compiled source lines per second.
I can believe that C code is faster to compile than the same number of lines of C++. However, the C++ also has a greater information density: each line of C++ can do more than a line of C. Templates really help here: the few lines of code that constitute std::find can be used to search an array, a linked list, an associative container, or even something far more abstract like a file-backed container or a database, whatever the data type in those data structures. In C you have to rewrite/copy the search code for every data structure or mimic iterators using function pointers (at a performance cost).
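To make the std::find point concrete, a small sketch (the containers are arbitrary): one algorithm serves every data structure that provides iterators, where C would need separate search code for each.

    #include <algorithm>
    #include <iostream>
    #include <list>
    #include <vector>

    int main() {
        int arr[] = { 1, 2, 3 };
        std::vector<int> vec(arr, arr + 3);
        std::list<int> lst(arr, arr + 3);

        if (std::find(arr, arr + 3, 2) != arr + 3)
            std::cout << "found in array\n";
        if (std::find(vec.begin(), vec.end(), 2) != vec.end())
            std::cout << "found in vector\n";
        if (std::find(lst.begin(), lst.end(), 2) != lst.end())
            std::cout << "found in list\n";
        return 0;
    }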
The only true way to compare is to write the same application in C and C++ the best way you know how in each case, and *then* compare lines of code (C++ should be less or you’ve done something wrong) and compile times (having not done the test, I don’t presume to know what the result will be).
>However, the C++ also has a greater information density
We’re not trying to measure stuff-that-gets-done-per-LOC, just considering the effect on the edit/build/test cycle.
foo, It does not take a ‘full beard’ to see that C++ is a load of horseshit! The Unix Haters nailed it back in ’95. Go through the rest of the comments, and you will find plenty of corroborating evidence from plenty of people who know ‘Serious Business’.
esr, from the perspective of a graybeard, which chapters did you consider good and which chapters did you consider bad?
Hmm…You know, I think I might blog about this.
Both this critique and that of the Unix Hater’s Handbook come from before C++ gained some of its most essential features. If you restricted your citations of others’ critiques to strictly those which came after templates and the STL, that would bolster your case enormously. (The UHH states that C++ lacks namespaces; it’s had them for well over a decade! Talk about out of date!)
foo:
> the fetchmail project still suffers from your screwups
I’ve heard this a few times before, but always in general terms. Can you be more specific? I don’t remember having many problems with fetchmail when I last used it. I notice that there’s an Important bug from 2002 still open in the debian list, but other than that the bug count doesn’t look too bad.
> Both this critique and that of the Unix Hater’s Handbook come from before C++ gained some of its most essential features. If you restricted your citations of others’ critiques to strictly those which came after templates and the STL, that would bolster your case enormously.
Eric: this is a good point. I don’t think there’s any need to deal only with critiques of post-STL C++, because I’m sure there are still a lot of people out there writing “pre-modern” C++, but you’ll be dismissed if you don’t address modern C++ too. Alexander Alexandrescu’s Modern C++ Design (and his library Loki) should be compulsory reading here, if you don’t already know them.
>I’ve heard this a few times before, but always in general terms
So have I. Nobody has ever specified to me what these supposedly deep bugs in fetchmail actually are. If they were serious enough, I’d revisit the project and fix them (it’s had a lead maintainer who isn’t me for some years). I suspect, on good evidence including the silly talk of beards, that “foo” is just randomly flinging feces. It happens.
>Alexander Alexandrescu’s Modern C++ Design (and his library Loki) should be compulsory reading here, if you don’t already know them.
Are they webbed? If so, where?
I appear to have misremembered the author’s name: it’s actually Andrei Alexandrescu. Interestingly, he appears to be heavily involved in the D project, and is writing a book about the language. The Loki library’s here, and the book’s readable through Google Books here. Fragments are also available from Alexandrescu’s website, but it looks like he wants you to pay for the whole thing.
Personally, I really like what C++ let me do (once templates and namespaces were added to it, anyway), but over the past decade or so I have become convinced that it’s a tool that requires too much skill to use correctly, and which is too easily misused in ways that turn catastrophic. I never had trouble with it, but I actually like learning languages in depth and understanding their idioms. Most professional programmers want to use the tool to get the job done, not understand the finer points. (I sympathize, as I feel the same way about many other tools. The attitude frustrates me anyway, of course.)
C++ let me do things like work around broken semaphores by writing a simple spinlock in assembly and wrapping it in a very clean Lock object that everyone else could trivially use, and it let me do that very easily and elegantly. I am fluent in a wide variety of languages and I have yet to find another one that would’ve done that particular task that well. That said, C++ is one of the very few languages in which I have seen messes that were significantly worse than what you usually get with Perl. :)
> >However, the C++ also has a greater information density
> We’re not trying to measure stuff-that-gets-done-per-LOC, just considering the effect on the edit/build/test cycle.
The stuff-that-gets-done-per-LOC *does* have an effect on the edit/build/test cycle. If you have to write fewer lines of code to get the same thing done it can help productivity (in terms of features) enormously. That’s the benefit of languages with huge libraries such as Java, Perl and Python, and it’s a benefit afforded by C++ too: once you’ve written the supporting code (and both the STL and boost (http://www.boost.org) provide a lot of this for you) you can accomplish a lot in a few lines of application code.
Correct use of destructors for releasing resources leads to shorter code that is also less error-prone. e.g.
boost::mutex m;
std::vector<std::string> some_data;
void do_something(std::string&);
void foo()
{
boost::lock_guard<boost::mutex> lk(m); // acquires mutex lock, ensures it is released at end of block even with an exception
std::for_each(some_data.begin(),some_data.end(),do_something); // call do_something for each entry in the vector
}
Smart pointers are also really good for managing memory and resources. For example, boost::shared_ptr provides a reference-counted pointer which deletes its object when the last reference is removed. You can use it just like a normal T* in most cases, but you no longer have to worry about dangling pointers (if you’ve got a shared_ptr to the object it won’t be deleted from under you) or memory leaks (once you no longer reference the object, either by destroying the shared_ptr instances or by assigning another value to them it is deleted). This removes two sources of bugs in C programs, and simplifies the code at the same time.
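A minimal sketch of the reference-counting behaviour just described (the Widget type and the function are illustrative):
#include <boost/shared_ptr.hpp>

struct Widget { };

void demo()
{
    boost::shared_ptr<Widget> a(new Widget); // reference count is 1
    {
        boost::shared_ptr<Widget> b = a;     // reference count is 2
        // b can be used just like a Widget* here
    }                                        // b destroyed, count back to 1
    a.reset();                               // count hits 0: the Widget is deleted
}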
I also wrote a C++-focused blog entry about how exceptions can simplify code: http://www.justsoftwaresolutions.co.uk/design/exceptions-make-for-elegant-code.html
If you’re going to be writing a critique of C++ then you need to pay attention to the things that are good about it, as well as those that are less so. Another of my favourites, which again is related to stuff-you-can-do-per-LOC, is operator overloading. For example, I wrote some code that used fixed-point arithmetic and wrapped it in a class with overloaded operators. As a result, I was able to change an entire codebase to use it instead of double just by changing the variable types. For example, the following function calculates the two roots of a quadratic equation:
std::pair<fixed,fixed> solve_quadratic(fixed a,fixed b,fixed c)
{
std::pair<fixed,fixed> res;
fixed const temp=sqrt(b*b-4*a*c); // square root of the discriminant
res.first=(-b+temp)/(2*a);
res.second=(-b-temp)/(2*a);
return res;
}
Without operator overloading, all those arithmetic operations would have been function calls, making the code harder to read and harder to maintain. It would also make it harder to switch between double and fixed. From where I’m standing this definitely impacts the edit/build/test cycle.
Anthony: operator overloading is hardly unique to C++ :-)
To all the C++ playa-haters out there:
Is your language pimp enough to support multidimensional analog literals?
Thought not.
Jeff: check the Reddit thread, which contains implementations in Python, Ruby, Common Lisp and Befunge. I think the tricks used for Ruby and Python should be applicable to Perl, too.
I’m not saying that the C++ version is anything short of totally badass, mind :-)
I’ve tried all sorts of programming languages – including C, Python, Ruby, Java and even less commonly used languages like D – but I keep coming back to C++ for any real work.
I’m the first to agree that some C++ features are overly complicated and shouldn’t be overused (with the usual suspects being included in the list — operator overloading, templates, multiple inheritance, …) – yet at the same time, those very features are very powerful and useful if used right.
Let’s start with templates — the main reason why they have a bad reputation is that the STL, the standard library that comes with every C++ compiler, overuses them quite a lot. In the STL, everything is a template, even something simple like a string. It makes the STL hard to read, harder than necessary to learn, and extremely hard to debug given the error messages most compilers spew out when a template is used wrong.
At the same time, templates are extremely useful where using them actually makes sense, such as in implementing hash maps, linked lists, or mathematical functions that are supposed to work regardless of the numeric type passed to them, without casting from one type to another.
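For instance, a sketch of such a numeric-type-agnostic function (the name and bounds are illustrative): the single definition serves int, double, or a user-defined number class, with no casting anywhere.
template <typename T>
T clamp(T value, T low, T high)
{
    if (value < low) return low;    // works for any T that provides operator<
    if (value > high) return high;
    return value;
}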
Java tried to do without templates — and reconsidered in 1.5, introducing the concept of “generics” — which are separated from C++ templates by virtually nothing but the name.
It is similar for operator overloading – of course that can be used to generate severely obfuscated code, but — if used right — it can also be used to make code a lot more readable and to make libraries a lot more intuitive to use. I’ll take
std::string a=std::string("C++") + " doesn't suck";
a += " after all";
over
char *a=strdup("C++");
a=realloc(a, strlen(a) + strlen(" doesn't suck") + 1);
strcat(a, " doesn't suck");
a=realloc(a, strlen(a) + strlen(" after all") + 1);
strcat(a, " after all");
any day. Yes, all the modern interpreted languages allow that syntax as well (usually by having a hardcoded string type in the language), but operator overloading makes it possible to have similar useful features for other classes that are too specific to be in the language itself or that were invented after the language was. It is easy to understand what this would do:
NewsTickerWidget += "new message to be displayed";
and that sort of thing is made possible by using operator overloading on classes where usually operators wouldn’t make any sense.
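A hypothetical sketch of what could lie behind such a line (the class and member names are illustrative, not from any real toolkit):
#include <deque>
#include <string>

class NewsTicker
{
    std::deque<std::string> pending_; // messages waiting to be displayed
public:
    NewsTicker& operator+=(const std::string& msg)
    {
        pending_.push_back(msg); // queue the message; the widget displays it later
        return *this;
    }
};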
Overloading [] can be extremely useful when implementing e.g. hashmaps — I’d take
if(KeyValuePair["FeatureEnabled"] == "yes")
over
if(!strcmp(g_hash_table_lookup(table, "FeatureEnabled"), "yes"))
any day. Yes, you get the same feature from languages that support a hash map as part of the language (e.g. Python) as well, but a hash map is not the only use case that makes sense, and C++ allows you to add that feature in any class where it makes sense.
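A minimal sketch of a class supporting the [] syntax above, assuming you want std::map-like semantics (the class name is illustrative; a missing key yields an empty value):
#include <map>
#include <string>

class KeyValueStore
{
    std::map<std::string, std::string> data_;
public:
    std::string& operator[](const std::string& key)
    {
        return data_[key]; // inserts an empty string if the key is absent
    }
};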
As with templates, it is a feature that can make things a lot better if it is used well, but that can mess things up badly if it is used in the wrong way – and as with templates, it is overused by the likes of the STL.
The same goes for the 3rd usual suspect – multiple inheritance. This is a feature I could usually live without, but even that has its uses, especially if you’re using a library that isn’t very well designed and you have to create a layer on top of it to make it usable (where of course usually the right fix would be to simply not use a library that isn’t well designed).
Another thing C++ haters from the interpreted languages camp often bring up (and which I sorely miss when using any of their preferred languages) is the preprocessor. While that too can be misused to create obfuscated code, it is an extremely useful thing to have, especially when targeting multiple platforms, or supporting optional features (e.g. ./configure –disable-debug can actually leave the debugging code out of a binary that is to be deployed on an embedded device, rather than just hiding the messages).
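A sketch of the kind of build-time switch being described (the macro names are illustrative): when the build system omits -DENABLE_DEBUG, the logging code vanishes from the binary entirely rather than being skipped at run time.
#include <iostream>

#ifdef ENABLE_DEBUG
#define DEBUG_LOG(msg) (std::cerr << (msg) << std::endl)
#else
#define DEBUG_LOG(msg) ((void)0) // compiles to nothing in release builds
#endif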
Given the above, the thing that needs changing is the STL, which severely abuses the questionable features, not C++ in itself. (The one thing that is nice about Java when coming from the C++ world is that it comes with a far more reasonable default class library compared to the STL – but you do find it lacking the other features after a while.)
Take C++, forget about the STL, and throw in a sane class library, such as QtCore, to replace it — and you’ll instantly have a programming language that just gets the job done, creates fast code, and is totally intuitive to use at that.
Some proof, especially for the Python guys: Take a look at code written with PyQt, then compare it to equivalent C++ Qt code — usually the difference is just language syntax such as : vs. {}; other than that, you can translate it line by line because the C++ library at the core of the C++ Qt code gives you all you need with a sane syntax.
Now translating PyGTK to C GTK code is an entirely different matter – because the C language at the core of that misses all the useful features such as an object model (even GTK developers admit that “to put it bluntly, defining a class in GObject isn’t easy” (quote from the Official Gnome 2 Developers’ Guide), and that “the derivation of new widgets in GTK+ C code is so complicated and error prone that almost no C coders do it” (quote from the GTKmm FAQ)), templates (for readable hashmap handling) and operator overloading (needed to provide readable string handling).
C++ isn’t perfect, but especially with a good class library to replace the STL, it is still the best language out there.
Although I myself very much dislike C++ (and use it), in academia it is heavily used for numerics for lack of a better mainstream option in this respect. The fact that you can get to the bare metal is crucial for number crunching. (Too bad Fortress will probably never be more than a research project running on the Java VM.)
My favorite C++ quote, spoken only recently by Ben Goertzel:
“C++ with use of boost and templates is a very elegant language … hidden behind a very ugly syntax.”
OpenCog is another case where, pragmatically, C++ is the only language in which it is possible to implement the ideas of its designers (both its advanced AI and systems-level designs).
Using the term “interpreted languages” in the context of language design further perpetuates the problem of people confusing a language and its implementation. It’s a newbie mistake and will probably cost you some credibility.
Bero: I kinda doubt the paper will devote too much space to operator overloading per se, as it’s a feature of many other languages (including Python, Eric’s favourite language). It may address problems with C++’s version of overloading, or problematic interactions with other features, but I guess we’ll have to wait and find out.
As for your comments about the STL: the T stands for “Template”, that should perhaps give you a hint :-). The STL is only a subset of the C++ standard library, specifically the part that deals with collections and related algorithms. And it’s full of templates because that’s the only way to achieve its goals given C++’s restrictions. For instance, std::string is a template so you can implement custom allocators (dunno why you’d want to, but apparently that’s a requirement).
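Indeed, std::string is not a class of its own but a typedef over a template, with the allocator hook as the third parameter — roughly equivalent to:
// roughly what the <string> header provides:
typedef std::basic_string<char,
                          std::char_traits<char>,
                          std::allocator<char> > string;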
The C preprocessor can be used on code in any language, not just C or C++. The Haskell standard library uses it extensively, for instance, and it can be invoked by the interpreter’s -P switch in Perl. Perl also has a feature (see Filter::Util::Call) whereby your code can be filtered through an arbitrary Perl module prior to compilation. Both source filters and the -P flag are strongly deprecated (except for joke modules like Lingua::Romana::Perligata), because the community’s experience has been that they cause many more problems than they solve. But still, the facility’s there if you want it.
I’m curious why you find the D programming language to be “deeply idiosyncratic.”
On CINT+ROOT at CERN:
A physicist at CERN has recently written to me that he considers quitting physics altogether because he is sick with the C++ programming which has gotten out of hand and he is struggling with C++ 100% of his time not doing any physics. He seems to be better versed in programming than most physicists.
Another physics colleague at CERN (a former sysadmin with an indisputably high intellect) stated that he stopped attempting to write anything non-trivial in C++ with CERN software because debugging is hopelessly hard. He limits his physics explorations to what he can achieve by merely “pushing the buttons”.
Every user of CINT+ROOT I’ve had a chance to communicate with says the same thing. All demos of CINT+ROOT I saw crashed with a segmentation violation.
Last Friday I received a request from a colleague to check some events because the C++ port of an important algorithm I developed in Oberon would crash, and because the prospect of debugging C++ (again) was so intimidating that they wanted to be really, really sure that the original program was ok. Of course it was.
And it goes on and on and on….
The adoption of C++ as a standard at CERN in the early 90’s has proved to be a major disaster.
Eric, please do your best with that paper. And do not mince matters.
C++ with good libraries like Boost or Qt, and using things like smart pointers or auto pointers, is just as easy to code as Java or C#.
Also, Python with wrapped C code is good, but still does not offer the same level of raw performance, or power, or precision, that C++ does.
And, just look at all the truly world-class software out there that is actually written in C++ (including the JVM and the CLR, ironically) – after all, there must be a really good reason why Google uses C++ for its search engine (Google does use Java and Python too, but for internal and/or front-end stuff).
Yup, C++ is a big, complex, sometimes ugly, sometimes hard to learn, programming language with some serious warts. But it is a very powerful, efficient, precise, tool that has no real equal for the types of tasks for which it is particularly well suited.
Thus, if ESR is going to write a paper about his qualms with the language, it will only be useful if he (and his co-author) can suggest something better. Otherwise, it will be yet another anti-C++ rant that is, quite frankly, less than useless.
Well, just make sure that you’re writing about the right language.
C++ today is not C++ as it looked in 1998, or earlier. Make sure you’re familiar with *modern* C++ before tearing the language apart.
In particular, your mention of manual memory management points to a problem that’s pretty much solved in modern C++ code through RAII and smart pointers. If you call new in your user code, you’re doing it wrong. If you use raw pointers, you are *probably* doing it wrong.
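A minimal sketch of what that looks like in circa-2008 C++ (the Connection class is illustrative): ownership lives in a scoped smart pointer, so there is no delete to forget.
#include <boost/scoped_ptr.hpp>

struct Connection
{
    void send() { /* ... */ }
};

void handle()
{
    boost::scoped_ptr<Connection> conn(new Connection);
    conn->send();
} // conn is released here automatically, even if send() throws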
This isn’t to say the language is perfect (far from it), or that it’s not humongously overcomplicated (it is), but *because* it’s so complicated, it’s very easy to write criticism that doesn’t actually apply if the language is used properly.
However, I’ve yet to find a language with better support for generic programming, or for that matter, a language which supports OOP without limiting you to just that paradigm.
Read the Usenet post by Stepanov again. He strongly makes the case that C++ is the only language that could do all of what he wanted, to wit: automatically specialize formal algebraic structures expressed in general terms with a Turing-complete parameterized type engine.
If templates were used to get around restrictions in C++, those restrictions inhere to just about every other programming language.
(No, Lisp macros don’t count. Lisp macros are based on textual substitution, much like C macros, and know nothing about types.)
Never done much high-performance or embedded applications programming, have you?
> Never done much high-performance or embedded applications programming, have you?
Not much, and not in C++ :-) I realised that it would be something like that soon after posting – d’oh. Actually, I think this could be construed as another failure of C++’s “all things to all men” approach: because some small fraction of users need custom allocators, everyone else needs to live with all the botheration arising from std::string being a template.
> If templates were used to get around restrictions in C++, those restrictions inhere to just about every other programming language.
I’d describe “static typing without first-class type variables” as a restriction, and one that’s far from universal. Dynamic languages don’t need templates to implement a sane collections framework; the Hindley-Milner languages don’t need them because they have first-class type variables. You’re certainly right about C++ templates being more powerful than H-M; the first time I gave up learning Haskell was because I couldn’t translate some C++ code I’d written (to handle polynomials with coefficients in arbitrary finite fields) to Haskell without either checking types at runtime or generating the code with a Perl script. However, you don’t need a Turing-complete type language for this stuff.
By the way, have you ever taken a look at Qi, or the various dependently-typed languages like Cayenne or Idris? Research toys for now, but potentially interesting.
> Read the Usenet post by Stepanov again. He strongly makes the case that C++ is the only language that could do all of what he wanted, to wit: automatically specialize formal algebraic structures expressed in general terms with a Turing-complete parameterized type engine.
*reads*
Interesting stuff, though I’m not at all sure I understand it. The optimizations he talks about should be possible with something like GHC’s Rules pragma, though.
Let me try to meet you half-way. As a platform for doing crazy, brain-bending things with types, C++ stands alone. As a platform for developing useful applications, well, it is not my favourite programming language :-)
> However, I’ve yet to find … a language which supports OOP without limiting you to just that paradigm.
Really? Which languages have you tried? If Common Lisp and O’Caml are too outré for you, Perl and Python manage this trick just fine.
“harpoon the Great White Whale”? The Great White Whale, a beautiful animal, was harpooned by Captain Ahab, a monomaniac. This analogy is really bad. Even Ishmael appreciated the majestic whale.
Eric,
(Trying a shorter version of my earlier comment – Please do not post if the reason the previous comment did not go through is some sort of moderation)
I agree with Jeff that writing about the problems of C++ will only be effective if you can offer something that is better. Consider your own critique of the Unix Hater’s Handbook: the best chapters were those where the authors had something better in mind that they could use as a reference. I believe you suggest that Python or similar languages are the “better” solution in the language space, but as many have pointed out, there are many things C++ can do that Python can’t. Let me respectfully propose my own biased answer.
Designing a modern language is a problem I have thought about long and hard for a very long time (I’d say 15 years). The result is the XL programming language, and a programming paradigm I called concept programming. Highly idiosyncratic, certainly, and even more highly confidential at this point. But I think it’s worth sharing with you. Your criticism of C++ will be stronger if you know of other ways to achieve the same goals as C++ than if you suggest that we should change the goals.
And to make it clear, the goal in my opinion is to be able to develop very large and complex programs that still take advantage of the machine to the maximum possible extent. In other words, I am not ready to sacrifice performance or memory usage for convenience, and I don’t think we should have to. Many commenters here expressed the same feeling.
Eric,
I believe I’ve been running into some size limit on comments, so I’ve placed the rest of my comment on-line. I’ve been trying to focus on things that you or your readers seem to care about: efficiency, code density, high abstraction levels, operator overloading, preprocessing, garbage collection, rapid prototyping, quick build cycles, interaction with other languages such as C, simple syntax.
A friend referred me to this, and I figured I’d dump in my two cents.
I’m a good programmer, and I’ve written and debugged a lot of code in C++ over the last 20 years, several hundred thousand lines at least. My first languages were several flavors of assembly, and going to C from there was very natural. Going “up” to C++ from there seemed natural too, and C++ was my exclusive language for the better part of a decade. I know the language well enough to write a compiler for it, and that includes a lot of its deepest, darkest, ugliest corners.
I also have a proper Ivory-Tower CompSci degree, and thanks to it, I’ve had a deep love of Lisp and the purity and simplicity of the pure-functional languages and their ilk ever since I first laid eyes on them. They have their warts too (*every* language does), but there was an elegance there I didn’t see in other languages I’d encountered before. Despite their lack of speed, which grated on my assembly roots, and the eggheadedness of some of them, they were pretty languages in ways that C++ and even C could never be.
And then about two years ago, I had to do a lot of programming in PHP for work. Now I know PHP’s not a pretty language, but despite its ugly corners, I was finding that it was a lot more fun to code in than C++ had been for ten years before it. I was an assembly programmer at heart, and I love the feel of twiddling individual bits and knowing what’s going on in every corner of memory; so why was I getting hooked on this dinky kiddie scripting junk?
And I realized the answer one day: C++ had made me miserable.
The problem with C++ is subtle: It’s not so much that X feature or Y feature is inherently a bad feature; it’s that they all fit together to create one unmanageable whole. There’s nothing wrong with, say, classes or templates or manual memory management or virtual tables, but put them together and you have a nightmare. C++ is a duck-billed platypus: Take a few of the “features” and it could be fine, but put them all together and you have something that’s a monster. And worse, the monstrosity spreads to code written in it.
I have a piece of software of non-trivial size that my livelihood depends on. The design is elegant. The concepts fit together well. It’s written in C++. And it’s unmaintainable. Every time I try to add a feature, or improve an algorithm, or alter something small, it often comes falling down like a house of cards and I always seem to have to prop it back up again. I built into it years of hard work and study, carefully abstracting everything I could, using good algorithms and good design patterns, but the language itself almost seemed to conspire against me at every turn during the initial implementation and still seems to during maintenance.
Dealing with large, complicated data sets is just something that C++ isn’t good at and never will be good at. And STL didn’t help: It didn’t resolve any of C++’s fundamental issues, and in many cases, it just covered them up, giving you yet another layer of junk you had to work around to solve your problems. Why does anyone have to write whole lines of complex junk involving piles of <> and :: and . just to accomplish what Lisp can do with a simple (cons) and (cxr) or two? PHP may be ugly, but its highly flexible arrays make storing and managing giant messy piles of data easy. Lisp may be ugly for “traditional” programmers to read, but its lists are incredibly good at collecting and managing messy piles of data. Javascript, Python, Ruby — even Perl beats out C++ at managing complex data structures. C++ isn’t good at dealing with the realities of unpredictable, messy real-world data.
Moreover, I realized something else that was very startling: The speed of C++ is a myth — any C++ program large enough to perform a nontrivial task has tons and tons of additional support code — code that’s doing exactly what another programming language provides natively. The C++ program really isn’t any faster or simpler; the work’s just moved to the programmer-side instead of to the runtime-side. There are plenty of studies out there to show that manual malloc() and free() (or new and delete) don’t provide any more speed improvement in a large program than GC does; and in the same vein, all of the extra work I was undertaking to manage objects and data structures and keep the program from tromping all over itself was no different than the under-the-hood work languages like Javascript could provide for me. They took the same order of execution time, but in C++ I was the one who had to write all the code. I was doing the hard part myself and getting nothing for it.
C++ is a camel, a horse designed by a committee, the perfect language only for hypothetical tasks. That it solves any real-world problems is, at best, a bizarre quirk of history, caused by its ability to provide high-ish-level abstractions in environments constrained by tight resources where other languages wouldn’t fit well. But as the resources grow, the usefulness of C++ is diminishing — probably matching Moore’s Law, at the rate of one-half usefulness, every 18 months. An unfortunate consequence of Moore’s Law is that theory-based designs have more inherent value than practice-based designs; and I strongly suspect Moore’s Law will kill C++ just as effectively and just as brutally as it killed Fortran.
So today, I consider myself a “reformed” C++ programmer: I still code in it when I have to (and, sadly, I have to), but I don’t use it and won’t use it on new projects. PHP, Python, Javascript, even Perl… I’m adding some Ruby, and experimenting with D. I miss C’s speed as only an assembly lover can, but I won’t go back to C++ just to gain that. I’m no longer coding in C++ any more than I can humanly avoid, and I’m all the better and all the happier for it.
>There’s nothing wrong with, say, classes or templates or manual memory management or virtual tables, but put them together and you have a nightmare. C++ is a duck-billed platypus: Take a few of the “features” and it could be fine, but put them all together and you have something that’s a monster. And worse, the monstrosity spreads to code written in it.
Your comment is critiquing C++ from pretty much exactly the same angle as the draft paper. In fact, parts of it read eerily like our draft.
> Parts of it read eerily like our draft.
Well, if you like anything I wrote in that posting, feel free to steal it: I hereby place it (and this one) into the public domain.
(Darn spam filter ate my angle brackets, though: “Why does anyone have to write whole lines of complex junk involving piles of <> and :: and . just to accomplish what Lisp can do with a simple (cons) and (cxr) or two?”)
But, yeah, I coded C++ for a long time, and just got plain old frustrated dealing with it. I kept finding myself writing a lot of code and not getting much result for it, and the few so-called benefits that C++ offers don’t really hold much water in the age of gigahertz processors and gigabytes of RAM. Lisp’s parentheses may be uglier than sin, and SML’s design may grate like fingernails on a chalkboard, but Moore’s Law almost guarantees that languages with strong theoretical underpinnings will eventually beat languages based on practical demands, and we’re starting to see that now. C++, C#, Java, maybe even Python, I suspect they’ll all eventually succumb to the inexorable bloom of transistors, all beaten down by languages that can do the same things in less code and in the same order of execution time. If Lisp or Smalltalk or Javascript or Haskell doesn’t beat them, one of their conceptual descendants probably will.
One thing I *didn’t* mention was the interesting realization I made while coding in the “scripting” languages, which is that performance in them is based less on traditional principles of writing efficient algorithms and more on finding clever ways to make the underlying runtime (the underlying C code) — which is presumably already well-optimized — do all the hard work. Just as C was a high-level wrapper on assembly, “scripting” languages can really be thought of as high-level wrappers on C functions and C libraries, and in that context, there’s sometimes no real difference in performance between high-level scripting and C, since it’s really all C anyway — which leaves C++ out in the cold with no purpose.
Really? Then why did companies such as Google choose C++ as an implementation language to manage vast quantities of such messy real-world data? Couldn’t be because C++ is the only tool out there that can get the job done with minimal overhead cost. Nah.
Patently false. The speed gains to be had from GC are largely theoretical; to realize them you need five times the RAM available as it takes to run the equivalent manual-memory-management program. Furthermore, modern C++ provides plenty of methods of fast automatic memory management in the form of smart pointers; and with custom allocators the speed gains of C++ over a GC language may be even greater.
I’ve been programming in C++ ever since the early CFront days, and I’ve gone through many love/hate phases with the language. I was on the verge of giving up on it forever when the STL emerged and fixed — or rather, ameliorated — many of C++’s most irritating flaws. But as I’ve worked more with more modern languages like Python, Java, and C#, the defects of C++ only become more glaring. In particular, the Byzantine syntax makes it a maintenance nightmare (and templates exacerbate this problem tenfold). I pity those poor unfortunates who had to write compilers, debuggers, and editors for this language, and the difficulty of doing this is evident in the overall poor quality of C++-specific editors and IDEs.
The main problem in C++ seems to be a philosophical one. Consider the humble “string”: rather than provide a single, well-defined string type the C++ designers opted instead to leave this up to library developers. The result was that there are probably hundreds of string implementations, few of them interoperable or sharing a consistent API. Modern developers also have to worry about memory management, ASCII/UTF8/Unicode conversions, endian-ness, and all the other nightmarish rubbish that C++ was supposed to hide. If I must handle “strings” as structureless bags of bytes, then I might as well use C and get rid of the ridiculously complex C++ syntax. (And were iostreams and locales really worth the effort that the designers put into them?) The C++ designers sacrificed clear design on the altar of maximal flexibility…but the maximal flexibility came at the cost of complexity.
I’ve often heard people say that using C++ still makes sense for performance-intensive applications like games, but this seems pretty silly to me. Modern graphical subsystems like DirectX and OpenGL are indeed complex, but their interfaces are hidden behind C-compatible APIs; ditto physics engines and so forth. On modern hardware, the difference in execution speed between C++ and (say) Java is negligible in real-world terms. Even embedded devices like smartphones, PMPs, and netbooks can run Python and Java apps plenty fast even for media and games.
C++ is more of a lesson in how *not* to do things. I cannot imagine why anyone would choose this language for new development when there are so many great alternatives.
> I wonder if the problems of C++ might be caused by its lack of an interactive interpreter.
I wanted to respond to this point as well. The biggest problem with C++ from a developer’s/tool-builder’s standpoint is that it has limited ability to expose metadata. There are some non-standardized ways to get information on data types and so forth, but it’s still a huge pain to “discover” information about a C++ object: type information, visibility of members, methods, properties, and so forth. This means that both editors and debuggers tend to be of rather limited usefulness in many cases because they cannot accurately parse the C++ object (particularly when templates are being used).
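The standard facilities really are that thin. A sketch of roughly all that portable C++ offers a tool builder (the class names are illustrative): RTTI yields a mangled, implementation-defined name and says nothing about members or methods.
#include <typeinfo>
#include <iostream>

struct Base { virtual ~Base() {} };
struct Derived : Base { int invisible_to_tools; };

int main()
{
    Base* p = new Derived;
    std::cout << typeid(*p).name() << std::endl; // e.g. "7Derived" under g++
    delete p;
    return 0;
}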
> If you have to write fewer lines of code to get the same thing done it can help productivity (in terms of features) enormously.
…and one last response (I don’t mean to hog the thread, eric)…
In my experience, C++ tends to promote complexity rather than reduce it. Lacking a consistent and well-defined idiom for accomplishing common tasks, developers tend to adopt overly ornate solutions to fairly simple problems. (Just look at the Boost library if you doubt that.) C programs might contain more lines of code than a comparable C++ program, but they are often much more straightforward, and far easier to profile and debug. C++ is often too clever by half, and often *causes* the very problem it is meant to *solve*: that is, software complexity.
A typically American conceit — that growth could continue untrammelled forever. In reality, the transistor bloom is already running into hard resource and energy limitations, not to mention limitations imposed by the laws of physics. In Europe, where the permanent energy crisis has been felt more drastically, there is a burgeoning “Low Power Computing” movement which emphasizes necessary sufficiency for the application when it comes to CPU speed and, hence, power consumption. Such a movement necessarily calls for brutally efficient applications.
That’s just it: most languages with nontrivial type systems never really made it past the research toy stage, although Haskell comes close. There is one glaring exception: C++.
That is indeed a severe weakness of C++, although std::string helps to ameliorate it. Part of the problem is that it came too late, leaving time for everybody and their cousin to come out with their own string class (CString, QString, etc.) in order to improve on crufty old `char *’. Handling Unicode and multibyte strings is its own can of worms: no language that I know of does it correctly, but the Java and .NET runtimes come closest. The vast bulk of serious XML applications are written in Java for a reason: it is my understanding that as of 2007, all the major C and C++ XML parsers still harbored latent bugs due to the fact that they handle XML data in what are effectively C strings, and make bogus hidden assumptions based thereon: for instance, that a zero byte anywhere in the string means “end of string”. And because the Perl, Python, Ruby, etc. XML libraries relied on those parsers… well, you figure it out.
> Then why did companies such as Google choose C++ as an implementation language to manage vast quantities of such messy real-world data?
And why does Google code everything except its core search engine in Python and PHP? Surely if C++ is the end-all be-all, there’s no reason to write even a single line of code in anything else.
> The speed gains to be had from GC are largely theoretical
Which explains why the innards of GCC now use GC, why the innards of Firefox now use GC, why the innards of a number of programs are now using Boehm-Demers-Weiser… But so what if it’s five times the RAM? So what if it’s ten times the RAM? So what if it’s a *hundred* times the RAM? We live in an era where 32 bits is no longer sufficient to count addresses, and we have to start learning to code like that. I cut my teeth on a processor where there was 64K total and half of that was in use by ROM and video, and I learned hard to squeeze every byte in my code. But… there’s no point these days in coding like we’re still on a 64K box, because even little embedded things like cell phones and video game consoles have a gazillion times the RAM of those old 16-bit boxes we grew up on. The technology changed; it’s high time we changed to match it.
> A typically American conceit — that growth could continue untrammelled forever.
I didn’t say it was a conceit. I said it was an inexorable law. Moore’s Law continues its march unbroken. People have tried hard to break it faster and to break it slower, but it stubbornly refuses to stop doubling our transistors every 18 months like clockwork. Maybe it’ll stop someday; but it hasn’t shown any signs of doing so, and as long as it keeps going, we have to acknowledge that.
And the argument about energy is a false one: Yes, energy is a *vital* issue, but again, look at cell phones: These dinky little things have megabytes and gigabytes of RAM on them now too and run quite happily on very little juice. They trail behind desktop and back-office gizmos, but that’s to be expected: They’re low-power. Low-power desktops are appearing now, and laptops have been running low-power for years, almost by definition. The bloom of transistors says nothing about power requirements; it just says that the number of transistors you can cram onto a chip will keep doubling, whether you like it or not, and that has major implications to everything in the industry, language design included.
The fact is that the days of the 16-bit processors are not just over — they’ve been over for more than a decade, and we have to stop pretending that we’re still so resource-limited that we need to sacrifice conceptual elegance in our languages just to gain a little execution time. The computer exists to do our work for us, not the other way around.
> That’s just it: most languages with nontrivial type systems never really made it past the research toy stage, although Haskell comes close.
That’s mainly because nontrivial type systems solve problems that almost nobody encounters in the real world. (In fact, if you *are* encountering problems that type theory helps you with, you’ve probably already engineered yourself so far into a corner that type theory isn’t going to help *much*.) You have to *invent* problems for type systems to solve. If all you want to do is crunch numbers, or filter some text, or even analyze a big monster database, the type systems really don’t help. Type systems rarely get you any closer to solving your problem; they just help to keep you from tripping over your own feet, which assumes the predicate that you *were* tripping over your own feet to begin with. And in environments where time is money (that’d be everywhere except the Ivory Tower), “getting closer to solving your problem” is the only imperative. The fact that *one* language (C++) succeeded is largely due to the fact that C++’s nontrivial type system was snuck in by the back door: People started using C++ for stuff because it did a little more than C did, and then templates were added, and long after C++ was well-established, people started realizing that templates were useful for a lot more than just not having to write two classes where one would do.
Type theory is the golden fleece of computer science, and one of my personal pet peeves: Even if we are lucky enough to find its “perfection,” we’re not going to find anything useful we can do with it. Hang it on a wall, admire it, give out some awards for it, but at the end of the day, it’s still just a hunk of useless fuzz hanging on a wall.
> C++ is more of a lesson in how *not* to do things
Testify, brother! :-D
> Really? Then why did companies such as Google choose C++ as an implementation language to manage vast quantities of such messy real-world data? Couldn’t be because C++ is the only tool out there that can get the job done with minimal overhead cost. Nah.
Well, it’s certainly not because their core engine was written over ten years ago when the hardware and software landscapes looked substantially different. And their continued usage of the language certainly has nothing to do with institutional and cultural forces. Nope, definitely not :-)
As for the “low power” thing: it’s precisely this that is driving the push to multi-core, and C++ is just about the worst language to write a parallelizing compiler for, precisely because of the low-level hackery it allows.
Does that mean that the angle is that you can’t create a language that offers, say, both templates and manual memory management without creating a monster? Why not? Just because C++ implemented concepts such as templates using an extraordinarily bad syntax does not mean that everybody is bound to make the same mistake. Bjarne did not have to use “angle brackets” as delimiters, and some members of the C++ community warned him at the time. But the question is: can we achieve the same objective, e.g. combine templates, garbage collection and low-level memory management, without creating a monster?
It’s not like we have much of a choice. Moore’s law enables more complex hardware. This in turn enables more complex software. You now need millions of lines of code under the hood and three or four programming languages just to draw “Hello World” in a browser. Unfortunately, our brains don’t scale exponentially like Moore’s law. So something has to give. The solution so far has been to invent new abstractions. C++ has attempted to combine, more or less successfully, multiple forms of abstraction, multiple paradigms. Granted, the result is not exactly pretty, but we still have to try.
If the article is just about bashing C++ for not getting this or that right, it’s probably going to be a lot of fun, but ultimately sterile. If the article is about ways to combine different programming models together in a better way than C++ did, then it will be both fun and useful.
Oh, and the platypus thinks that you are ugly.
But was the GC chosen for its performance? Or for the convenience it brings to programmers and for the increased code stability?
It’s all driven by economics, really. The program that uses 100 times more memory or runs 100 times slower may generally be coded 100 times faster. Sometimes, time to market wins, sometimes, performance wins. As esr explained in “Why I hate proprietary software“, the core of the Wesnoth game he’s working on is an interpreter for some game description language. Why? Presumably because it makes it easier and faster to work on the gameplay. Here, time to market and convenience trump raw performance. On the other hand, when you develop a virtual machine monitor and that the goal is to run Linux or Windows in the VM as fast as possible, raw performance matters. If anything, you rewrite C code into assembly when it’s too slow… Garbage collection in that code? Thanks, but no thanks. Unless you don’t mind waiting 2 hours for your Linux guest to boot.
If you ran on a machine with 64K of memory, chances are that it also had a BASIC. So even back then, you had the exact same trade-off: code quickly in BASIC, or code more painfully in assembly language and get something that was fast. The only real difference is that, back then, even simple games were taxing enough that you needed assembly language.
You can use this extra power in two very different ways: to waste energy with machine models that are unnatural to the hardware, like Java virtual machines or garbage collected memory, or to run stuff that you could not dream of on earlier generations of hardware, like sending live video. Since, at this point in time, sending live video is taxing the hardware resources of most cell phones, this video-compression code is not written in Java and doesn’t use a garbage collector (at least not in the compression core). That will change, probably, but by the time people code codecs in Java, the edge of programming will be something else, and it will use every single cycle.
To get back on topic, I hope that esr’s article will not take this kind of “silver bullet” approach, “one size fits all”. I’m a bit concerned by the “GC > OO” hint. I’ve not seen many programming techniques that were universally good. The art of programming is largely about having the biggest possible toolchest. I believe that is one reason why so many people like C++: it’s a language that tries to deal with many problems at once, not just one.
> Sometimes, time to market wins, sometimes, performance wins.
Can it really be 2008 and we still haven’t learned the lesson of Micro$oft or the World Wide Web? 99% of the time, time-to-market wins. Visual BASIC succeeded because time-to-market wins. Ruby is succeeding because time-to-market wins. Performance wins in a very few specialized cases, and it loses everywhere else — and you can make a very good argument that those specialized cases are better coded in C, not C++.
> But was the GC chosen for its performance? Or for the convenience it brings to programmers and for the increased code stability?
Convenience and code stability, of course. But you still haven’t made a good argument that performance is important anymore in anything but very specialized cases, and I can make a VERY good argument that convenience and stability are more important today than they’ve ever been in history.
> If you ran on a machine with 64K of memory, chances are that it also had a BASIC.
Sure it did, and Applesoft was my first language way back in the day. I coded in assembly on the Apple II only because you often couldn’t do what you wanted to in BASIC — certain fundamental abilities just didn’t exist. Those same limitations thankfully don’t exist in the modern high-level languages: The only real differences between Python and C++ are that Python is a far cleaner design and C++ programs run faster and use less memory. Had BASIC been able to do everything assembly on the machine could, and had speed and memory not been an issue, I’d have coded everything in BASIC, and so would you have. Nobody wants to do it the hard way; we do it that way only because of hardware limitations, and these days, hardware isn’t very limiting, even on embedded boxes.
> sending live video is taxing the hardware resources of most cell phones, this video-compression code is not written in Java and doesn’t use a garbage collector
Which is why some of the newer cell phones have a built-in hardware video decompressor, among other hardware widgets: That lets you do video in higher-level languages at far less programmer cost and far less power cost too: A dedicated video coprocessor will still beat the best hand-optimized assembly you can run on a general-purpose processor, and once the heavy lifting is moved to its own transistors, that leaves the general-purpose processor with little else to do except GC.
> I believe that is one reason why so many people like C++: it’s a language that tries to deal with many problems at once, not just one.
“Jack of all trades, master of none.” Sums it up pretty well.
Jeff Read: “Low power” concerns have little (though not nothing) to do with energy prices, but with the physical constraints of powering mobile devices (battery charge cycles). In the stationary (AC power) setting, the issue is removing heat from the chip/device (and again the physical constraints of supplying/handling the power within small volumes), not the price of electric power.
Shenpen, others: As for “industries of idiots” (not), one thing that C/C++ (and related style languages, including Perl, Java(script), Python, etc.) have going for them is that they lend themselves to an ample supply of commodity talent. More “advanced” (or anyway niche) languages have always suffered from an orders-of-magnitude smaller supply of programmers and tool-vendor focus. It’s the same as in every other domain – the KISS principle.
Sean Werkema: “Jack of all trades, master of none.”
Sadly, after decades of “experience” in a software engineering “career”, I have to conclude that mastery (in any and every sense) belongs in the spiritual rather than the commercial domain.
As it comes to the business subject matter, commercial success appears to require at most mediocrity (in the sense of average (or “at scale”) proficiency), being crowded out by “business acumen”. And that seems to be the case generally, not just in software.
I’m a humble developer, I did cut my teeth first using VB, then some PHP, Bash and little C++. I’m by no means a bearded geek. I mostly fill gaps, solving little problems here and there. My coding skills are not of any sophistication worth mentioning, programming is not my main job.
I have been wondering for many years what’s stopping the FOSS community from producing a new compiled language that can also be interpreted for easy debugging. A language as easy as PHP and that produces standard executable files for easy deployment. Something that can be used for small and big projects without having to devote your life to master its intricacies.
I think C++ is good for writing all sorts of apps (especially performance-critical ones), but maintaining it is quite hard. If there was an alternative, like some sort of VB-C++ / PHP-C++ plus a compiler, 90% of apps would be built using it. I envision something with the good karma and power of PHP plus some of C++’s versatility minus the puzzle factor plus a compiler. Something Delphi-like, I guess.
Just my thoughts.
I have given such an argument: what I wrote is that you can choose to do two things with the extra performance: new applications or new programmer convenience. You chose to call “specialized cases” everything that I put in the “new applications” bucket. I would myself say that virtualization, multimedia, mobility or games are not exactly niche markets, but you are entitled to your opinion. In any case, while you sit back and relax in the comfort of Java and suggest that we “learn to code like that”, more aggressive kids will instead choose to use the extra power to invent vision-based computer input or similar cool stuff.
Then make that argument. But the argument you have been making so far is that for the first time in history, convenience and stability would suddenly have made performance irrelevant. I think that this is completely false, and my daily job proves it.
There is a fundamental difference in how each language represents the execution environment. In C++, it’s assumed to be a real machine, in Python, it is a virtual machine (Note that even the “real machine” representation in C++ is increasingly at odds with today’s hardware.) This difference is why you need the extension mechanism of Python (which normally uses C or C++), and you don’t need an extension mechanism in C++ that would use Python.
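For the record, a minimal sketch of that extension mechanism, using the Python 2 C API of the period (the module and function names are hypothetical):
#include <Python.h>

// add two integers in C++, callable from Python as fastmod.fast_sum(a, b)
static PyObject* fast_sum(PyObject*, PyObject* args)
{
    long a, b;
    if (!PyArg_ParseTuple(args, "ll", &a, &b))
        return NULL;
    return PyInt_FromLong(a + b);
}

static PyMethodDef methods[] = {
    {"fast_sum", fast_sum, METH_VARARGS, "Add two integers in C++."},
    {NULL, NULL, 0, NULL}
};

PyMODINIT_FUNC initfastmod(void)
{
    Py_InitModule("fastmod", methods);
}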
That illustrates very well the “sit back and relax” attitude I referred to above. If you have extra cycles, some will use it to run a GC and win on the time-to-market front, others will use it to innovate and run something more useful than a GC and win on the performance or features front. The choice is yours to make, just don’t ask everybody to do the same one.
> I would myself say that virtualization, multimedia, mobility or games are not exactly niche markets, but you are entitled to your opinion.
I would not say that virtualization, multimedia, mobility, and games are niche markets, true, but they *are* specialized, and they *do* have special transistors in most modern equipment: Multimedia has hardware decoders and even encoders now, games have hardware video and audio support, phones have all sorts of specialized widgetry, and even virtualization is getting its own transistors in the newest Intel chips. There’s a very distinct trend in the industry: Any time software ends up doing very heavy lifting, that gets moved into hardware.
> sit back and relax in the comfort of Java
If you’re going to pick another language for me to supposedly be advocating, why in the name of all that’s holy would you pick *Java*? Forget comfort: Let me sit back with the *power* of something like Lisp so that I can write in one line of code what takes twenty in Java and fifty in C++.
And I’d like to point out — for the record — that a good modern Lisp compiler can come in competitive with C++ for performance on most tasks.
> If you have extra cycles, some will use it to run a GC and win on the time-to-market front
Here’s the problem I have with your argument: Right now, I’m sitting in front of a four-year-old desktop PC. There are thirty applications open. The processor load is presently hovering in the 5% range (that’s an 0.05 for you Unixy people). That means that even with all the gazillion features my applications are currently sporting, the processor is sitting and executing *trillions* of NOPs every second (or, if I’m lucky, maybe it’s using a HLT and waiting for interrupts). 95% of the processor’s power is sitting doing nothing, unused by any of the multitude of applications or background processes I have open. It’s still drawing about the same juice — my wattmeter confirms that much.
Now you can make a decent argument that the list of apps doesn’t change over time. A web browser, a word processor, a paint program, a chat program… they’ve all been done before, and they’ll all be done again, and the average user doesn’t add or subtract anything to his regular list of applications very often.
So what, then, should we do with that other 95%? Well, we can code to the metal, and ping the processor as rarely as possible, and do our darndest to raise that 95% up to a manly 99.9%. We can pound our chests and celebrate our awesomeness at making our apps use even fewer of the resources that we never used before!
Or we can use that 95% for something better. Most of the desktop doesn’t benefit from video or 3D graphics or virtualization. The server definitely doesn’t have any use for any of those except virtualization. Even if you argue that 3D graphics and video are good additions to a web browser, you won’t find many pages using them very well — for every video on YouTube, there’s a thousand blogs like this one that just don’t need it, making video a niche even on the web — a *big* niche, but still a niche.
Oh, so you wanna talk servers? I’m SSHed into a server I maintain. It sees about 10,000 unique IP addresses every day, and transfers several terabytes of data each month, a load that’s easily comparable to any medium-sized corporation out there. And… it, too, is idling away, running nothing but NOPs or HLTs as its system load sits at a cool 0.13. We run Apache and MySQL on this box — and everything else from tiny page serving to *compiling* and *parsing* is written in scripting languages — with GC.
So that still leaves us with a 95% unused processor and nothing to give it to do except things like GC that you don’t *want* to give it to do. What’ll it take? Another 10%? 20%? There’s still a lot of processor sitting there with a lot of unused cycles on it after that. Maybe you want to avoid GC due to power requirements, but my wattmeter tells me that’s just dumb: This Pentium 4 is burning the same juice whether it’s at 0% or 100%.
So why not put it at 10%, heck, why not put it at 100%, why not put it to some good use, instead of spinning all those NOPs? That’s the reality of modern processors: They got fast enough that we’re running out of work to give them. While we were all tweaking bits, GC got cheap. It’s time we acknowledged that.
Or, hey, y’know, it doesn’t really matter, to be honest. For better or for worse, the web browser is taking over everything anyway: Everything from word processors to games to Adobe Photoshop is moving online, and there’s only scripting and GC in that direction. You’re gonna be using GC and high-level languages eventually; so really, it’s just a question of how long you want to try to put it off.
Sean,
Your arguments are based on overly broad generalizations of particular cases. Here are a few examples for illustration purposes:
You take advantage of large memory to run memory-hungry languages (the particular case), and deduce “we have to start learning to code like that” (the generalization), even if others might have better use for that memory.
You use an obsolete Pentium IV CPU (the particular case), which was probably the last one Intel did without power control (link), and you deduce that CPU usage has no impact on power consumption (the generalization), which is blatantly false today (link).
You look at an idle machine (the particular case) to justify your position that we have a reserve of CPU cycles (the generalization), but that’s a bit like saying that the average car is parked most of the time, therefore mileage and power don’t matter.
You take this blog (the particular case) as an example of why video doesn’t matter (the generalization).
You state your belief that a good modern Lisp compiler is competitive with C++: this is certainly true in some particular cases, but actual benchmarks (link) show just how false the generalization of that claim is.
You also wrote that Lisp is so powerful that one line in Lisp equates to 20 lines of Java and 50 of C++; again, there are particular cases where this is true, but there are also cases where it’s much easier to code in C or C++ (would you write bzero or a device driver in Lisp?)
To get back on topic (this thread being about C++ after all), I tried to point out earlier that what some people like in C++ is that it offers a wide range of programming tools: I believe that is one reason why so many people like C++: it’s a language that tries to deal with many problems at once, not just one.. To which you replied “Jack of all trades, master of none.†Sums it up pretty well. This is actually an old battle between tenants of the There is more than one way to do it school of thought (Perl, C++) and the To a man with a hammer, everything looks like a nail school of thought (Lisp: everythinng is a function; Smalltalk: everything is an object). In my experience, the latter category has to go out of its way to fit things into the One True Worldview. For example, in Smalltalk, precedences do not follow usual rules (2+3*5=25, not 17). Similarly, in order to bring I/Os to a pure functional language, you have to use something highly non-trivial like monads. Sure, these are really neat intellectual constructions, and Haskell lovers will usually see monads as a strength, not a weakness of the language. But that’s really where the debate is. On the other side of the fence, proponents of hodgepodge languages like C++ will generally consider the big mess as some inevitable consequence of the juxtaposition of paradigms. And again, that’s where the debate really is.
Now, I realize that pointing out why some people love C++ might easily put me in the “C++ apologists” category. Don’t get me wrong: I don’t like C++. I may have contributed to C++ in the past, but that doesn’t mean I had to like it, right? The point I’ve been trying to make here is that in order for any criticism of C++ to be productive, it has to speak to the people who like C++, not to the people who already dislike it for their own reasons or have no use for it. That’s why I’m trying to point out why some people like C++, even if I don’t like it myself. And what I’m trying to say is that an argument is much stronger if it demonstrates alternatives that are credible or appealing to C++ users.
In other words, if you say “Python is better than C++”, no matter how smart your argument may seem to you, you lose, because there are some large amounts of C++ code that you wouldn’t be able to write in Python. If you say “C++ has an ugly syntax”, you lose, because you did not show that it was possible to have a better syntax. If you say “C++ is a hodgepodge of inconsistent paradigms”, you lose, because some programmers need more than one paradigm, and you have essentially told them that you personally don’t care. If you say “Lisp is often as fast as C++”, you lose, because in many other cases, Lisp is much slower than C++ by construction, and an experienced programmer will know that it takes a lot of effort and convoluted programming in Lisp to achieve the kind of performance that would be trivial to get in C++.
If you want to make the argument more solid, you need to focus on things that C++ does better than Python and think how Python would have to be enhanced to match that. You need to think how to make a language as powerful as C++ but with a much nicer syntax. You need to think about how to make multiple paradigms fit together without stomping on one another. You need to think about the cases where C++ goes in the way of optimizations (pointers everywhere being a good starting point), or still on the performance front, how you can design a language that outperforms C++ for example by taking advantage of all these transistors that exist in modern hardware, as you correctly pointed out. In short, you need to think about what it would take to merge the best of Lisp and the best of C++.
All these things are precisely what I’ve tried to do with XL and concept programming. And believe me, it’s much more difficult than bashing C++. Or, as the French saying goes, La critique est aisée, mais l’art est difficile (criticism is easy, but art is difficult).
If you are going to criticize the design of C++ you should at the very least read the book “The Design and Evolution of C++”, where Stroustrup explains the reasons behind all the design choices. After reading it, you will probably appreciate the language more. It really is the most powerful general-purpose language out there. It’s a rich language, with its templates, destructors, exceptions, inheritance, and pointers. To effectively solve complex and varied problems, you need a rich language.
Sean,
Don’t you think that you are making generalizations based on particular cases?
For example, you personally take advantage of large memory to run memory-hungry languages (the particular case), and deduce “we have to start learning to code like that” (the generalization), even if others might have better use for that memory. You use an obsolete Pentium IV CPU (the particular case), which was probably the last CPU Intel did without power control (link), and you deduce that CPU usage has no impact on power consumption (the generalization), which is blatantly false today (link). You look at an idle machine (the particular case) to justify your position that we have a reserve of CPU cycles (the generalization). But you need the power when the machine is not idle, and reasoning based on an idle machine is a bit like saying that the average car is parked most of the time, therefore mileage and power don’t matter. You take this blog (the particular case) as an example of why video doesn’t matter (the generalization).
Similarly, your belief that a good modern Lisp compiler is competitive with C++ is certainly true in some particular cases, but actual benchmarks (link) show just how false the generalization of that claim is: in many cases, there is at least an order of magnitude between, say, C and Python or Lua. You also wrote that Lisp is so powerful that one line in Lisp equates to 20 lines of Java and 50 of C++, and again, there are particular cases where this is true, but there are also cases where it’s much easier to write the code in C or C++: would you write bzero or a device driver in Lisp?
Back to C++, this is what this thread is about after all.
I realize that pointing out why some people love C++ might easily put me in the “C++ apologists” category. Don’t get me wrong: I don’t like C++ (link). I may have contributed to C++ in the past, but that doesn’t mean I had to like it, right? The point I’ve been trying to make here is that in order for any criticism of C++ to be productive, it has to speak to the people who like C++, not to the people who already dislike it for their own reasons or have no use for it. That’s why I’m trying to point out why some people like C++, even if I don’t like it myself. And what I’m trying to say is that an argument is much stronger if it demonstrates alternatives that are credible or appealing to C++ users.
In other words, if you say “Python is better than C++”, no matter how smart your argument may seem to you, you lose, because there are some large amounts of C++ code that you wouldn’t be able to write in Python. If you say “C++ has an ugly syntax”, you lose, because you did not show that it was possible to have a better syntax. If you say “C++ is a hodgepodge of inconsistent paradigms”, you lose, because some programmers need more than one paradigm, and you have essentially told them that you personally don’t care. If you say “Lisp is often as fast as C++”, you lose, because in many other cases, Lisp is much slower than C++ by construction, and an experienced programmer will know that it takes a lot of effort and convoluted programming in Lisp to achieve the kind of performance that would be trivial to get in C++.
If you want to make the argument more solid, you need to focus on things that C++ does better than Python and think about how Python would have to be enhanced to match that. You need to think about how to make a language as powerful as C++ but with a much nicer syntax. You need to think about how to make multiple paradigms fit together without stomping on one another. You need to think about the cases where C++ gets in the way of optimizations (pointers everywhere being a good starting point), or, still on the performance front, how you can design a language that outperforms C++, for example by taking advantage of all these transistors that exist in modern hardware, as you correctly pointed out. In short, you need to think about what it would take to merge the best of Lisp and the best of C++.
All these things are precisely what I’ve tried to do with XL and concept programming. And believe me, it’s somewhat more difficult than bashing C++. Or, as the French saying goes, La critique est aisée, mais l’art est difficile (criticism is easy, but art is difficult).
This illustrates an old battle between proponents of the There is more than one way to do it school of thought (Perl, C++) and the To a man with a hammer, everything looks like a nail school of thought. In Lisp, everything is a function; in Smalltalk, everything is an object.
In my experience, the latter category has to go out of its way to fit things into the One True Worldview. For example, in Smalltalk, arithmetic precedence does not follow usual rules (for instance, in Smalltalk, 2+3*5=25, not 17). Similarly, in order to bring I/Os to a pure functional language, you have to use something highly non-trivial like monads. Sure, these are really neat intellectual constructions. As a matter of fact, Haskell lovers will usually see monads as a strength, not a weakness of the language. But that’s really where the debate is. On the other side of the fence, proponents of hodgepodge languages like C++ will generally consider the big mess as some inevitable consequence of the juxtaposition of paradigms. And again, that’s where the debate really is.
Christophe,
In Lisp not everything is a function. Haskell, yes (except for primitive objects like numbers, characters, and so forth), but not Lisp.
Common Lisp in fact has a really robust object system in CLOS; it falls more in the hodgepodge language category.
> For example, you personally take advantage of large memory to run memory-hungry languages (the particular case), and deduce “we have to start learning to code like that” (the generalization), even if others might have better use for that memory.
See, again, Moore’s Law, which, as much as you might dislike it, has never yet been broken. Memory quantities are increasing at an exponential rate, faster than uses for that same memory. If it’s not true already, there’ll eventually be more memory in your computer than you know what to do with. It’s only a matter of time.
> CPU usage has no impact on power consumption (the generalization), which is blatantly false today (link).
So what? Take a given program X and implement one version using malloc and free and another version using GC. There’s already plenty of evidence to show that GC is often (not always, but often) a win for execution time; and I’ll bet you that you can show it matches malloc and free in power consumption too. Malloc and free are anything but free.
> You look at an idle machine (the particular case) to justify your position that we have a reserve of CPU cycle (the generalization).
I look at a nearly-idle machine that’s serving thousands of users per hour and still sitting at near-idle, and argue that it has plenty of CPU to burn, yes. It’s doing its entire job, and doing it mostly with inefficient scripting languages that use GC, and even with all *that*, it’s still sitting idle most of the time. There is no reason to replace any of that huge mass of “inefficient” PHP with highly-optimized C++, no reason to code anything new on it in C++, and I can say that the C++ would’ve taken a *lot* longer to write. That server simply doesn’t need C++ for any of the hundreds of different tasks we put it to. While you’d still be coding C++ boilerplate, I’ve added the new features, fixed the bugs, satisfied our users and my bosses, and moved on to something else — and there’s still tons and tons of processor and memory left over.
> You take this blog (the particular case) as an example of why video doesn’t matter (the generalization).
I take a blog, a forum, a search engine, a Google, a Yahoo, just about any website that isn’t YouTube to argue that video is dubiously relevant. 99% of the web doesn’t use video; it’s important to the 1% that does use it, but that’s still only 1%. The other 99% doesn’t need your highly-optimized C++ video routines, your highly-optimized C++ database routines, your highly-optimized C++ audio routines. Nine out of ten web sites are fine without C++, and that’s not likely to change.
> There is more than one way to do it [in C++]
Which explains why the C++ standard added things like closures and anonymous functions and symbols to C++ back in the early ’90s, so you’d have the flexibility to add those kinds of powerful concepts to your program anywhere you wanted.
Oh, wait, no they didn’t. C++ still doesn’t have those. For incredibly large classes of real-world problems, there’s still only one or two highly awkward ways to do it. Nevermind.
Sean: you may wish to check out the Boost libraries, and in particular Boost.Lambda. Yes, it’s a rather severe case of Greenspunning. Yes, I’d rather use a real Lisp: while I haven’t used the Boost libraries to any significant extent, I’m sure you have to be much more careful of abstraction leakages than you would with a language that supports these concepts natively. But these concepts are available in modern C++.
Oh yes, I definitely want my lambda expressions (which evaluate to functions or their equivalent) to be indistinguishable from ordinary expressions (which evaluate to their results) but for the ugly mutated embedded de Bruijn indices.
All I can say is it’s a good thing C++ is getting proper lambdas RSN.
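For readers who haven’t seen both styles side by side, a rough sketch (my own, not from any post above) of the Boost.Lambda placeholder style next to the explicit lambda syntax C++0x is slated to get:

#include <algorithm>
#include <iostream>
#include <vector>
#include <boost/lambda/lambda.hpp>   // assumes Boost is installed

int main() {
    std::vector<int> v;
    v.push_back(1); v.push_back(2); v.push_back(3);

    // Boost.Lambda: _1 is a placeholder; the whole expression quietly
    // evaluates to a function object rather than to a value.
    using namespace boost::lambda;
    std::for_each(v.begin(), v.end(), std::cout << _1 << ' ');

    // C++0x/C++11: an explicit lambda, visibly distinct from an
    // ordinary expression.
    std::for_each(v.begin(), v.end(), [](int x) { std::cout << x << ' '; });
    return 0;
}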
Have a look at:
http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml
This is Google’s C++ style guide. The discussion (click the arrows) of each guideline is very good.
Reading this guide should be enough to convince someone that C++ is not a well-designed language.
If even Google, a young company, will not use C++ exceptions, RTTI, streams, auto_ptr, etc. then why are these even in the language?
I think that C++ is an excellent example of the second system effect. The language has incredibly sophisticated, generic, and complex solutions to some of the perceived problems of C. And the thing is, the problems are “the problems of C”, not the problems of actual programmers. Wouldn’t the time spent on developing iostreams or the STL have been put to much better use developing good networking and concurrency libraries (to name just two)?
I’ve been reading esr’s The Art of Unix Programming and enjoying it immensely, which led me to this blog …
I haven’t seen quite my perspective on these matters posted here, so here’s where I’m coming from. I ported cfront to several different C compilers in the late 1980s and found C++ a vast improvement in capability and power over C. This was in the era of 20 MHz chips, when slow code was reaaaally slow, there was no Perl and no Python, and Lisp was used only in universities and in AI work.
I became a Perl programmer in 1994 and used it to program a few website backends. I found the syntax a bit weird (as everyone does, I guess), but was very impressed by the amount of work I could get done manipulating text in very few lines of code. It didn’t occur to me that Perl would encroach on C++’s turf of desktop software and other performance-critical code. By this point I seem to recall that chips were running at about 100 MHz.
At some point during the late ’90s, while bouncing back and forth between C++ and Perl on various projects, it struck me: clearly, the limiting factor by then was not the speed of computer chips, nor hard disks, nor networks. It was the attention of a talented programmer. So why not choose programming tools like Perl (the only one I really knew at that point) that produce much more work per line of code?
Last summer my friend Lucy gave me Paul Graham’s book Hackers and Painters, and I found that he was making the same case in much greater depth and breadth. Look at his essay “Revenge of the Nerds” on his site for an example.
I was tasked with “getting ready for 80-core chips” by my boss last year. I thought that required threading, which leaves out Perl 5, which says “don’t use our threading module — it’s broken.” Lisp seems only to be an option if you live near Cambridge, MA. That left Python or Ruby, and I thought Python was better supported, so I chose it for a sample project.
I used Python to duplicate a piece of the (ancient and algorithmically stupid) C++ code and tested each version’s performance, finding the Python code ran faster.
Since then, I’ve written two pieces of Python middleware (a protocol translator and a multithreaded socket interface), both with many fewer lines of code and who knows how much less hassle than if I’d had to do them in C++.
I think the basic issue is the level of abstraction that each language operates at:
Lisp
Python Perl Ruby
C++ Java C#
C
I think that with the speed of current chips and the expense of software development, anybody not choosing one of {Python, Ruby … possibly Scheme or Perl 6 or …} and using C++ only for the performance-critical few percent of the code … is effectively incinerating money!
> I think that with the speed of current chips and the expense of software development, anybody not choosing one of {Python, Ruby … possibly Scheme or Perl 6 or …} and using C++ only for the performance-critical few percent of the code … is effectively incinerating money!
Yup.
Would it surprise you to learn that Paul Graham and I are friendly and frequently end up developing similar ideas in parallel with each other? No? I thought not…
Mark: while I haven’t done any significant multi-tasking in Perl, I believe the preferred approaches are (1) OS processes and (2) POE. But I could well be out of date. Anyway, it sounds like you’ve found something that works :-)
>Since then, I’ve written two pieces of Python middleware (a protocol translator and a multithreaded
>socket interface), both with many fewer lines of code and who knows how much less hassle than
>if I’d had to do them in C++.
…
>I think that with the speed of current chips and the expense of software development,
>anybody not choosing one of {Python,Ruby … possibly Scheme or Perl 6 or ..} and using
>C++ possibly only for performance-critical few percent of the code … is effectively incinerating
>money!
Using a language designed to be uncompromisingly efficient in situations where efficiency is of no major importance is a bad idea. This is neither a surprise nor a criticism of C++.

C++ has a set of problem domains where it’s virtually the only game in town. That tends to indicate that the trade-offs made in its design were good ones. However, if you use a chainsaw when you need a hammer, don’t come crying about the results. ;-)
IMHO, it seems that the fundamental design error that caused C++’s cascading brittleness was the overzealous capability for multiple overloading of methods and operators by argument arity and type (and the ambiguity-avoidance cascade it causes):
http://lists.motion-twin.com/pipermail/haxe/2008-November/020935.html
Shortened:
The fundamental design error that caused C++’s cascading brittleness was the overzealous capability to overload by argument type.
Rudimentary dynamic typing was optional in C++ (using casts to void* and run-time type info reflection), but afaics the forced typing cascade hell appears to have originated from the overloading capability. If the programmer and libraries avoided overloading by type, the complexity creep was avoided. Overloading by type (a form of run-time dynamic typing) and compile-time static typing are fundamentally incongruent granular multiplexing. The special cases fork faster than exponentially, perhaps more like geometric tree (e.g. population) growth. If we want to overload by type, then we shouldn’t expect the compilation (compiler+programmer) to follow that geometric tree of forks. Do the type-overload checking at runtime, where context leads to manageability, perhaps analogous to the complexity improvement of runtime GC versus compile-time memory allocation.
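Concretely, the kind of fork I mean; a toy example of my own, in which one extra overload makes an innocent call ambiguous:

void f(long)   {}
void f(double) {}

int main() {
    // f(42);   // ambiguous: int converts equally well to long and to
                // double, so uncommenting this line breaks the build
    f(42L);     // fine: exact match for f(long)
    return 0;
}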
The more I think about it, the more I realize that statically typed languages have suffered from special-case creep/cascade hell primarily due to a fundamentally incorrect model of OO, where the subclassing of methods and the data are conflated.
http://lists.motion-twin.com/pipermail/haxe/2008-November/020937.html
A dynamically typed language is not always precisely a solution so much as an avoidance, where appropriate. For the cases where static typing is important, it might make sense to consider fixing the OO model, if it is broken. Specifically, make orthogonal the subclassing of methods (prototype) and of data.
C++ (OO in general) solved a problem I never had.
I’m really disappointed. Eric influenced me a lot a few years ago (he’s the reason why I chose Python as my first programming language), and I used to think of him as a highly experienced programmer.
But now I find out that he’s just a language-flaming idiot and aficionado with the luddite attitude typical of UNIX hackers who rant against anything that is modern and powerful. What makes things worse is that he’s flaming C++. People who bash C++ don’t usually have any legitimate grounds for doing so, but usually do so because of one or more of the following reasons:
(a) They don’t get OOP; their minds are still stuck in procedural.
(b) They’re too stupid to learn C++. Sure, C++ is fairly complex, but bashing it just because they don’t understand it just shows their dishonesty and mediocrity.
(c) They’re forced to use C++ at their job.
(d) Everybody around them uses C++ rather than their favorite PL, so they become frustrated.
What’s worse, I probably would have believed all his garbage a few years ago…
As for my opinion about C++ bashing in general, well:
“There are two kinds of programming languages: those that people always bitch about, and those that nobody uses.”
I was having the exact same thoughts as Sebi and agree on all points.
Many comments bashing C++ seem to come from narrow-mindedness and lack of understanding. In many cases, higher level languages are used to write programs that run on top of software written in lower level languages. It’d be interesting to see someone try and write an OS in Java or any other managed language. Sure, Visual Basic, Java and C#, as well as, probably, Python, Ruby, among others mentioned here, are more user-friendly and easier to understand. But does everyone understand why? Abstraction and limited purpose. At least the first three languages mentioned run in a limited environment with no means to go beyond that. That’s enough in many scenarios, such as developing business, data-driven or web applications for the platform the languages run on. But the environment, whether it’s JVM or .NET, is the only medium such programs can exist in. The ease of use comes from the fact that someone else worked hard on abstracting all the lower-level mechanics.
OK, we’ve criticized C++ enough. Isn’t it time for all of us interested in a language with C-like performance and a high level of abstraction to gather together and create a new language?
http://felix.sourceforge.net/
Achilleas Margaritis: I think the ‘D’ guy is trying that… http://en.wikipedia.org/wiki/D_(programming_language)
I don’t think that D is the right direction towards a new C++-like language. D is more like C and Java put together. It’s not elegant, and contains many features which seem “bolted on” and not well thought out. For example, the separation of classes and structs, which is totally redundant. Or mixins. Or the scoped keyword.
Felix seems nice, but the site indicates the language is abandoned.
I was thinking more of a language that is orthogonal… for example, one where there are no primitive types and everything is a class (ints, pointers, etc.), while maintaining the value-based characteristics of C++ (important for many reasons).
Richard Fateman’s paper on C should be mandatory reading for writing such a thing:
http://www.cs.berkeley.edu/~fateman/papers/software.pdf
C may be inappropriate for implementing big software projects, but its runtime characteristics are the most appropriate for any type of software. A new language need not contain C semantics or the C type system.
Con:
What you say reveals a quite widespread narrow-mindedness: I’ll call it the Babel theorem, which states that each application should be written in One Unique Language. If this theorem is true, C++ is effectively the only choice for any big application, because it is the only widespread language which can do both low- and (somewhat) high-level stuff. The catch is, the Babel theorem is false…
…because it depends on the axiom that it would be difficult for people to be proficient in more than one language at a time —or for management to hire them. So we’re gonna use One Unique Language: C++ (the only choice, remember?). C++ is multiparadigm. C++ is low-level. C++ supports OO. C++ supports high-level. C++ supports Generic Programming. So we have C++, C++, C++, and C++. To me that makes 3 or 4 languages, not one. C++ is so feature-bloated that learning both Python and C would be less difficult.
If the Linux kernel (a big application) is written in One Unique Language (oops, not C++), I suspect this is because most of the problem domain is Utmost Performance. However important Utmost Performance is to our computing lives, it is a niche. Only a relatively small percentage of programming effort goes into that.
sebi:
Guilty as charged!
(a) True, I don’t get OOP… because it is essentially undefined. As anecdotal evidence, I once proposed a solution (in C++) based on functional programming. My colleague heard OOP. I once asked another (experienced) colleague about the essential mechanisms of OOP. He basically told me it was about being close to the problem domain. Plus a good deal of hand-waving. So much for a definition of OOP. What I currently think about OOP: abstraction is good, subtyping is good, genericity is better, and inheritance ranges from useless to harmful, especially when tied to subtyping.
(b) Or it could show the excessive complexity of C++. I consider myself gifted with more than average intelligence, curiosity, a good ability to learn, and a big ego. Yet I don’t know every gotcha of C++. After reading the FQA, I know I never will. C++ is a very interesting language, and I am glad I learned it. But I think this very characteristic is harmful —self quote.
(c) I am forced to. You are right, this is not a legitimate reason to complain about C++. However, if C++ sucks it is a legitimate reason to complain louder (which I do).
(d) True. My current favorite language is either Haskell or OCaml. But it is not like my colleagues were using SML, or Lisp, or Ruby, or even Java. No, they use a language without garbage collection, without strong typing (be it static or dynamic), without first-class functions (despite std::for_each and the like). C++ is so remote, so much more difficult to use, so much more verbose that I indeed am frustrated. Anecdotal evidence: try to implement optional data in C++. Safe, if possible. One line in any ML/Haskell language, 50 in C++ —implemented myself, because boost::optional (900 lines) was not available.
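To make that concrete, a bare-bones sketch of my own (deliberately simple and unsafe; the missing safety is what pushes real implementations like boost::optional toward 900 lines):

// In Haskell this is one line:  data Maybe a = Nothing | Just a
// A minimal C++ equivalent (requires T to be default-constructible,
// which real implementations go to great lengths to avoid):
template <typename T>
class Optional {
    bool present_;
    T value_;
public:
    Optional() : present_(false), value_() {}
    explicit Optional(const T& v) : present_(true), value_(v) {}
    bool present() const { return present_; }
    const T& get() const { return value_; }  // caller must check present()
};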
Stroustrup is right in your quote. I daresay this is due to (social? mental?) inertia. Better languages are newer, so nobody uses them. And it is natural to complain about the language we use, because there is always some kind of limitation. Hey, even users of OCaml and Haskell complain about their languages. I complain about C++ more, because it is way below my blub language.
Just a little reply to the ones criticizing Objective-C, namely Phil and Jeff Read.
Objective-C may be considered a hack over C, but it gives C an OO abstraction, with a simple runtime and a simple syntax that you may find verbose but which makes code more readable. Of course there’s some time of adaptation, as with any language, but once you get it right, it’s a very nice and comfortable language.
You get a pure C layer; to write ObjC code you simply change the extension from .c to .m and link your project with libobjc, and you have ObjC. With the runtime, you get a very powerful and dynamic abstraction, with late binding, message forwarding, polymorphism… Of course, that makes message sending slower than usual function calls, which is why C is still under it; for most tasks ObjC’s lags are not noticeable, and when you want high speed you just use the C layer.
And as for the dot-syntax with properties for which Jeff Read reproaches Apple, that’s just syntactic sugar, and it’s not even tied to properties, since you can use that syntax with methods that aren’t properties. No, the real addition of properties is about reducing the number of lines of code: @property with @synthesize tells the compiler to generate the missing methods directly, giving a solid implementation for accessing instance variables without flooding the implementation with repetitive, non-informative lines of code, unlike Java where you’re always forced to write setIvar() and getIvar() methods…
By the way, closures were added by Apple to C and C++, not directly to ObjC, but since ObjC and ObjC++ are based on those languages, they receive the feature as well.
No, really, I think saying that ObjC is as bad as C++ shows a lack of knowledge about the language. It’s a very lightweight language, a thin OO layer over C. Yes, the syntax is awkward at first, but once you understand how it works and especially what it means, the ugliness becomes beauty, for that language wraps what you need to work efficiently without flooding you with alien features that can’t even work together.
Some of the newer languages would be better if they didn’t *force* you to use GC. GC may not bog down your processor, but it sure expands your memory footprint. In many applications, memory is your bottleneck, not CPU.
That’s not to say that I do manual malloc and free when I can avoid it. Reference counting is more processor-intensive, but more aggressively conserves memory, which is often a profitable tradeoff. It also allows you to extend RAII to things allocated on the heap… for instance, in Java, if you want to map memory, you create a heap object that does it for you, and if you want to release that mapping… uh, you can’t. Or at least you couldn’t for a long, long time (I haven’t checked past Java 6). You have to wait until the GC gets around to collecting it, because if you try to dispose of it and you’re actually still referencing it and you try to use it later, you might be pointing to someone else’s mapped memory. Or at least that’s the reason the Java folks gave. Using reference counting, this sort of thing would have Just Worked.
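To sketch what I mean by reference counting making this Just Work (POSIX mmap used purely for illustration; the names are mine and error handling is omitted):

#include <cstddef>
#include <memory>
#include <sys/mman.h>

// The mapping is released deterministically, exactly when the last
// shared_ptr owner goes away -- no waiting for a collector.
std::shared_ptr<void> map_anonymous(std::size_t len) {
    void* p = mmap(0, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return std::shared_ptr<void>(p, [len](void* q) { munmap(q, len); });
}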
I like to use “memory pools” when I don’t want the overhead of GC or reference counting. Put one on the stack, allocate your objects in it, and when you’re done using *all* your objects (no need to keep track of the lifetime of each individual one), you let the memory pool go out of scope.
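A toy version of such a pool, just to show the shape of the idea (my own sketch; alignment and oversized requests are ignored to keep it short):

#include <cstddef>
#include <vector>

class Pool {
    std::vector<char*> blocks_;
    std::size_t used_, cap_;
public:
    explicit Pool(std::size_t block_size = 64 * 1024)
        : used_(block_size), cap_(block_size) {}
    void* allocate(std::size_t n) {
        if (used_ + n > cap_) {                 // current block exhausted:
            blocks_.push_back(new char[cap_]);  // grab a fresh one
            used_ = 0;
        }
        void* p = blocks_.back() + used_;
        used_ += n;
        return p;
    }
    ~Pool() {   // one shot at scope exit reclaims everything at once
        for (std::size_t i = 0; i < blocks_.size(); ++i)
            delete[] blocks_[i];
    }
};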
There’s lots of ways to skin this cat besides manual malloc/free on the one hand and GC on the other. Most languages make it difficult to do anything else. C++ makes it easy to bring in any strategy you like, and reuse it throughout your codebase, and to tweak it system-wide.
I’d say the biggest practical shortcoming of C++, the fault that intensifies the inconvenience of working with it more than any other trait, is its grammar. A complex grammar means that making a tool that consumes its source code in any way (including, but certainly not limited to, a compiler) is a more costly undertaking for C++ than for most other languages. And that means tools proliferate more slowly, and improve in functionality and quality more slowly. Fix the grammar and that problem goes away, and it becomes a whole lot easier to (for instance) create more intelligent, user-friendly analysis of template instantiations and syntax errors related to same.
If the grammar is simple and nicely Context-Free and LL(k) and all that good stuff, you can clear up a lot of the *semantic* ambiguities or complexities that tend to lurk in code with murkier grammar specs, and your language can more quickly grow tools to help you deal with the ones that remain.
A compiled-module specification that lets you package up template classes in a library, and not recompile them everywhere they’re #include’d, would cut down compile times and enable more rapid iteration.
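A partial measure along these lines does exist as the extern template feature (standardized in C++11, available earlier as a compiler extension); a minimal sketch, assuming a compiler that supports it:

#include <vector>

// Suppresses implicit instantiation in this translation unit; the
// template is compiled once elsewhere and merely linked here.
extern template class std::vector<int>;

// ...and in exactly one .cpp file somewhere:
// template class std::vector<int>;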
And don’t forget the hard work required to write a C++ compiler and C++ parser. They are among the most complex pieces of software ever written. Hence C++ is not only more difficult to program in; its compiler is also the most difficult to write. Compare this with writing a C compiler: anyone who has read a compiler-construction book can write one in a year or so. That is why you see many C compilers but fewer C++ compilers, given the time and effort required to write them.
“And don’t forget the hard work required to write a C++ compiler and C++ parser. They are among the most complex pieces of software ever written.”
I didn’t. I’m trying to say that the hard work required to write a C++ parser has caused a shortage of good tools for working with the language, which makes the language harder to work with than it would otherwise be.
And the tools that do exist are difficult to get right. For instance, Intellisense in Visual Studio works on C++ code whenever it feels like it, while it works flawlessly on C# code.
Hi Eric, I was wondering if you’ve finished the paper, and if so, where one could find a copy?
Ditto what Gregory Gelfond said. Where can I find a copy, if at all?
-Patrick
Any progress? It’s been over a year…
We have a draft that has been released to a limited group of reviewers. It needs more work.
It’s been 14 months; still nothing?
Hmm. C and C++ are sort of my “native programming languages”. I really started taking off in my understanding of programming with C. While it is low-level, I sort of liked the fact that it was low-level and gave me insight into what it was doing with memory (despite the extra overhead).
Occasionally, when I have some huge mesh, I want to know when I am freeing the memory so I can make another without running out. I’m always wondering how I could get half my RAM back in a pinch in a garbage-collected language.
However, certain uses of the object-oriented paradigm baffle me: people expecting me to overload callback functions without my knowing, or being able to replace, all of what the callback function originally did, or not providing any means except global variables to pass anything to them, etc. Sometimes I can get completely lost.
That, and what all the introductory texts tell you to think of objects as has never struck me as a useful mental model. Objects aren’t like physical objects with attributes, and message passing is what threads and the Windows API do, not objects, as far as I can tell.
I’ve always thought of objects as fancy structs with all the code intended to operate on said structures bundled with them. So rather than saying function(*structure, params), you say structure->function(params). In addition, it makes memory management easy, because you can stuff all that in the constructor and destructor, and have the whole chain of allocation and deallocation be iteratively applied on creating/deleting the top level objects.
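In code, that mental model looks something like this trivial sketch of my own:

#include <cstdio>

struct PointC { double x, y; };
void scale(PointC* p, double k) { p->x *= k; p->y *= k; }  // C style

struct PointCpp {
    double x, y;
    void scale(double k) { x *= k; y *= k; }               // C++ style
};

int main() {
    PointC a = {1, 2};
    scale(&a, 3);   // function(*structure, params)

    PointCpp b = {1, 2};
    b.scale(3);     // structure->function(params): same thing, bundled
    std::printf("%g %g\n", b.x, b.y);
    return 0;
}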
But I might be missing out on the point. I self-taught most of what I know about programming to do math. I don’t have a formal CS background.
PS – I hate cin and cout. They violate the crap out of the grammar of the rest of the language. It’s like Ebonics in a Shakespeare play. I still use printf and fprintf. Shame on me?
PPS – my brother clued me in by giving me a very good book on object-oriented programming called “Head First Design Patterns”. I now understand much better what is meant by message passing wrt the Observer pattern.
I think C++ is a plot to give programmers work.
Most professional examples I’ve seen of C++ are inherently more complicated than the problem being solved (the goal is to reduce complexity, not increase it).
Where was this paper published (if it was)? I can’t find it.
I’d like to read the published paper, please. Was it published yet?
C programs are usually really shitty and less comprehensible than C++ programs. If you don’t like certain C++ features like template metaprogramming, the STL, or RTTI, don’t use them! You don’t pay for what you don’t use in C++.
I like having the freedom to use low-cost OOP constructs to simplify my code, templates instead of disgusting macros, and efficient error handling. Not using modern programming features and relying on C instead amounts to micro-optimization in lieu of productivity.
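A quick illustration of the templates-instead-of-macros point (my toy example):

#define MAX(a, b) ((a) > (b) ? (a) : (b))

// Type-checked, and each argument is evaluated exactly once.
template <typename T>
T max_of(T a, T b) { return a > b ? a : b; }

int main() {
    int i = 5;
    int bad  = MAX(i++, 3);    // i++ expands twice: the classic macro bug
    int good = max_of(i, 3);   // no double evaluation, no surprises
    return bad + good;         // (use the values to silence warnings)
}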
Has the paper been made available yet?
>Has the paper been made available yet?
No. My collaborator remains 5000 miles away and unavailable.
Too bad, I wanted to troll it.
Depends on the IQ level. People with higher IQs prefer C++ while those with lower IQs prefer an alternative. With C++, unnecessary load on the CPU is reduced while the load on the designer/implementer increases. In Java, the CPU is stressed out figuring GC, while the programmers are free to generate garbage for the runtime to handle.
Are you going to remake the paper due to the releasing of C++11 and C++14 or just give up the work?
>Are you going to remake the paper due to the releasing of C++11 and C++14 or just give up the work?
Still blocked on my collaborator being a long way away and both of us very busy.
I think C++ is a difficult language and certainly not for everyone, but it gives you more freedom than, say, Java or Python, and is very powerful for resource-intensive applications. There are cases where you really don’t have an option (game engines, speech recognition, etc.). I have been coding in C for some time and now I am also learning C++. One thing to note is that C++ is a moving target, an evolving language. So what you had in mind here in 2008 might not be totally relevant in 2015. I’d really like to hear how you view the recent changes and developments in C++.
While esr is busy, do you mind a quick comment on C++11 from someone who actually uses it? A review of the comments reveals two principal perceived problems with C++: memory management and syntax. C++11 really cleans up both of these, at least to some extent.
Even though the comments contain a missive from ak on 2008-09-26 at 06:49:54 who said: “Wake up, it’s 2008. std::tr1::shared_ptr exists, and is readily available.”, it is only recently that a style called the Rule of Zero has become popularized. This technique uses std::shared_ptr, now included in the standard library, to take care of the memory-management issue for the most part. It brings to bear a reference-counted pointer that simplifies programming. Now, maybe our host would still claim “GC > OO,” but memory management can be largely simplified by managing objects on the stack or through reference-counted pointers using shared_ptr. C++ programmers who use the language as “C with classes” are missing out big time. Perhaps with Python, only small snippets of C are required, which is probably an appropriate use of that language.
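Roughly what the Rule of Zero looks like in practice (a minimal sketch, names invented):

#include <memory>
#include <string>
#include <vector>

class Document {
    std::string                  title_;
    std::vector<int>             data_;
    std::shared_ptr<std::string> shared_notes_;  // reference-counted
public:
    explicit Document(std::string t) : title_(std::move(t)) {}
    // No ~Document(), no copy constructor, no operator=: every member
    // cleans up after itself, so there is nothing to get wrong.
};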
The second problem, syntax, gets some treatment in C++11. A number of irritations are fixed, along with some more significant changes. The introduction of lambdas simplifies callbacks greatly in comparison to C. There are other uses of lambdas that ease programming just as much. The introduction of auto simplifies code, though there may be some debate about its effect on clarity. Range-based for loops tidy up code that makes use of STL-style iterators. Combined, an iterator-based for statement could look something like:
for( auto i : thing )
In comparison, for some template based type, the same loop statement pre-C++11 could have looked something like:
for( Thing::iterator i = thing.begin(); i != thing.end(); ++i )
What is more, no new syntax would have been needed in Thing to support the new for loop in this case. The language, while it is indeed large, is being simplified while still retaining, and in some cases increasing, type safety. The progress has been slow, but like any language, it is evolving.
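Putting a few of those features together in one small, hypothetical example:

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> thing = {4, 1, 3, 2};

    // A lambda as a comparison callback: no free function, no functor class.
    std::sort(thing.begin(), thing.end(),
              [](int a, int b) { return a > b; });

    for (auto i : thing)   // the C++11 loop shown above
        std::cout << i << ' ';
    std::cout << '\n';
    return 0;
}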
There is better C runtime integration in modern compilers, too, and that can make a huge difference.
A decade ago I swore off C++ after a few projects that fell on the spectrum between “disastrous failure” and “catastrophic failure.” I never wanted to use the language again, and in most of the years that followed I tried every other language I could find (except PHP–I was desperate, not insane ;).
A year or two ago I picked up C++ again for a project on a target platform that supported only C++ and C. C++11 support in GCC was coming along, so I tried using it again after reading about exciting new features like auto and closures.
The first thing I noticed: almost everything that annoyed me about C++ has been fixed. It took me months to find a compiler or library bug, and the first few I found had already been fixed for a year in upstream GCC, so I just upgraded and kept working (contrast with C++98 where I found a way to segfault the compiler on day 2).
I wrote a medium-sized multi-threaded program complete with a command interpreter and something analogous to a Unix shell, except using pthreads instead of processes, queue objects instead of pipes, and commands implemented as class methods instead of external binaries.
Just for amusement one day, I decided to implement a “kill” command that would hit a thread with pthread_cancel as a test of what I believed to be the worst-case failure scenario: low-level C API interacting with C++ runtime in an undefined way. I expected the program to either crash or hang (as all the previous projects I had worked on did), but this program did neither. Instead, the C++11 runtime converted the pthread_cancel into a kind of C++ exception, then neatly unwound the cancelled thread, and even propagated errors correctly from one thread to another (the queues had an exception analogous to SIGPIPE that would fire when the peer thread destroyed its end without closing it first) automatically. The program leaked nothing and it even returned to the command loop with a readable (though cryptic) diagnostic extracted from the exception.
The thing is, this didn’t happen on my 20th try or my 200th try. This unfolding flower of correctness happened on my first try, by accident, because I’d followed a few simple rules about object ownership and exception safety, and let the compiler do the work. I couldn’t have planned for that success because I was convinced such a thing was completely impossible.
Some of the old criticisms still apply, of course. It has been only 108 days since I wrote in C++ a trivial file parser that segfaulted at the first sign of danger. :-P
There are a bunch of people who claim that ‘C is just a subset of C++’. And yes, my answer is: I can still design a language which is a real superset of C++, which can do BOTH prototype-based and class-based OOP, and which supports operator overloading, and overloading precedence too, with a super-duper-error-prone template system which, by the standard, traces errors back to ‘just-above-assembly’ code. Let’s call it programming language X. Why use C++ when there can be X, which can be a better superset of C++?
I can understand C++, sure, but I am pretty sure that badly written C code is more readable than badly written C++ code, just like a badly formatted XML file is far more readable than a badly formatted JSON file (or just IMHO).
Also, I find C++ code very unportable, even to integrate with other programming languages like C: mapping classes to pure functions, the ‘extern “C”’ for the sake of ABI compatibility, etc. When the code grows large, they are such a pain. When I program, I would rather use strict C so that the code doesn’t become unreadable with too many types of brackets and punctuation like :, ->, and something::dosomething<somethingelse::dosomethingelse, something::tempstorage>. In C, they don’t even have a chance for this :D.
But I wish the C standard would get extended to something that has all the cool features without the word ‘class’ (if someone desperately wants some OOP, JavaScript-style is cooler :) ).
C++ is full of sh*t and now D also (thank you Alexandrescu!!! this guy is putting all the shit of C++ into D).
Use Nim.
http://nim-lang.org
To add to the pile, and provide a first-of-2016 comment…
As a scientific programmer, I require abstraction and performance. I enjoy high-level systems design, but I don’t have time to invent wheels, buggies, or efficient diesel engines. Consequently, I use R (in something of a functional programming paradigm) for 80-90% of what I do. As per ESR’s 2008 comment, “profile and push bottlenecks to C”.
But from where I sit, C looks like heaps of raw materials and unassembled parts. The STL comes “batteries included” with fully templated containers. Writing a decent implementation of a list is *not* my job, let alone writing a high-performance, automagical templated list. I can snap those parts together fast and depend on them to perform their task, and move on to the interesting parts of my job.
As to GC, I can accept that RAII was relatively new 8 years ago. Automatic GC sounds nice, but RAII isn’t particularly hard to learn, and deterministic object destruction is a pleasant feature.
(See, for example, http://programmers.stackexchange.com/questions/118295/did-the-developers-of-java-consciously-abandon-raii)
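For anyone who hasn’t met RAII, a minimal sketch (my own, with a hypothetical data.txt):

#include <cstdio>

class File {
    std::FILE* f_;
    File(const File&);             // non-copyable (pre-C++11 style)
    File& operator=(const File&);
public:
    explicit File(const char* path) : f_(std::fopen(path, "r")) {}
    ~File() { if (f_) std::fclose(f_); }  // runs exactly at scope exit
    std::FILE* handle() const { return f_; }
};

int main() {
    File f("data.txt");   // acquired here...
    return 0;
}                          // ...released here, deterministically, GC or no GC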
Hi Eric,
I was wondering what the status of this paper was, and if it would be possible to read a draft if it hasn’t been published yet.
I’m also wondering if you have any thoughts on Niklaus Wirth’s Oberon language and system.
What about Ada programming language, why doesn’t that get more popular?
Is the main reason over-verbosity?
If yes, why doesn’t someone create a language with the same semantics as Ada, but much less verbose?