
Make alternatives

Last week, I listed what I think is wrong with Make. This week, I offer my suggestions for alternatives.

(The previous article is available here.)

Much like text editors, build tools form a large population. You can find many of them cataloged at the Free Software Foundation site and at DMOZ. You can also find many build tools by digging in well-known Open Source repositories like SourceForge or Tigris, but it is much more difficult to spot what you are looking for on those sites. I will discuss a small selection here. The selection is highly subjective and is biased towards Open Source tools. I have grouped the tools into several categories to help you get a better overview.

Make clones

The tools in this category are not really alternatives. They are what people currently understand as Make (virtually nobody today uses the original BSD Unix Make). I mention three tools in this category: GNU Make, Opus Make, and NMake. They all share some compatibility with the format of the original Makefiles.

Opus Make

This tool is not widely used today, but, in my opinion, Opus Make is the best Make clone ever made. It has a few outstanding features that set it apart from the crowd. It has the richest set of directives allowed in the Makefile (including its own "cd", "echo", "copy", "delete", and other frequent shell commands). Even more important, these directives can take effect at parsing time or at rule execution time. This makes for more portable execution parts in rules, greatly reducing the dependence on the shell underneath. Opus Make has always had logical operators in conditional expressions, regular expression substitutions, the ability to trace the parsing and to stop on lines not understood, and much more. With its "inference restart" command, it has support for one-pass building, a feature rarely seen. Along with its comprehensive set of native features, Opus Make also has a fair set of emulations for other Make tools. Unfortunately, according to the Opus Web site, its development seems to have stopped back in 1998. A version of it is still distributed today with the IBM Rational ClearCase SCM tool.

GNU Make

GNU Make is very likely the most widely used build tool. Being available as Open Source and being an integral part of the GNU tool set made it a popular choice on many Unix-like platforms. It is actively maintained and is a vast improvement over the original Make, especially with its large set of new macros. Over the last ten years, it has gained some important features (that should have been available from the beginning): printing at parsing time, stopping the parsing and exiting, forcing the build, defining new function-like macros, making case-insensitive filename comparisons on some platforms, etc. Unfortunately, GNU Make also kept most of the problems of the original Make, and much of the criticism that you'll hear of Make is actually directed at GNU Make.
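To make those additions concrete, here is a minimal sketch (not taken from the article; the variable and module names are hypothetical) of some of the GNU Make features just mentioned: $(info) prints at parsing time, $(error) stops the parsing, and define/call builds a function-like macro that $(eval) turns into rules.

    $(info parsing with GNU Make $(MAKE_VERSION))

    BUILD_MODE ?= release
    ifneq ($(filter-out debug release,$(BUILD_MODE)),)
      $(error BUILD_MODE must be "debug" or "release", got "$(BUILD_MODE)")
    endif

    # A function-like macro; $(1) is the module name.
    # (Recipe lines must start with a tab character.)
    define compile-rule
    $(1).o: $(1).c
    	$$(CC) $$(CFLAGS) -c -o $$@ $$<
    endef

    MODULES := parser lexer
    $(foreach m,$(MODULES),$(eval $(call compile-rule,$(m))))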

AT&T NMake

NMake originates at and is maintained by AT&T laboratories. The large set of Open Source tools from AT&T (Korn shell, graphviz, etc.) is built with it. NMake is Open Source itself. There is a commercial version from Lucent as well. The AT&T-style source distribution packages rely on NMake and their own configuration tool, iffe. This build system from AT&T is very Unix-centric, exactly like the GNU build system. Note that Microsoft also has a Make clone called "nmake" bundled with their development environments. Take care; they are incompatible. Don't expect a Makefile written for one to work with the other.

Make evolutions

The tools in this category move away from the old syntax of Makefiles but don't break with the Make tool spirit. That is, they are still developed in C or C++ and they still use some kind of text file located close to the sources to describe what has to be built. I will mention two tools in this category: Jam and Cook.

Jam

Jam is maintained and promoted by the people behind the Perforce SCM tool. Perforce is commercial software, but Jam is fully Open Source. Despite some small issues with its syntax, Jam files are expressive enough for the Jam tool to come standard with a decent database of rules.
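As a small illustration, here is a minimal Jamfile sketch (the target and file names are hypothetical) using rules from that standard database: Main builds an executable, Library builds a library, and LinkLibraries ties the two together.

    # Build an executable from two sources and link it against a local library.
    Main hello : main.c greet.c ;
    Library libutil : util.c ;
    LinkLibraries hello : libutil ;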

Jam was so influent that it generated a set of clones (FTJam, Boost.Jam, Boost.Jam.v2). The most interesting is Boost.Jam, from the maintainers of the Boost C++ library. It introduces some good syntax extensions to the original tool. You can tell by those extensions that the authors were C++ programmers (for example, the scoping of variables looks like C++ namespaces). Boost.Jam is neither the only nor the first attempt to raise the level of the build description, but more than other Make evolutions, Boost.Jam focuses on real build issues. For example, it provides canned variant builds, it provides a dynamic loading library abstraction for Unix and MS Windows target platforms, it cares about testing the result of the build, etc. If you have ever used Make for a real-life-sized project, you had to provide such things through your own effort, and you will immediately understand the savings this approach of Boost.Jam brings. Unfortunately, Jam doesn't make the jump to content signatures and to commandline signatures. The problems related to the weak timestamp heuristic are still present.

Cook

Cook is a build tool designed by Peter Miller with quite a long history. Like Jam, it is Open Source and supports parallel builds. Like Jam, it avoids recursion. Like GNU Make and unlike Jam, it relies on a separate tool, c_incl, to get implicit dependencies in C code. The build description syntax used by Cook takes some features from Makefiles and some from LISP. (Many build tools have a LISP-like syntax. There is a good reason for that: Manipulating lists, i.e., lists of dependencies, is a frequent task when describing a build.) Cook can use content signatures, or "fingerprints" in its parlance.

I cannot express a clear preference in this category. Boost.Jam and Cook come close. But I can say that I prefer any of them over GNU Make. The reason is that they allow you to focus on build design and forget some of the low-level build implementation details. Unfortunately, they haven't achieved the widespread use that they deserve, perhaps because they are not part of a more comprehensive build system, as GNU Make is.

Bootstrapping

You may argue that my preference does not make a lot of sense. Indeed, both Cook and Jam (the original) rely on the GNU build system to get themselves built, which in turn relies on GNU Make. So it is not a matter of preference; you'll have to have GNU Make. This gives me the opportunity to introduce a thorny issue, bootstrapping. The question is: How do you build when you don't have a build tool? In particular, how do you build the build tool itself? One solution is to cross compile and then distribute binaries for the new platform. Another solution is to go back to shell scripts for the build description of the build tool itself (like Boost.Jam does). Or you can just rely on "good old GNU Make", like many self-proclaimed modern build tools do.

Some build tools are able to generate shell scripts automatically from their native build descriptions. Of course, the generated scripts don't have the full functionality of the build tool. They are usually able to do a build from scratch, and nothing more. And they usually require manual setup of the shell environment. Nevertheless, those generated shell build scripts help a lot with the bootstrapping issue.

Build systems

The build tool is always just a piece in the larger software development system. Let us define more precisely the terms used below. The build tool reads a build description and then directly starts other tools to actually produce the result of the build (a report, a compiled file, etc.). A build system is a set of tools, including a build tool (by the way, a good build system will support several competing build tools). A build system may produce or adapt the build descriptions before they are used by the build tool. A build system may in some way manage the build result (for example, it may post the summary of an automatic build on a Web site). Next to the build tool, the other important tool in a build system is the configuration tool, the one that adapts or automatically generates the build descriptions used by the build tool.

To understand the paradigm shift introduced by build systems, it is important to first know the new requirements addressed by build systems: software distribution as a package of sources. The typical usage scenario for a build tool is C sourcecode development. Here, the genuine input changes frequently and locally. Avoidance is a crucial feature in this use case. How building works elsewhere is less important. Also, the addition of new components to a build has to be easy. By contrast, for software distribution, the crucial need is the ability to customize the software. It has to be adapted to new build platforms, to new runtime platforms, to sitewide conventions, etc. The builds are far less frequent. (Avoidance is not really a requirement, for example.) The difference in requirements has had an influence on the evolutions of build tools (some being used mostly from within build systems, and some being used mostly stand-alone).

The key advance in build systems is the fact that they put some executable code in the sourcecode distribution of the software to be built. We name the executable part in the source distribution the configuration tool. This changes the nature of the sourcecode distribution; now it is more like an installer package. "Installer" is a term stolen from binary distributions. (For software distributed in binary form, it is a long-time established practice that the package has to "execute" on the system that receives the software distribution). Because you are now supposed to execute some program delivered with the sources before you start the build, the source distribution has a chance to adapt itself to the current build machine. This is what makes build systems so helpful with sourcecode portability. The configuration tool may change build descriptions or sourcecode or both. Sometimes, the configuration tool has evolved into a complex interactive tool (see the several tools available to configure Linux kernel compilation).

The configuration tool inspects the build platform in small steps, with individual checks for one feature. Hereafter, we will call these steps probes. How the results of probes are represented differs from one tool to another, but most of them have a caching mechanism in place so that you don't need to run a probe each time you need its result. Probes can have various granularities and can be independent or dependent on the results of other probes. Roughly speaking, the available set of probes is the way a configuration tool represents its knowledge about the platform to which it needs to adapt. The result of the entire set of probes defines de facto a model for a given platform. This is painful work at the operating-system level, with more or less support from those systems' maintainers. Given the diversity of platforms to cover, configuration tools in general take on a tremendous job. This unavoidably generates frustration in different places and at different levels. Try to be positive and not judge a configuration tool only by its occasional failures.

Before we look at some more Make alternatives, let us first compare a few build systems.

The GNU build system (GBS)

This is the system of choice in the Open Source community. It is based on GNU Make as its build tool. GBS has a few remarkable features that greatly help with software portability. The configuration tool of GBS is a shell script named "configure". That script inspects the build platform and then produces build descriptions for GNU Make as well as C sourcecode (actually one central header file, to be included by the C source files).
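For readers who have never used it, the canonical sequence on the machine that receives the sources looks like this (the install prefix is only an example):

    ./configure --prefix=/usr/local   # inspect the platform, generate the Makefiles and config.h
    make                              # build with GNU Make
    make install                      # install the results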

What makes GBS so successful? Certainly not the fact that it was the first attempt or the only attempt. Long before it, the IMake tool introduced higher-level build descriptions. IMake comes from the C code base of the X Window System. IMake's higher-level Makefiles made the build description both more portable and easier to write. Yet IMake was never as successful as GBS, probably due to the way the knowledge about the platform was implemented for the IMake tool.

The configuration tool of GBS, the "configure" shell script, is automatically generated by a tool named autoconf. autoconf is not a build tool, but a script compiler. In my opinion, autoconf is the most valuable part of GBS, and it is largely responsible for GBS's success. The script produced by autoconf is made very portable (it doesn't have heavy requirements on the underlying system). This means that, unlike with IMake, with GBS, you have a fair chance to successfully build on a platform that was never seen by the author of the software you build.

Equally importantly, the authors of GBS were the first to seriously consider the bootstrapping issue. Indeed, it is possible to use GBS to build parts of GBS. After a few iterations, you'll get a completely new version of GBS. It is not an easy process, but at least it is possible. You may wonder, "What's the big deal? I can already use Make to build Make!" The big difference is that, in order to build Make with Make, a version of Make has to be installed on your build machine first. By contrast, you can build autoconf with GBS on a build machine where autoconf was never installed before.

If GBS is so smart, why isn't everybody using it? One reason is that GBS is very Unix-centric and dedicated to C/C++. And even if you build C programs on Unix, GBS only works effectively if your sourcecode complies with the GNU coding guidelines. This is a showstopper if you have a large body of code with its own coding guidelines (for example, code that does not use the central header file config.h but some other mechanism, maybe automatically generated as well).

Another reason why GBS is not used everywhere is the price it pays for its portability. For example, the probes of autoconf are encoded as M4 macros. M4 is a powerful text processor, but its syntax is pretty low level and difficult, difficult enough to be a major obstacle to extending the reach of GBS. While a godsend for less capable platforms, GBS has a much harder time winning the favor of developers on mainstream platforms. People simply don't want to give up their convenience (when writing probes) for the sake of some obscure platform no one has heard of. I know this is not a technical argument, but the world is such that non-technical arguments are sometimes more important (see user reactions at Freshmeat).
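For a flavor of what those probes look like at the user level, here is a minimal configure.ac sketch (the project name and the probed headers and functions are only examples); every AC_* line is an M4 macro that autoconf expands into portable shell code in the generated configure script:

    AC_INIT([hello], [1.0])
    AC_PROG_CC
    AC_CHECK_HEADERS([unistd.h sys/param.h])
    AC_CHECK_FUNCS([mmap strlcpy])
    AC_CONFIG_HEADERS([config.h])
    AC_CONFIG_FILES([Makefile])
    AC_OUTPUT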

Another thorny issue is weak support for embedded software development and its associated issues. In the embedded world, everybody is cross-compiling; the build platform and the runtime platform are separate and very different in nature. In that case, automatic inspection of the build platform before the build will not get you very far. In the embedded world, some kind of database holding the characteristics of the different platforms turns out to work better. The database is painful to maintain, but it is the only solution that works. Autoconf has lately added some features to support cross-compilation.
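For reference, a cross-compilation with an autoconf-generated script is typically requested like this (the platform triplets are only examples; --host names the platform the code will run on):

    ./configure --build=i686-pc-linux-gnu --host=arm-none-linux-gnueabi
    make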

Last but not least, one issue with GBS is the fact that it uses GNU Make and nothing other than GNU Make as a build tool. You will frequently have inconsistent builds, you will have a hard time debugging an inconsistent or a broken build, you will not be able to build a program called "clean" or "install", etc. All these issues with GNU Make undermine the important achievements of autoconf.

Alternative configuration tools

The need to configure the sources before compilation is quite old, and solutions have grown in almost all large C/C++ codebases, many being ad-hoc solutions. Some codebases have been migrated to GBS, some kept their own because of some isolated advantages or just lack of resources for migration. Many build tools evolved to provide some form of source configuration, either as an add-on or as a part of the same tool. For example, Boost.Jam has its "feature normalization", SCons has its Configure, etc. I would like to mention here two stand-alone solutions used with Make, iffe and metaconfig.

If you are familiar with the AT&T labs Open Source software, you've met iffe. This is the configuration tool of their source distribution packages. Like "configure" in GBS, iffe is a shell script and is distributed with the C sourcecode. Unlike "configure", iffe is not generated. Another important difference is that iffe doesn't focus on build descriptions. It generates only source files (to be precise, it generates header files that your C files are supposed to include). It does that by processing input files named "feature test files". The feature test files are written in a specific language that is interpreted by iffe to generate header files. Because, in one configuration process, several headers are generated according to your decisions to group probes in feature test files, iffe's authors claim that their system is more flexible than GBS, and they are probably right. In the end, the fundamental mechanism to adapt the sourcecode is the same: conditional compilation with the help of the C preprocessor.

If you are familiar with the Perl software source distribution, you've met metaconfig and the dist package. Metaconfig is a shell script compiler (older than autoconf), and the configuration tool it generates is called Configure. Unlike the one generated by autoconf, Configure is mainly an interactive tool. The probes used by metaconfig are called units. These units are shell code snippets. Metaconfig was probably the very first tool to do a decent job scanning the sourcecode base to automatically detect points of customization. By comparison, autoconf still has to rely on a helper tool, autoscan, to do this with more or less success. Automatic scanning of sourcecode is great, but it requires consistent compliance with coding guidelines decided by the configuration tool. That may not be the case for existing code.

Alternative build systems

There are several other alternatives to GBS available today, both commercial products and Open Source programs. I would like to mention two of them: CMake and Qmake. Do not be misled by their names; they are not replacements for the Make tool, but Makefile generators. Also, do not confuse the Qmake tool from Trolltech with the qmake tool from the Sun GridEngine, which is just a parallel GNU Make.

As with GBS, the main issue addressed by these tools is the portability of the build description. CMake and Qmake have a lot in common in their design and in their spirit. They are both Open Source and they are both implemented in C++ (Qmake had a predecessor, tmake, implemented in Perl). They both support more than one build tool. (Most notably, they support the classic Make as well as the Microsoft IDE. They provide dynamic linking library abstraction to cover both Unix .so files and Microsoft .DLL files.) They both introduce their own high-level format for the build description. They both take basically the same approach to hierarchical builds as the original Make. Not betraying their high-level nature, both tools also provide support for generating some sourcecode (wrappers or mock objects). My preference goes to the CMake tool. It supports a wider set of build tools, and it offers a choice between commandline and graphical user interfaces.
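As an illustration of the high-level format, here is a minimal CMakeLists.txt sketch (the project and file names are hypothetical); CMake turns such a description into Makefiles or Microsoft IDE project files, and the SHARED library becomes a .so on Unix or a .DLL on MS Windows:

    PROJECT(hello C)
    ADD_LIBRARY(greet SHARED greet.c)
    ADD_EXECUTABLE(hello main.c)
    TARGET_LINK_LIBRARIES(hello greet)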

For both of these tools, you have to distribute the binary executable of the tool with the sourcecode package, and this raises a bootstrapping issue. Also, in my opinion, having the knowledge embedded in C++ classes, as in CMake, is a shortcoming. I certainly agree that C++ classes are much better than M4 macros, but C++ will limit the number of contributions from the community. People who can write good C++ and want to support new toolchains are required to contribute their changes in a consistent and systematic way. Otherwise, over time, we will get several incompatible versions of the tool's executable floating around. Storing the knowledge about the toolchains in some kind of configuration files, so that additions don't require C++ recompilation, may outweigh the disadvantage of a more complex distribution of the tool (a set of files to distribute instead of a monolithic executable). Finally, like GBS, these two build systems share the disadvantages of the build tools they use. As already mentioned, that is mainly poor checking, which results in high chances of getting inconsistent builds during active development of the software.

More Make alternatives: script-based build tools

My point in the previous section is that, despite what some people hoped, a good build system will not spare you the need for a good build tool. It is certainly helpful, but if the build tool is weak, the build system will have a hard time hiding it. Fortunately, the community of developers has been busy making not only smarter build systems, but also better build tools. In the following section, I describe another category of build tools. I call this category "script-based" build tools, but it is not easy to find a name that adequately describes them in two words. It is important to understand the new paradigm shift introduced by this category of tools. The approach taken by their authors says, "Let's not invent a new syntax. We are not in the business of writing text parsers. Others already did that, and they did it better than we could in a fraction of our time." They also say, "Let's not write and maintain the portability layer for our build tool. Let's reuse some virtual machine already available." So these people focus on the build design and on build design-related issues. This seems to me a sound choice when one aims to provide a better build tool. I will mention only two tools in this category, Ant and SCons, but there are many others.

Ant

The Ant build tool is well on the way to becoming the next Make. It is an Open Source project from the Apache developer community. It is a Java program, and it uses XML for the build descriptions. It has spread quite quickly because it faced no competition (certainly not from the old Make). Too Java-centric at the beginning, it has grown today into a mature build tool that can put to shame many of its competitors. One sign that a piece of software is mature and fulfills a real need is the number of new projects that take that software as a base, and there are many software projects based on Ant, including:

  • Commercial products like OpenMake.
  • Extensions to address new builds (sets of new tasks in Ant parlance, like ant-contrib, which includes support for C/C++ builds).
  • Extensions to allow higher level descriptions (antworks, previously centipede).
  • Extensions to provide integration with graphical IDEs, etc.

In fact, no other build tool is gathering so many development efforts today. This gives Ant an important head start in the race to become the next Make.

Like the original Make, Ant was the primary source of inspiration for a set of new build tools like NAnt and, more recently, MSBuild. They don't aim at file compatibility for build descriptions, but their spirit is the same: Describe the build as a set of tasks to carry out, each task coded as an XML element.

Of course, Ant is no silver bullet. Ant without extensions has a scalability problem. A lot of it comes from the choice of XML as a file format. XML is verbose. XML does not have a convenient way to include one file in another (the XML Fragments standard is not really supported by XML parsers, XML entity includes work but have limitations, etc.). You need some kind of inclusion because you want to factor out the common parts of several build descriptions. Of course, an application using XML files is free to add "include" semantics to one chosen XML element, and that is what Ant finally did with the "import" task in version 1.6.
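A minimal build.xml sketch gives the flavor (the project name, directories, and the imported file are hypothetical); the import element is the inclusion mechanism added in Ant 1.6:

    <project name="hello" default="compile">
      <import file="common-targets.xml"/>
      <target name="compile">
        <mkdir dir="build/classes"/>
        <javac srcdir="src" destdir="build/classes"/>
      </target>
    </project>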

Also, XML is not really a programming language. You may want to have the content of one XML element computed from sibling elements (computed strings occur often in build descriptions; see Make macros). Or you may want an element to be looked up in the parent, or the parent's parent, when it is not found at some place (which would correspond to variable scoping in programming languages). I say "you may want", but for projects of real-life size, you will need that. Otherwise, the build description grows very big and redundant. This is not a hard limitation; the Ant tool does give you the power to express whatever you want. It is just the fact that you have to go back and forth between a purely descriptive part of the build specification (build.xml) and a purely procedural part (the Ant task implementations) that I don't like very much. People have contributed Ant tasks like <if>, but this will not turn XML into a programming language overnight.

Another issue is that Ant has a noticeable overhead for startup and initial XML processing. This is an important psychological factor in the case of null builds (builds in which everything is up-to-date and nothing has to be done). The Make tool was fast in such builds. Make was fast for a bad reason, but the fact remains that people have a philosophical problem today with accepting that the build is slow when nothing is to be done. As with other build tools in this category, it would be nice to have the ability to "compile" the build description, and maybe the entire dependency tree, into a speed-efficient format.

SCons

SCons is a build tool that uses Python for the syntax of its build descriptions. The origin of SCons is Cons, a similar tool using Perl syntax for the build descriptions. That older tool is also still available. SCons is my preferred tool in this category and also my preferred build tool overall. It comes preconfigured with a fair set of rules ("builders" in SCons parlance). Even more compelling, it can be extended in an easy and natural way. Unlike Ant, the build description and the extensions use the same syntax (in this case, Python). SCons allows flexibility in the compromise of speed versus safety (by using either content signatures or timestamps). It detects changes on the commandline used to build something, and this makes it outstandingly reliable compared to its competitors. There are many other features I like in SCons: the transparent use of code repositories, the wink-in of cached binaries, the ability to accurately document the build, the support for extracting snapshots of the code and, finally, the promise to grow into a better build system (autoconf-style probes, the ability to generate project files for IDEs, etc.).
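Here is a minimal SConstruct sketch (the file and target names are hypothetical) showing the Python syntax and two of the built-in builders; the commented-out SourceSignatures() call is the knob that trades content signatures for timestamps:

    env = Environment(CCFLAGS='-O2')
    env.Program('hello', ['main.c', 'greet.c'])    # an executable from C sources
    env.SharedLibrary('greet', ['greet.c'])        # .so on Unix, .dll on MS Windows
    # SourceSignatures('timestamp')                # use timestamps instead of MD5 signatures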

As for its shortcomings, SCons is a young piece of software, at least when compared with some of the tools mentioned above. Another issue is that, despite not being recursive, SCons is slow. This seems to be a price to pay when focusing on high-level build design issues. A time will come when the authors of SCons will focus on speed optimizations. As with Perl and Java, Python allows moving the speed-critical parts to a native compiled code implementation (when the software is stable enough).

From another perspective, despite the fact that Python has a clear syntax, some people perceive SCons as being too low-level. It is perfectly possible for the release engineer to describe the build of a 50-developer project for several target platforms in only a few dozen lines of SCons code. The release engineer will find this a great time saver, but the average developer will find that code cryptic, like shell code in the early Unix days, or worse. The developers will soon ask for easier ways to add new components to the build.

More build-related tools

An overview of build tools in one article has to be a limited one. We didn't even mention all the areas in which build tools needed and received improvements.

One such area is better integration with the SCM tools. This is important because it directly addresses a long-standing issue: the reliability of the build. There are many SCM tools providing improved build tools, both commercial (Omake in IBM Rational ClearCase) and Open Source software (Vesta). But comparing the build tools integrated with SCM systems goes far beyond the scope of this discussion.

Another area we didn't touch is automation and reporting of the build (CruiseControl and Dart as Open Source tools, VisualBuilder and FinalBuilder as MS Windows commercial software). Yet another area not discussed here is the acceleration of the build when parallel build machines are available (distcc and ccache as Open Source tools, IncrediBuild and ElectricCloud as commercial software).

Finally, there is one last area that we don't want to discuss here: the software distribution tools. Many tools today allow you to connect as a client to some software repository, download the software you want, automatically download other needed parts, and install everything on your system while locally building the parts that need building. Those systems include a build tool or act as a build tool when needed. They have to act as a configuration tool also, and they have to do some form of dependency tracking, much like build tools do. The best known is probably the Ports system of the BSD OSes, and its followers, like Gentoo Portage. Take a look at the A-A-P tool Web site for a list of such systems.

General conclusion

The final conclusion is up to you. The considerations in this paper reflect only my experience. But I hope that you now have some useful background to choose the build tool and the build system that best fit your own requirements.

Recent comments

09 Jul 2005 04:31 alienscience

Thanks
I've gone through a smaller but similar journey through build tools and appreciate that you actually took the time to write down and share your experiences. I find the GBS (autotools) to be very frustrating and have ended up writing my own shell scripts in the past because it was the easiest option. I will definitely be trying QMake, CMake and iffe after reading this.

However, I do disagree with your conclusions on some of the tools. Personally, I am thinking going back to make. It took me 10 minutes to learn, uses skills I already have in the shell and, for me, it works in a simple and consistent way.

I moved to using Scons a few months ago and I feel it suffers greatly from being based on a general purpose language. The code does not look like a build description, its too slow on any CPU under 1GHz and I do not find it consistant. For instance, having a 'make test' sort of target that runs unittests took me ages to figure out. With 'make' its simply the same as any other target.

Humble 'make' never wasted so much of my time as the better improved Scons and I fear that this is a problem with a lot of the newer tools. They try so hard to be powerful (possibly because they are written by people that find builds interesting rather than a necessary chore when coding) that they loose the simple elegance of 'make'.

09 Jul 2005 05:49 jpmalkiewicz

Original BSD Unix Make
Unless I'm mistaken, many *BSD developers would disagree with your statement about 'virtually nobody uses the original BSD make'. Indeed, the PORTS tool you mention towards the end depends on a direct descendant of BSD Unix Make.

09 Jul 2005 07:36 jaceatfreshmeat

Other make clone

I ran into dmake quite some time ago. It's not the same as Sun's dmake (distributed?). There's more info at linux.maruhn.com/sec/d... and www.wticorp.com/Downlo... but it doesn't seem to be maintained anymore. It had some advantages over GNU Make. Clearer syntax, clearer VPATH handling.

(couldn't get the links to work with tags, sorry)

09 Jul 2005 09:59 noselasd

Scons slowness.
Just thought I'd mention that the slowness of scons mostly comes from it MD5 summing the files to check if source files have changed.

You can make it use timestamps like most other build tools; SourceSignatures("timestamp") - usually speeds things up significantly.

09 Jul 2005 12:02 renez

Make article
Make is not the build tool for everything. It does have simplicity as its biggest asset.

All the others fail to be a compelling alternative to make.

An other article about a better make?

requirements like:

platform independent,

tool sethandling (scons)

consistent handling of variables,

being able to recurse into directories.

handle timestamp and if changed 2nd check with hash

a lot of information about target is implicit. make them explicit ( a la scons, jam e.g. lib, prog)

variant builds?

what grammar for the improved make

break history and make a clean start?

conversion utility for makefiles to new make

create a graph of all the dependencies

graph with build times of every node.

infer dependencies on the basis of rules

distributed builds?

Bram Moolenaar has given it some thought for the aap utility (www.a-a-p.org/tools_sc...)

The scope for aap is bigger than make.

09 Jul 2005 15:47 jepler

Re: Scons slowness.

> Just thought I'd mention that the

> slowness of scons mostly comes from it

> MD5 summing the files to check if source

> files have changed.

I wonder whether this is the case---if it's true, then I can only imagine they've written some code with very poor I/O characteristics, or are using a slow implementation of the hash function. In the project I mentioned in the previous article, we have about 6000 object files built with non-recursive gnu make. A "do-nothing" build, including the time to stat the files, takes 4 seconds. It takes only 0.7 seconds to md5sum 184 megabytes of source files, include files, and objects, from hot cache. (time sh -c 'find ... | xargs -P4 md5sum > /dev/null', best of 4 runs)

I've never used scons.

10 Jul 2005 13:04 coudercd

Another configure alternative
Well i'm surprised (or maybe sad) that the author didn't found the PMK project (freshmeat.net/projects... or pmk.sf.net) which is in the first page of the builds tools recorded in freshmeat.

Speaking of make, i don't feel so much pain for using it. The problems come when you start to use non portable features. That said you it's possible to make clean and portable makefiles.

I'm actually working on a makefile generator that scan the source files to automatically produce portable makefile(s) that doesn't need any template and that need the less manual changes possible (look at pmk.sourceforge.net/pm...).

11 Jul 2005 03:34 Kayamon

Let it figure the dependencies out itself
I'm going to take this opportunity to make a shameless plug for my own program -

sham.sourceforge.net/

It's a small command-line tool that tracks dependencies itself. It allows you to replace Makefiles with just a script in any language you like.

13 Jul 2005 16:21 buildsmith

Re: Original BSD Unix Make

> Unless I'm mistaken, many *BSD

> developers would disagree with your

> statement about 'virtually nobody uses

> the original BSD make'.

You're right. My apologies. The sentence should have been "the original make has only a small fraction of users today". It's more polite, but the meaning is not really different: I do believe that they are a very small fraction if we compare with, let's say, GNU make.

13 Jul 2005 17:23 buildsmith

Re: Thanks

> Personally, I am thinking going back to make.

> It took me 10 minutes to learn,

> uses skills I already have in the shell

> and, for me, it works in a simple and

> consistent way.

In 10 minutes you only learned the basics of make. "I already know Make" was a very common misbelief among my developers. It takes 10 minutes to understand the concepts of make. But the pitfalls of the syntax, the subtle differences between make clones, when rebuilding from scratch is needed, when and how to rebuild the dependencies, etc., make up a large body of knowledge that is almost never assimilated by the users (both developers just building and developers adding new parts to the build).

The fact that with make you have to exercise your shell skills is an outstanding drawback of make. This is because shell scripting is hardly portable, and that ruins the portability of the entire build.

But, after all, maybe the size and the scope of your project make it a good fit for make. Please read the first half of the article as well. It explains in much more detail the disadvantages of make and when it fits well.

> I moved to using Scons a few months

> ago and I feel it suffers greatly from

> being based on a general purpose

> language. The code does not look like a

> build description, its too slow on any

> CPU under 1GHz and

SCons is slower than what it should be. Granted.

It's not clear what you mean by "not looking like a build description". I guess you mean that it is not descriptive but procedural in nature. This is not a priori good or bad; it depends on what you try to achieve.

Anyway, I agree with you on one point that you didn't mention explicitly: SCons would gain much if it were easier to learn.

> I do not find it consistant. For instance,

> having a 'make test' sort of target that runs unittests

> took me ages to figure out. With 'make'

> its simply the same as any other target.

Oops. It seems that you never faced a big project. Otherwise you would know what is evil in pseudo-targets like the "test" you mention. Things like "test" are actions, not target names, and the fact that Make puts actions and target names in the same namespace is bad. How would you build an executable named "test"? Is "test", according to you, a likely name for an executable in a large system? How do you make sure that a file name never conflicts with an action name? Do you get it now? The actions should belong to a separate namespace (for example, "make --test" is a more inspired way to implement such a feature). The worst pseudo-target is "clean". Because you can say "make target_A" and "make target_B" but you cannot say "make clean target_A", it becomes practically impossible to make sure that exactly the things related to target_A are deleted, nothing more, nothing less, especially nothing from target_B. Read the manual of the autotools (link above in the article).

For small projects, this disadvantage of make (pseudo-targets) can be kept under control. When you count build targets by the hundreds, this starts to look like a nightmare.
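As a minimal sketch of the clash (the file and script names are made up): if a file named "test" happens to exist in the build directory, "make test" silently does nothing, because Make resolves the action name as an ordinary file target. GNU Make's .PHONY declaration is the usual workaround.

    # Without the .PHONY declaration below, "make test" reports
    # "'test' is up to date" whenever a file named "test" exists.
    test:
    	./run_unit_tests.sh

    clean:
    	rm -f *.o

    # GNU Make workaround: declare the action names as phony targets.
    .PHONY: test clean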

> They try so hard to

> be powerful (possibly because they are

> written by people that find builds

> interesting rather than a necessary

> chore when coding)

Now it is evident that you never faced large builds. If your system is small, I can only advise you not to pick up any of those more advanced tools. It would be overkill. But, please, do not deny a priori requirements other people might have. See also the comments of various people on the first part of the article.

Also, never underestimate the fact that the build description is an integral part of the code base. Your sentence above is an offense to porting engineers and release engineers. They spend more than 50% of their time dealing with the build description. A bad build description can ruin a product exactly like bad C code can ruin it.

> that they loose the

> simple elegance of 'make'.

Do not confuse simplicity and elegance. Make concepts are indeed elegant; Make implementation has serious problems. Read the first part of the article (if possible, before commenting on the second part).

13 Jul 2005 17:44 buildsmith

Re: Scons slowness.

> Just thought I'd mention that the

> slowness of scons mostly comes from it

> MD5 summing the files to check if source

> files have changed.

I'm sorry, Nils, but you are wrong here. You obviously never benchmarked SCons. This confusion is quite common, so let me put it straight. The MD5 computation is blazingly fast. The reason is that it is implemented in C (SCons reuses the implementation inside Python). On my system, it is many hundreds of files per second. By comparison, scanning for #include in SCons is about an order of magnitude slower (switch the implicit-cache option on and off to see).

Another reason for slowness is the fact that SCons is memory hungry when constructing the dependency tree in memory. If you need proof, just run -c --debug=time (meaning run a cleaning with timing switched on) several times in a row and see how long it takes (I hope you agree that -c means that no signature has to be calculated at all).

> You can make it use timestamps like most

> other build tools;

> SourceSignatures("timestamp")

Indeed you can. Going down that path, it is much smarter to use the max-drift parameter, which allows SCons to use signatures that can be assimilated to stored timestamps.

13 Jul 2005 17:49 buildsmith

MD5 speed
You are perfectly right about the speed of the MD5 algo. The belief that the MD5 signatures are what slows down SCons is wrong. Widespread, but still a misbelief ;-). See also my answer below.

13 Jul 2005 18:23 buildsmith

Re: Another configure alternative

> Well i'm surprised (or maybe sad) that

> the author didn't found the PMK project

Don't be surprised or sad. There are lots of tools out there. Some of them are known to me and I didn't want to mention them, and some are simply unknown to me. PMK is in the latter category. I will certainly take a look at it. Thank you.

> The problems come

> when you start to use non portable

> features. That said you it's possible to

> make clean and portable makefiles.

I'm afraid we disagree here. Clean, it is possible. Portable across clones, it is possible. Portable across shells, it is possible with a lot of effort. But clean AND portable is impossible. How do you perform an if-then-else construction? Maybe you moved all decisions out of the Makefile into the input of your configuration tool. How do you create a directory of level 5 when only the first 2 levels exist (the mkdir -p functionality)?

> I'm actually working on a makefile

> generator that scan the source files to

> automatically produce portable

> makefile(s) that doesn't need any

> template and that need the less manual

> changes possible (look at

> pmk.sourceforge.net/pm...).

This tool of yours sounds too much like a silver bullet. I think the misunderstanding is in what exactly you name "portable makefiles".

Do you mean by "portable" that they can be used with different make clones (BSD make, GNU make, NMake)? Do you mean by "portable" that they work well with Bourne shell, tcsh, bash and cmd.exe? Or maybe you only use a limited set of Unix command line tools inside your makefiles that you assume are universally available? Please elaborate.

Be aware that if you stick with only one make, only one shell and only one toolchain, the place is taken (the autotools). People will not be very interested unless your definition of the "portability" of the build description goes indeed broader than what existing tools do. At least you made me curious about PMK :-)

13 Jul 2005 18:29 buildsmith

Re: Make article
Hey, Rene, what is the point of your comment exactly? The site of Bram was already hyperlinked in the article. Concerning the succinct list of requirements for a build system that you posted, people will fall into two categories: those who already know them and thought about them, and those who didn't but will not learn them anyway from your soup of words.

18 Jul 2005 08:07 bmm

Re: Make article

> requirements like:

> platform indepent,

As a Linux hobbyist one of the main things I found wrong with most build utilities is: they take away the simplicity by trying to solve every problem on every system in every way.

Platform dependency simplifies, code dependency simplifies, compiler dependency simplifies. The resulting system can be fully automated for most languages, which makes for a fine building utility for hobbyists.

I'm just saying: if you focus on one problem, you can excel with ease. I'd like more build systems mentioning: we are small, simple and only work on Linux but on this system we can do anything you can imagine with a minimum of scripting and file listing. ;-)

21 Jul 2005 10:46 smithdev

Pretty important Boost.Build / SCons note:
Work is afoot to merge the two. See appropriate gmane.org newsgroups.

26 Jul 2005 10:26 buildsmith

Re: Pretty important Boost.Build / SCons note:
To my knowledge, "merge" is a big word for what is currently happening. It is better called cross-pollination. In the 2 communities, jam and scons, there are puches to import and take profit of ideas developed in the other community. For example, there are discussions to add boiler-plate variants to scons. This basically means preparing higher-level build descriptions for scons. You will agree with me that this is quite far from a "merge". Nevertheless, what is happening between jam and scons is noteworthy, and it should be taken as an example of how to avoid reinventing the wheel.

26 Jul 2005 10:43 buildsmith

Re: Make article

> if you focus on one

> problem, you can excel with ease. I'd

> like more build systems mentioning: we

> are small, simple and only work on Linux

> but on this system we can do anything

Sure, narrowing focus greatly helps. Unfortunately, this is a luxury that only hobbyists may afford.

If you need a real-life example, think about embedded software. Those software engineers deal every day with one piece of software that is cross-compiled for half a dozen target platforms on one or two different build platforms. It's in your car radio, it's in your washing machine, it doesn't run Linux (it doesn't run any OS, by the way), yet it has to be compiled, tested and so forth. For those people, target-platform-specific build tools are not an option. Even build tools specific to the build platform are not really an option, because the embedded hardware provider usually chooses the build platform.

28 Jul 2005 14:44 ejoy

Re: Thanks

> I moved to using Scons a few months
> ago and I feel it suffers greatly from
> being based on a general purpose
> language. The code does not look like a
> build description, its too slow on any
> CPU under 1GHz and I do not find it
> consistant. For instance, having a 'make
> test' sort of target that runs unittests
> took me ages to figure out. With 'make'
> its simply the same as any other target.

I have similar experience here. I think I'm faimilar with python, and also take ages to figure out how to do "make test" in scons. Although I finally succeed in converting a autoconf/make project to scons with all sorts of fancy stuff (make test, make install, make dist etc), the script itself is a huge python program in it's own right. Also scons is too slow after implementing all this in my script.

I now move to jam and never want to go back (even think of) to scons. Jam is small, fast and clean.

03 Aug 2005 07:38 buildsmith

Re: Thanks

> I have similar experience here. I think

> I'm faimilar with python, and also take

> ages to figure out how to do "make test"

> in scons. Although I finally succeed in

> converting a autoconf/make project to

> scons with all sorts of fancy stuff

> (make test, make install, make dist

> etc), the script itself is a huge python

> program in it's own right. Also scons is

> too slow after implementing all this in

> my script.

>

> I now move to jam and never want to go

> back (even think of) to scons. Jam is

> small, fast and clean.

Jam is indeed small, fast and relatively clean. And it is much easier to learn for somebody with a Make background, compared to SCons. So feel free to stick with jam. BTW, which version of Jam are you using? How did you replace Autoconf?

Concerning the phony targets (make test, make dist, etc.) check out my answer right above to learn why a build design using them is not a smart idea.

03 Aug 2005 08:04 buildsmith

Re: Pretty important Boost.Build / SCons note:

> there are puches to import and take profit

Sorry for the typo. I mean "there are pushes" of course.

03 Aug 2005 08:34 buildsmith

Re: Another configure alternative

> the author didn't found the PMK project

> (freshmeat.net/projects... or

> pmk.sf.net) which is in the first

Damien, after a short study of PMK I got confused. Could you elaborate on why I should prefer PMK to CMake?

I understand the reasons to prefer PMK over the autotools; you explained them quite well on the site. But what I see in PMK is yet another Unix-centric tool that doesn't help embedded developers like me a lot: we are cross-compiling day in, day out, and most of the time we don't even have the choice of the build platforms (because the hardware vendor already chose it). CMake has the advantages and the disadvantages of PMK in general, but it's much more open to the world. Maybe I am overlooking something.

04 Aug 2005 12:22 ejoy

Re: Thanks

>
> % I have similar experience here. I
> think
> % I'm faimilar with python, and also
> take
> % ages to figure out how to do "make
> test"
> % in scons. Although I finally succeed
> in
> % converting a autoconf/make project to
> % scons with all sorts of fancy stuff
> % (make test, make install, make dist
> % etc), the script itself is a huge
> python
> % program in it's own right. Also scons
> is
> % too slow after implementing all this
> in
> % my script.
> %
> % I now move to jam and never want to
> go
> % back (even think of) to scons. Jam
> is
> % small, fast and clean.
>
>
> Jam is indeed small, fast and relatively
> clean. And it is much easier to learn
> from somebody with Make background
> compared to SCons. So feel free to stick
> with jam. BTW, which version of Jam are
> you using? How did you replaced
> Autoconf?
>
> Concerning the phony targets (make test,
> make dist, etc.) check out my answer
> right above to learn why a build design
> using them is not a smart idea.

I use a custom-built jam (modified Jambase only), together with autoconf. Jam can work fairly well with autoconf. It's a make replacement, not an autoconf replacement. No need to use automake, which is quite a problem though.

05 Nov 2005 10:26 coudercd

Re: Another configure alternative
Wow,

Sorry, I didn't see your reply before, so here I come.

>

> % The problems come

> % when you start to use non portable

> % features. That said you it's possible

> to

> % make clean and portable makefiles.

>

>

> I'm affraid we disagree here. Clean, it

> is possibe. Portable across clones, it

> is possible. Portable across shells, it

> is possible with a lot of effort. But

> clean AND portable it is impossible. How

> do you perform if-then-else

> construction? May be you moved all

> decisions out of the Makefile into the

> input to your configuration tool. How do

> you create a directory of level 5 when

> only the first 2 levels exist (the mkdir

> -p functionality)?

>

The main problem is to only use the standard features of make, as described in POSIX.

>

> This tool of yours sounds too much like

> a silver bullet. I think the

> misunderstanding is in what exactly you

> name "portable makefiles".

>

See above; portable makefiles are files that follow the standard specifications. Any POSIX-compliant make could use them.

05 Nov 2005 10:35 coudercd

Re: Another configure alternative

> I understand the reasons to prefer PMK

> over the autotools, you explained them

> quite well on the site. But what I see

> in PMK is yet another Unix-centric tool

> that doesn't help a lot the embedded

> developers like me: we are

> cross-compiling day-in day-out and most

> of the time we don't even have the

> choice of the build platforms (because

> the hardware vendor already chose it).

> CMake has the advantages and the

> disadvantages of PMK in general but it's

> much more opened to the world. May be I

> am overlooking something.

>

Hi Adrian,

You're right, PMK is absolutely Unix-oriented and, more specifically, POSIX-oriented.

The auto* tools have demonstrated that a single tool will become more and more bloated if it must support more platforms.

My point of view is that a common base is needed, to be used by platform-specific tools. This common base must be as simple as possible to be usable on each platform.

The main PMK project will always be POSIX-centric, simply because Unix is my favorite platform. I always said that maybe one day pmk could be ported to Windows as another project. Why another project? Because I prefer many specific tools instead of one bloated piece of work.

I hope the explanation does not come too late :)

Damien

15 Nov 2005 19:57 nogin

Missing alternative: OMake.

Unfortunately, the article does not mention the OMake Build System (it does mention IBM's OMake, but that's a different project).

The OMake build tool is designed specifically to address all these many limitations of make, while preserving the "spirit" of make.

OMake is a build system with a similar style and syntax to GNU make but with many additional features, designed to scale from tiny projects, where an OMakefile might be as small as one or two lines, to large projects spanning multiple directories. It has native support for commands that produce several targets at once. It also includes fast, reliable, automated, scriptable dependency analysis using MD5 digests. It is highly portable (Linux, Windows, Cygwin, Mac OS X, FreeBSD, etc.) and comes with built-in functions that provide the most common features of programs like grep, sed, and awk. OMake also provides active filesystem monitoring that restarts builds automatically when source files are modified. OMake comes with default configuration files simplifying the standard compilation tasks. A companion command interpreter that can be used interactively is included.

OMake is open source and is distributed under the terms of GNU GPL (with an MIT-style license for standard libraries).

See OMake Project page (freshmeat.net/projects...) for detail.

03 May 2006 12:53 feanor_marco

Re: Thanks
"No need to use automake"
well, thanks a lot for this hint... I've tried nearly everything

31 May 2006 04:57 operaFAN1

Re: Let it figure the dependencies out itself

> I'm going to take this opportunity to

> make a shameless plug for my own program

> -

>

> sham.sourceforge.net/

>

> It's a small command-line tool that

> tracks dependencies itself. It allows

> you to replace Makefiles with just a

> script in any language you like.

good work, mate

09 Aug 2006 06:32 ericArmstrong

Make Alternatives: Rake/Rant
Terrific set of articles, with a nice collection of alternatives. One that really needs to be mentioned is Rake, and its high-powered cousin, Rant.

Martin Fowler's article turned me on to Rake. It's the reason I became interested enough in Ruby to overcome the startup hurdles.

Rake not only remedies the syntax deficiencies of Make and Ant, it puts the power of the Ruby programming language at your disposal. In a word, it is superb.

For more information, see:

Rake Rocks, by Eric Armstrong

www.treelight.com/soft...

Using the Rake Build Language, by Martin Fowler

www.martinfowler.com/a...

23 Aug 2006 01:04 CollegeStudent

Go with ANT
Very comprehensive! I was introduced to ANT a few years ago and have never looked back. As a ColdFusion/Java developer it's been invaluable.

Here's a great article for CF guys looking to get started with ANT:

coldfusion.sys-con.com...

--

daniel _AT_ tuggle.it (tuggle.it)

03 Jan 2007 13:34 bugmenot

Influent?
I think you mean 'influential' not 'influent'.

04 Jan 2007 16:48 jeffcovey

Re: Influent?

> I think you mean 'influential' not
> 'influent'.

Why would you think so?

06 Jan 2007 03:30 CrazyGFreak

Re: Another configure alternative

>
> % I understand the reasons to prefer
> PMK
> % over the autotools, you explained
> them
> % quite well on the site. But what I
> see
>
> The main PMK project will be always
> POSIX centric simply because Unix is my
> favorite platform. I always said that
> maybe one day pmk could be ported on
> windows as another project. Why another
> project ? Because i prefer many specific
> tools instead of one bloated piece of
> work.
>
> I hope the explanation does not come too
> late :)
>
> Damien

>
No it was not too late. At least for me. Thanks Damien.

28 Sep 2007 17:57 buildsmith

Re: Influent?

> I think you mean 'influential' not

> 'influent'.

Damn, it took me 2 years to notice that funny mistake :-)

I meant 'influential' as in 'having a lot of influence'. That being said, I also think that the 'make' tool is 'influent' as in 'lacking fluency', even 'unspeakable'. But it wasn't my intention to start the article on such a negative note.

Thanks, Debbie.

28 Sep 2007 18:46 buildsmith

Re: Make Alternatives: Rake/Rant

> One that really needs to be mentioned is Rake,

I've tried out rake. And at that time (2005) I largely preferred SCons to rake. The main reason was that more was available out of the box. Also, the SCons distribution named 'scons-local' was very well matched to our setup.

Anyway, it is good to notice that both rake and SCons are targeted more at the release engineer, meaning somebody whose job is to design, implement and maintain build descriptions. For that role, the powerful syntax (Ruby or Python) and their readily available standard libraries really shine. For the casual developer, these same things may be real disadvantages (a significant learning curve). He never invents new kinds of build steps, never designs new builds. It doesn't matter to him that the string manipulation is much better in Ruby (or Python) than with 'make' if he is completely new to Ruby (or Python). Sharp contrast with the closely-build-involved developer, who will instantly fall in love...

Still, let's not forget that "the power of the Ruby (or Python) programming language at your disposal" is largely irrelevant to a lot of developers. They just "add to the build" once in a while, and that is just "copy, paste, change-a-name, run".

> and its high-powered cousin, Rant.

What a pleasant discovery. I didn't know about this one, and it looks like I will like it. So rant (rant.rubyforge.org/) has all the features of modern build tools: content signatures, an integrated implicit dependencies checker, end-to-end builds (from generating sources to making the deployment package). It also has less frequently seen features: an outstanding approach to bootstrapping, support for C# builds, etc. Too bad that the last update is about 1 year old.

28 Sep 2007 18:54 buildsmith

Re: Missing alternative: OMake.
Indeed, this is an entry worth mentioning. Interesting design and interesting set of features.
Thanks, Aleksey. I'm afraid that OCaml is a bit of a barrier for the masses.

25 Oct 2007 07:48 bkleven

What do you think about Bras?
First, I'm coming at this from a different perspective - I'm a hardware designer and we use make to automate as much of our flow as we can.

I understand the basics of make, but the more advanced features are killing me. Unfortunately, at some points in the flow, there are a very large number of outputs and there are quite a few dependencies that are not easily (but still can be) derived from the name of the target. This is just one example of problems I have had.

With that said, I was recently introduced to Bras, a Tcl-based build tool which is hosted on Berlios.de (bras.berlios.de). A quick perusal made it look a bit promising, but at the same time a bit concerning, since it hasn't been updated in over five years (!).

Compared to the other tools noted here, where does Bras stand? Being new to this, I'm still having a tough time ascertaining just what features I should be truly worried about.

Since I know TCL quite well (EDA tools are now almost exclusively TCL interface), I'm naturally drawn to Bras, but that may not be a wise choice.

03 Apr 2012 09:04 themroc

> Last but not least, one issue with GBS is the fact that it uses GNU Make and nothing other than GNU Make as a build tool.

That is plain wrong. From the automake manual:
"
15.2 Simple Tests using ‘parallel-tests’

[...]

Please note that it is currently not possible to use $(srcdir)/ or $(top_srcdir)/ in the TESTS variable. This technical limitation is necessary to avoid generating test logs in the source tree and has the unfortunate consequence that it is not possible to specify distributed tests that are themselves generated by means of explicit rules, in a way that is portable to all make implementations (see Make Target Lookup, the semantics of FreeBSD and OpenBSD make conflict with this). In case of doubt you may want to require to use GNU make, or work around the issue with inference rules to generate the tests.

20 Conditionals

Automake supports a simple type of conditionals.

These conditionals are not the same as conditionals in GNU Make. Automake conditionals are checked at configure time by the configure script, and affect the translation from Makefile.in to Makefile. They are based on options passed to configure and on results that configure has discovered about the host system. GNU Make conditionals are checked at make time, and are based on variables passed to the make program or defined in the Makefile.

Automake conditionals will work with any make program.

28.3 Why doesn't Automake support wildcards?

[...]

Wildcards are not portable to some non-GNU make implementations, e.g., NetBSD make will not expand globs such as ‘*’ in prerequisites of a target.
"

And this (IMO) is one of the reasons the GBS sometimes is a PITA: It *should* require a modern Make with "include" or conditionals, instead of generating Makefiles that even the dumbest prehistoric implementations of Make imaginable can understand.

> you will not be able to build a program called "clean" or "install"

Yes, you can:
--program-transform-name=program
Run sed program on installed program names.
