
Modular vs. Monolithic: The winner is ...?

The history of software development is full of controversies. One of the oldest is the controversy about modular vs. monolithic software development.

In the Free Source community, the debate probably goes back to Richard Stallman's 1983 announcement of the GNU project, which included the microkernel project GNU Hurd. The microkernel architecture, first proposed by the Mach project at Carnegie Mellon University, was meant to overcome the design limitations of monolithic operating systems by providing only a thin (but extensible) software layer above the hardware, offering full flexibility (and thus greater possibilities) to the system user. It didn't have only friends, though. One of its most prominent opponents was Linus Torvalds, the creator of the Linux kernel, first released in 1991. Torvalds not only chose the monolithic approach, he openly described existing microkernels as crap. (Others weren't speaking nicely about Linux, either.)

Over time, every corner of the software world saw similar debates. While the conservatives argued for the speed of monolithic products and warned about the complexity of modular systems, the evangelists praised the possibilities of flexibility, configurability, reusability, and interoperability.

The evangelists developed efficient component frameworks like ORBit and popular toolkits like GTK+ and, on top of them, complete development environments like the GNOME desktop. But the environments became somewhat monolithic themselves with respect to size, speed, interdependencies, and interoperation with other environments. GNOME is a very good example. It started as a competitor to the already-existing KDE and, in the beginning, shared neither components nor even protocols with KDE. The installed codebase weighed in at a few hundred megabytes. While there were only a few native desktop applications, non-native applications showed unexpected behavior due to their lack of support for the new GNOME strategies.

Developing functional, modular environments turned out to be a highly sophisticated long-term business, with less usability and interoperability than expected. Monolithic environments like MS Windows and applications like MS Word still ruled the software market, and they provided even more functionality than GNOME and KDE together. In fact, many desktop-environment developers began to believe that without their own MS Office-like super-applications, the whole Free Source market could be rendered obsolete by the proprietary competitors. In consequence, even component-model fundamentalists like Miguel de Icaza started projects like Gnumeric, an MS Excel clone, and Evolution, an MS Outlook clone.

On the other side, monolithic software development had already shifted to a modern, mixed approach. Developers wanted to profit from strategies like object-oriented programming (OOP) and the available component models (CORBA, COM, and others). The new approach proposed making extensive use of the environment underneath. However, the most-criticized disadvantages of monolithic software, low reusability and interoperability, remained. Besides, only a few Free monolithic projects like OpenOffice or Mozilla could attract enough developers to maintain their huge codebases.

It turned out that both sides were only half successful, and that a combination of both models was the best way to overcome the inherent limitations. However, this is just a halfhearted solution compared to a truly innovative strategy aimed at overcoming the situation. Miguel de Icaza, instead, promotes a brand-new development environment called .Net, which is out to conquer the Free software market soon -- though it's being developed by Microsoft. To avoid missing the deadline, Miguel started the Mono project, a Free implementation of .Net and its programming language C#, and invested a lot in promoting his great new vision.

However, even Microsoft doesn't seem to believe that .Net will end the era of monolithic products. The effort to integrate all of MS Office into .Net shows that only the next step in software development history is being taken: the era of modular monolithic applications. The .Net approach merely translates the advantages of both development models into a consistent (though quite big and complex) modern software development strategy. Microsoft doesn't change anything fundamental, except that it, at least for itself, ends the controversy and draws its conclusions.

The reason for this philosophical change is largely economic. Microsoft's future success is bound to rest on a sane component model if it wants to avoid being passed by its competitors in the long run. This is because Microsoft is the developer of hundreds of products, de facto standards, and other large projects which have grown very complex over time and have to be mastered quickly and efficiently. The typical customer is not interested in such details and wants solutions that match his individual situation. This is true for both end users and application developers, and it forces Microsoft to provide customer-centric frontends with well-known behavior. In other words, Microsoft must be able to serve its own needs as well as every customer's wishes with .Net. This also explains why Microsoft will not give away control over it.

The Free Source market doesn't have a central authority. However, from a bird's-eye view, it is one big, interdependent system which organizes itself automagically, like a fuzzy (but functional) alternative to .Net. This ".free" has neither a clear beginning nor a clear end, but if you want an overview, you just have to browse the DVD of your favorite distributor. The Free Source market has already taken an approach similar to (but more random than) Microsoft's, but the individual Free Source developer is too small to see it.

So, what is the moral of the story? Are both the winner? No, actually not. Is it .Net? That may happen if we don't keep an eye on it. Actually, at the moment, there is no clear winner; it seems there are only clear losers. As explained above, the monolithic and the component model can only coexist halfheartedly. On the other hand, .Net and .free, though they appear to be two different natural responses to this problem, are not overcoming the inherent problems, but just trying to make the best of the situation. The .Net approach is the most consistent, because it not only really fits all the parts together with a single strategy, it comes with all the nifty stuff needed to reach the goal. There is a lot in .Net that may already make it the better approach, also because other proprietary strategies like Java do not go as far and are developing in quite isolated ways. The .free approach doesn't even look like it will ever reach this consistency, because it develops too randomly to ever find a baseline that at least most Free developers will agree on.

Here we come to the biggest threat of all. Even .Net can only offer, not fulfill. And even Microsoft is too small to provide the software world with every piece of code its customers can think of. Therefore, it needs the help of other software houses, which means that the success of the .Net approach is in the hands of the proprietary software market. However, this market itself is not as flexible and open as .Net. Though .Net shows the potential, it may fail because it ends up as the plaything of commerce.

The .free market, on the other hand, has the liberty and openness to make the new approach a success. However, this market can't even agree on a programming language. Even industry standards like CORBA and XML do not capture every Free developer's heart and mind. The potential may be left unused. That would mean .Net is the best way to go.

What can we do to prevent this? We can try to agree. Not about the right programming language, nor the right desktop environment, but we can try to agree on good practices and standards, regardless of how many programming languages we use to implement them in the end.

Fortunately, some groups still remember the good old rites like those celebrated by the X Consortium; they first discuss solutions and agree on them, and only afterwards implement the code. Unfortunately, there are only a few such groups, and they are short on manpower. They are also often too quick in developing the prototype and only documenting the particular API; remember CORBA and IDL. Nevertheless, we may see more people turning to this more scientific approach to software development in the near future. It may help us find good practices and standards we can all agree on -- at least in theory. Maybe then we'll come back to the question of who is the winner, monolithic or modular development. Maybe then we'll ask: Who is the winner, .Net or .free?

Recent comments

17 Jan 2004 01:45 peidran

Err, were you going to tell us what sort of thing .Net is, and how it fits into the monolithic vs. modular debate? At least a little bit?

17 Jan 2004 11:30 bsavoie

Good Topic
It has always been about how much one person can comprehend. We have too much RAM and too little time. After patching with bash and perl, we have lots of duplicated subroutines located in our software. This all fits within a programming language spectrum. I like to program in Ada 95, but others don't. You can do a lot in a day with bash scripts. After all, the software is free, so what is there to be upset about?

Computers are a new subject; like a teenager, the field is full of juice and going in 100 directions at once. Give it a few more years, maybe another 30, and look what will happen.

In the beginning we had radio. As a 9-year-old kid, I built crystal sets until I could afford a 1T4 tube and a 45-volt battery. Wow, that was cool. I still had headphones, but it was so clear! In another 30 years we had stereo components and Walkmans. Now, 45 years later, we don't have 'radios' any more; we have other 'things' that all have radio parts, but we now call them cars and houses. Can you even buy a car without a radio? Things get stuck together and transformed from nouns into adverbs. They are not unique and separate any more; they become footnotes of something else. That something else is what is now 'cool'. That is the killer application.

The next killer application might be speech recognition and machine cognition. That way the house knows what you are in the mood for and it plays that kind of music. The first person to do that may have just written a perl script but within 20 years it will be done in hardware. Once you know where you want to go (software is the best playground for this activity) then you can build it in hardware (the best way to make money?).

It is fun to be alive! We get to see this stuff unfold..

Thanks for the article.. Bill Savoie

17 Jan 2004 13:51 garfycx

Re: .Net?

> Err, were you going to tell us what sort
> of thing .Net is, and how it fits into the
> monolithic vs. modular debate? At least a
> little bit?

Articles are always somewhat straight because they have to argue linearly. My personal intention is to motivate people to think about what the real motor behind software success is. Is it the development model/strategy? Can new strategies overcome shortcomings just like that? Or is it something deeper, something more spiritual? Stallman would talk about morality or the like.

.Net is only a consequence of the modern software world, but it has to be mentioned because it is one of the bigger things. I believe that we have enough great tools, but nothing like .Net. Still, I do not agree with .Net; it is a suite, not a spirit. Nevertheless, it has to be mentioned because it shows both the problem and one possible answer. It is not the only answer, though. I don't stick to it, but want to show that a big change is happening in software development at the moment, one which needs more than tools: it needs a spirit and clear strategies as well.

I'd like to motivate people to work more on the intellectual background of products than on the products themselves. Do we really have the right foundation for the future? Should we re-design things to overcome some inherent problems instead of just promoting development environments (which do it all for us?)? Actually, my article is more a plea for agreement than for .Net.

If you know this all and are already involved, don't worry, you just don't belong to the targeted audience ;)

17 Jan 2004 16:33 aldem

We have to agree? :)
I don't think so, because once we agree, there will be no freedom anymore -- no freedom of choice.

Agreements and standards tend to freeze development -- think about TCP/IP (and this applies not only to software development). Sometimes they improve things, but either way, the price is delayed evolution.

So it's better to leave everything as is (at least in software development).

17 Jan 2004 18:17 rakaur

Re: We have to agree? :)

> I don't think so. Because once we agree
> - there will be no freedom anymore -
> freedom of choice.

He who has choice, has torment.

18 Jan 2004 00:17 darkonc

Even Linus went (partly) modular
As much as Linus liked the monolithic approach, Linux is now full of modules (perhaps not quite to the point of a microkernel, but...).

I'm thinking that, like with most anything else, going too far in one extreme is often as bad as going off in the other direction. Linus originally decried the idea of modules, but then we learned that they definitely have their uses (would you happily go back to a non-modular kernel?). I think that Linux now strikes a very pleasant medium: the really core functionality in the kernel, and lots of modules for the 'interesting but non-critical' stuff.

18 Jan 2004 08:41 garfycx

Re: We have to agree? :)

> % I don't think so. Because once we
> agree
> % - there will be no freedom anymore -
> % freedom of choice.
> He who has choice, has torment.

That is the point. Again, a struggle about something completely unnecessary but very emotional takes place. You cannot build metropolises in a jungle without destroying it and installing a working, well-known infrastructure. On the other hand, you cannot breathe without preserving the jungle. This is what I expressed with .Net and .free as natural responses to this inherent conflict. We need both a common thread and freedom of choice, hand in hand. Therefore, we have to agree on what is the thread and what is the sandbox.



18 Jan 2004 11:54 perlchild

a slightly different perspective
I found the above text very interesting, but I did notice that it didn't explore one particular angle of this rather troubled question enough:

Why don't a lot of microkernels catch enough third-party interest, whereas monoliths that develop a second personality, as it were, with modules, do? Could it be that the modular monolith's architecture has to supply all the services of a complete working system from day one, while a lot of "modular-as-a-philosophy" projects usually go: "We do this much, and see how easy it is to add the foo or baz feature? Why don't you write it for us? We wrote this very generic interface, so any kind of module can join in!"

As opposed to: "If you write a module for L Modular Monolith, you have to follow this set of APIs, anything else and you're on your own when it comes to bugs." (talk about oversimplifying to illustrate a point)

There is also the problem that, in a true modular environment, if there is a conflict between two third-party modules' interfaces, how would you mediate them? Who would get to play referee? Those questions, and others, just might also have to do with the (to me, at least so far) inexplicable lack of success of Mach and other microkernels.

18 Jan 2004 23:38 koennecke


I think this modular versus monolithic debate is really about where the complexity goes. In a modular approach, the complexity goes into the communication between components. In a monolithic approach, it is handled in one large chunk.

IMHO, debugging a problem in a large executable is easier than debugging communication problems between coexisting modules, possibly residing in separate executables -- especially when timing issues are in the game. This may also be why the Hurd took so long to become operational.

According to current fashion, a well-designed application consists of lots and lots of small classes doing little things. IMHO, such a structure can be as hard to understand as the old 10,000-line Fortran main program, especially if the design documentation is missing (the usual case) and you have to figure out for yourself where things really get done.

Probably the silver bullet is flying right through the middle of this issue, and there is no right or wrong, just personal taste.

The next thought is: why do programs grow so big? Well, everybody knows this: some users want this special feature, others another one. If all of this gets implemented, the result is a monster like MS Word, of which the typical user only uses 10%.

I think it should be easier for a user to tailor an application to his needs. Attempts at this goal include the Unix toolchain approach, fourth-generation languages, and scripting languages. Apparently, this is not yet good enough for the average user. Perhaps this is the area where progress is needed.

Mark Koennecke

19 Jan 2004 09:08 garfycx

Re: Complexity
I think there are many answers to your comments. I'll just focus on one specific kernel fact. Monolithic software is hard to extend if it is not designed in a modular fashion. Modular development should have overcome this, but as you say, it only brought us a communication nightmare and a quite random choice of modules.

I think that this is what .Net is aiming at. It brings not only a new language but Passport, Hailstorm, etc. It has about 350 core libraries for whatever purpose. Nevertheless, this is huge, and thus complex as well, and only Microsoft controls the way it develops.

Free Source developers say that their market is the better alternative because you don't get stranded in those implications. That's not true either. If you are forced to install a nightmare of thousands of mini-products on your hard disk to get the software you actually wanted running, that's not a solution at all.

Distributors help, but they only do what I was writing about: they agree (internally) on a specific set of software and standards and a specific combination and strategy, for maintenance reasons.

Projects that develop environments, like GNOME, do the same when they establish and push a universal MIME database or the like.

There's always something we have to agree on if we want to make a free market and the modular approach (which the free market is, from a different perspective) successful. Otherwise, we can only go with monolithic products and let others serve us, because we can't handle the complexity of modular projects.

But there's another point. We have to discuss the modules and communication methods themselves. Think about D-Bus, which is a solution to the CORBA/Bonobo nightmare. Another example: you can stick AbiWord into Evolution for whatever reason, but you can't stick it into Sodipodi, though the text support in Sodipodi is really a mess and AbiWord would make Sodipodi a real alternative to Illustrator and the like. Overcoming this would not only mean developing a protocol like D-Bus; it also means asking ourselves whether the CORBA/COM strategy is the right way to go at all.

For example: shouldn't the canvas component be a self-managing server application that lends layers to apps, instead of being plugged into the apps? It could save 'working results' in a meta format. When the user decides that the work is done and wants a pixmap of it all, the canvas hands all the content, in a meta format, over to The Gimp. If the user wants a document, the canvas hands the stuff to AbiWord (...). This is quite a bit like a GStreamer for editors.
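The canvas-as-server idea above can be sketched in a few lines. This is a purely hypothetical illustration; names like `CanvasServer` and `lend_layer` are invented here and are not a real API:

```python
# Hypothetical sketch of a canvas that "lends" layers to client
# applications and later hands the assembled work over in a meta
# format. All names are invented for illustration only.

class Layer:
    def __init__(self, name):
        self.name = name
        self.content = []          # meta-format items, e.g. ("text", "hello")

    def add(self, kind, data):
        self.content.append((kind, data))

class CanvasServer:
    """Owns all layers; applications borrow them instead of embedding a canvas."""
    def __init__(self):
        self._layers = {}

    def lend_layer(self, app_name):
        layer = Layer(app_name)
        self._layers[app_name] = layer
        return layer

    def export(self):
        """Hand everything over in a meta format, e.g. to a pixmap or document tool."""
        return {name: list(layer.content) for name, layer in self._layers.items()}

canvas = CanvasServer()
drawing = canvas.lend_layer("sodipodi")
drawing.add("path", "M 0 0 L 10 10")
text = canvas.lend_layer("abiword")
text.add("text", "Hello, canvas")
print(canvas.export())
```

The design point is that the canvas, not the application, owns the working state, so any exporter can consume it later.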

I think, at least if the project is something central, a part of the Free Source infrastructure, we should not just go on developing, but discuss design principles and the like first. Then we should agree, and then we should develop. For that, we'd need more than Freshmeat, SourceForge, etc.; we'd need a 'Developer's Central'.


21 Jan 2004 05:49 cappicard

Re: .Net?
This is my take on this subject: monolithic in some cases is faster. On the other hand, the larger the resulting program/image becomes more cumbersome to maintain and slower (read: uses more resources).

Modularity has the communication problem, but is much easier to maintain on an individual basis. Related to the communication problem is integration (it's always an issue where many people work on separate sections of the code/project; but CVS/RCS helps in this case).

So, it's a balancing act. It also depends on the size of the project. For the smallest projects, there is little speed benefit to modularity (like the famous "Hello, World" program ;)

21 Jan 2004 06:10 cappicard

Re: .Net?

> On the other hand, the larger the resulting
> program/image becomes more cumbersome to
> maintain and slower (read: uses more
> resources).

Time to correct my own silly typos:

On the other hand, the larger the resulting program/image becomes, the more cumbersome it becomes to maintain. Also, the larger it is, it potentially becomes slower.

28 Jan 2004 03:15 andrekloss

This is one of those "fuzzy" things...
Well, there are lots of fully monolithic programs out there, and I am sure that until they reach a certain size, they will work just fine. Look at ftp or any small unix tool out there as a common example.

Also there are some amazing apps out there (gimp being one example) that use the modular approach to great benefit. Notice these are big applications. Not your average quick hack.

My rule of thumb is: If your program gets big, go modular. But not before it gets big. Flexibility (as obtained through modularity and loose coupling) comes not only at the price of execution speed, but also of greater overall complexity.

Oh, and I also did not understand what this has to do with .Net... it's just a framework. Not a new religion or something.

28 Jan 2004 09:14 garfycx

Re: This is one of those "fuzzy" things...

> Oh, and I also did not understand what
> this has to do with .Net... it's just a
> framework. Not a new religion or
> something.

Read the comments, maybe they help. I agree, .Net is not a religion but it tries to overcome the divide between many software worlds like between monolithic vs. modular software development. The point is that I wanted to make people aware of the real question behind development, but the comments show me that, again, only the debate I wanted to end has risen up again. Please read the end of my article (from the middle) again, thanks.

02 Feb 2004 06:14 pphaneuf

Related project
XPLC (/projects/xplc/) is highly related to this subject. It allows for modularity through strong interfaces, and is extremely lightweight (unlike CORBA, for example). It doesn't offer inter-process communication, only modularity and binary compatibility (use something like D-Bus for IPC).
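The "modularity through strong interfaces" idea XPLC stands for can be illustrated abstractly. This Python sketch is not XPLC's actual API (XPLC is a C++ component system); it only shows the general pattern of components that callers know solely through an agreed interface:

```python
from abc import ABC, abstractmethod

# Abstract illustration of interface-based modularity, the idea behind
# component systems like XPLC. The names here are invented for the example.

class ISpellChecker(ABC):
    @abstractmethod
    def check(self, word: str) -> bool: ...

class EnglishChecker(ISpellChecker):
    WORDS = {"modular", "monolithic", "kernel"}
    def check(self, word):
        return word.lower() in self.WORDS

registry = {}

def register(interface, factory):
    registry[interface] = factory

def resolve(interface):
    # The caller depends only on the interface, never on the implementation,
    # so implementations can be swapped without touching the caller.
    return registry[interface]()

register(ISpellChecker, EnglishChecker)
checker = resolve(ISpellChecker)
print(checker.check("kernel"))
```

Replacing `EnglishChecker` with another registered implementation changes behavior without any change to the code that calls `resolve`.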

24 Feb 2004 23:23 erratio

Gradual modularity - sorry if this is long
I think one thing that is overlooked in this conversation is the increasing modularity of monolithic software. As projects grow and related software is developed off of them, things like libraries get broken off, and the different pieces of software begin to optionally coexist with each other. This is of course slightly different from the deliberate intent of creating a module/framework structure, but inextricable from the topic. If an implementation of something becomes relatively standardized in its use, then its being modularized is to some extent inevitable, and its being fairly fully modularized is virtually so, considering the advantages in, at the very least, ease of use -- particularly when it may end up being deployed in a multi-tier setup.

As more and more new programs use more and more established modular code, their architecture will shift until the only things that are monolithic are things essentially unique to that program. Right now there are few good, established examples of thorough modularity, largely because computers are still, for the most part, in their infancy and have only recently acquired a strong foundation. However, if you take into account the growth of things like ODBC, .NET, ALSA, GTK, SSL, X, and all the others I'm missing, and look at how newer technologies and implementations become increasingly modular, if for no other reason than to allow other software which uses them to be updated more easily (and vice versa), then the flow is more noticeable. Writing a program which is for the most part monolithic may help get it off the ground (like OpenOffice... or the majority of programs), but as new variations on old features are added, or other software which it uses is updated and modified, a framework structure greatly reduces maintenance time and trouble.
And as for performance: with the speed of computers now, and the growth of that speed, I think the overhead of component communication is more than compensated for by the flexibility, upgradability, and, most directly, by the knowledge that the code in the module is written to be the best at what it does.

24 Feb 2004 23:37 erratio

One more quick thing
Wouldn't modular programs basically be an incarnation of OO development (whereas procedural could be analogous to monolithic)?

Hmm...I lost track of what date it was and thought this conversation was more recent

05 Apr 2004 10:40 einhverfr

Not so sure

> Wouldn't modular programs basically be
> an incarnation of OO development
> (whereas procedural could be analogous
> to monolithic)?

IMO, UNIX is MUCH more modular than Windows, despite the fact that Windows is more OO. I don't think that OO has anything to do with modularity. Furthermore, when your modularity is based on text streams and simple tools (as in UNIX) rather than on COM/object brokering (as in Windows), I think the system becomes more robust.
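The text-stream style of modularity described above can be imitated in a few lines. This is only a sketch of the idea behind shell pipelines like `grep | sort | uniq`: each "tool" consumes and produces plain text, so the pieces compose freely without any shared object model:

```python
# UNIX-style composition: small functions that each consume and produce
# plain lines of text, chained together like a shell pipeline.

def grep(pattern, lines):
    """Keep only lines containing the pattern."""
    return [l for l in lines if pattern in l]

def sort_lines(lines):
    """Sort lines lexicographically."""
    return sorted(lines)

def uniq(lines):
    """Drop adjacent duplicate lines (like uniq(1), assumes sorted input)."""
    out = []
    for l in lines:
        if not out or out[-1] != l:
            out.append(l)
    return out

text = ["kernel", "module", "kernel", "driver", "module"]
# Equivalent of: grep e | sort | uniq
result = uniq(sort_lines(grep("e", text)))
print(result)   # ['driver', 'kernel', 'module']
```

Because the only contract between the stages is "lines of text", any stage can be replaced or reordered, which is exactly the robustness the comment attributes to UNIX.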

03 Sep 2004 13:36 t0yswar

Monolithic core with Modular specifications
Looking at the most efficient projects these days, I feel that a monolithic core application answering the main functionality, with modular (plug-in), multi-language extensibility that allows advanced users to add the functionality they need, is a good mix for user-centric applications (i.e., Eclipse, Firefox, Glade/GTK/libglade... Linux is not user-centric, so...). It can also give the project maintainer the ability to integrate the most accepted functionality into the core. It is important to define the boundaries of the application well (Eclipse is getting too big trying to go outside the IDE world, and a fork would probably be a better option). What would be nice is to agree on inter-core communications at the user level (i.e., drag'n'drop), but also at the functionality level (i.e., CORBA, EJB, or XML/RPC) and at the data level (XML, RDB)... and while I'm at it, if we could better document for new developers how to separate the UI from the apps, portability to GNOME, KDE, the Web (XML/XSLT/JavaScript or Flash), or SWT would be good.

17 Oct 2004 22:37 mdnava

Who decides standards?
Well, it is true that there is some lack of agreement on programming languages and programming standards, and I think the most logical future for Open Source is that they will be agreed upon or followed at some point -- maybe not by the Open Source community itself, but by big software companies like Microsoft.

The point of many Open Source apps is to offer a free clone of something "not free", like Unix, MS Office, or Photoshop.

If Microsoft says ".NET" and Open Source follows, then we could have a starting point, and .NET seems eclectic enough to strike the balance between the monolithic and modular schemas.

Open Source needs some logical, centralized way of developing software to keep up.

29 Sep 2006 13:53 einhverfr

Re: Even Linus went (partly) modular

Just as a quick point: for general purposes, you are right. However, there are times when I go as far away from a modular approach as possible. For example, I prefer firewalls to be as non-modular as possible, because that makes them much harder to change, even in the event of a root compromise.

So horses for courses....

