Modularity


In today's editorial, David Symonds shares his views on what's good and what's bad about modularity, and suggests that more tools should take advantage of modular techniques.

The Present

Modularity is often referred to as a good thing. So what's so good about it? And is it always good? Could modularity ever be bad?

The first thing most people ask about anything is: "What's in it for me?" In the software world, there are generally two categories of people: developers and users. For users, the convenience of being able to swap and rearrange configurations is great; it also removes a fair bit of dependence on the supplier. For developers, modularity can ease the production of software by breaking a program into distinct, yet interconnected, components.


The ubiquity of the Apache HTTP server is a testament not only to the effectiveness of Open Source development, but also to the benefits of modularity. Perhaps the two most widely-used modules for Apache are mod_perl and mod_php. They're both used for dynamically generating content, and both were developed separately from the Apache Project. The obvious implication is that without Apache having its clearly defined API, mod_perl and mod_php would not have succeeded anywhere near as well as they have.

Kernel Modules

Another example of the benefits of modularity comes in the form of kernel modules (for example, in Linux). I, for one, use kernel modules frequently. At the time of writing, I have 28 modules, and 4 of them are currently loaded (for my sound card). When I connect to the Internet, 3 more are loaded (for PPP); after disconnection, they're unloaded. When I need to do some printing, 3 modules are loaded (for parallel port access), and unloaded automatically after a period of time. When I mount an Iomega Zip disk, the ide-floppy.o module is loaded; after unmounting the disk, the module is unloaded.

Since my hard disk is formatted using the ext2 filesystem, the ext2.o module is compiled into the kernel. I periodically have to read and write disks for exchange with Windows 95 machines, so I have the vfat.o and fat.o modules unloaded by default.

For me as a user, all this is an incredible convenience. Quickly checking, I find that the 4 sound modules eat up 94.5K of memory. If I load all 28 modules, this figure jumps to a whopping 431.8K. Although that might not sound like much, it's huge when compared to my kernel of 408K. If all those modules were compiled in, not only would it take longer to start up, but more RAM would be taken unnecessarily by the kernel, and it would be very difficult to reconfigure some components on-the-fly. At present, if I wanted to, say, change which IRQ the sound card uses, I simply edit the configuration file (/etc/isapnp.conf), and restart the sound subsystem (/etc/rc.d/init.d/sound restart). Modularity makes reconfiguring your sound card as simple as reconfiguring Apache!
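Those memory figures come straight from lsmod, which reports each loaded module's size in bytes in its second column; totalling them is a one-line awk job. A sketch (the module names and sizes below are made up for illustration, not my actual lsmod output):

```shell
# Sum the "Size" column of lsmod-style output with awk.
# The module names and sizes here are illustrative, not real measurements.
lsmod_sample='sb 33888
uart401 5968
sound 57744
soundcore 2596'
echo "$lsmod_sample" | awk '{ total += $2 } END { printf "%.1fK\n", total / 1024 }'
```

Against real lsmod output, skip the header line with `awk 'NR > 1 { total += $2 } ...'`.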


One of the best things about OpenGL is the modularity of its pipeline interface. When nVidia came out with its GeForce line of chips with geometry transformation and lighting in hardware, every OpenGL game instantly got a speed boost without even needing to be recompiled. For the users to feel the benefit, they simply bought and installed the card, installed the drivers, and ran their OpenGL games. That's the beauty of modularity -- when well designed, it makes upgrades a matter of Plug and Play.

The Dark Side of Modularity

Unfortunately, modularity isn't universally applicable. There are several situations in which modularity can make things worse, either by slowing things down, complicating matters, or opening security holes. Since most of these situations have technically obscure justifications, I'll simply list two below:

  • Microkernels -- they compromise speed for the sake of stability and security (though this is not necessarily a bad thing).
  • Some of the original SGI machines -- they were so modular that a thief could simply open the side of the case and pull various components out (even while the system was still running!).

The Future

Ok, so some things are modular. Where do we go from here?

If someone asked me this question, I'd ask him to think about it himself; there are so many things that aren't modular. Unfortunately, things are going to stay this way while backward compatibility is deemed important. This point is one of the primary reasons why BeOS is so efficient -- the designers threw away the current conventions and designed the system from the ground up. The result is exactly what it was designed to be: a fantastic multimedia operating system.

Throwing the bath-water out with the baby

So what needs to be thrown away? At the risk of becoming a smoking pile of debris (no flames please), the first thing would be the Unix commandline pipe structure. Don't get me wrong; I live and breathe the commandline, and have certain misgivings about X. But the pipeline is too specific. Let me list a few of the shortcomings I see:

  • Limited to a single pipeline.
  • Only one direction of flow.
  • Not networkable.

Yes, I'm whining. But this is what I want to be able to do:

[ds@phoenix ds]$ tar c myfiles >[1,2]
			|1 gzip -9 > backup.tgz
			|2 bzip2 -9 >{}

In my fantasy world (where, of course, I'm using Linux 4.2.18), this would tar the 'myfiles' directory and simultaneously gzip it to a backup tarball (locally) while bzip2ing it and sending it to a remote machine over the network. Not only would that be neat, it would be useful for many applications.
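As it happens, the local half of this can already be approximated in bash with tee and process substitution. A sketch (it assumes bash rather than a plain POSIX shell, and the ssh leg with its placeholder hostname is left commented out so the rest runs locally):

```shell
# Split one tar stream into two compressors at once (bash-specific syntax).
tar cf - myfiles \
  | tee >(gzip -9 > backup.tgz) \
  | bzip2 -9 > backup.tar.bz2
# For the networked leg, pipe the bzip2 output over ssh instead
# ("remotehost" is a placeholder):
#   ... | bzip2 -9 | ssh remotehost 'cat > backup.tar.bz2'
```

tee copies its stdin both to the process substitution (where gzip runs) and to its stdout (where bzip2 reads), so the archive is only generated once.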


My favourite characters in movies have always been the snipers (assuming, of course, the movie in question actually has a sniper). They'd trek to their location (perhaps an abandoned building with a tower), open their weapons cases, and put their sniper rifles together from individual components. They'd grab the main firing chamber, screw the silencer onto the front, clip on the scope, lock in an ammo magazine, and shoot away.

Like that sniper, I prefer it when my stuff fits together. I like being able to record MP3s to cassette simply by connecting the speaker-out of my soundcard to the microphone jack of a tape deck, hitting the deck's record button, and invoking mpg123. So why do I keep seeing KDE- or GNOME-specific applications that could have quite easily been generalized to use, for example, the Qt toolkit? Why should a cardgame care whether it's running under KDE, GNOME, or plain X? Why does a clock program require KDE?

Why are so many people so unbelievably misguided? As Martin Luther King put it, "We have guided missiles, and misguided men". KDE is a desktop environment; it's for application frontends, not a basis for applications themselves.

Intel, AMD, Cyrix, Intel, Transmeta, Intel (oh, did I mention Intel?)

Just three years ago, the standard connection between x86 processors and the motherboard was the ubiquitous "Socket 7" ZIF (Zero Insertion Force) socket. Then Intel brought out the Pentium II (May 1997), with a radically different CPU-to-motherboard connector: "Slot 1". From there, it's all gone downhill, with "Socket 370" (Intel; Celeron PPGA, January 1999), "Slot 2" (Intel; Pentium II Xeon, June 1998) and "Slot A" (AMD; Athlon, August 1999).

Unfortunately, I don't think things are going to get better anytime soon, especially with "The CPU Formerly Known As Merced" (Itanium) just around the corner, and the Pentium IV coming even sooner (Late 2000). Can anything be done? The answer to that question would be a whole editorial itself.


David Symonds is a first-year Computer Science student at the University of Sydney, Australia. His current career ambition is to find a job.

T-Shirts and Fame!

We're eager to find people interested in writing editorials on software-related topics. We're flexible on length, style, and topic, so long as you know what you're talking about and back up your opinions with facts. Anyone who writes an editorial gets a freshmeat t-shirt from ThinkGeek in addition to 15 minutes of fame. If you think you'd like to try your hand at it, let us know what you'd like to write about.

Recent comments

08 Jul 2000 08:24 bnej

Some comments
> there are generally two categories of people: developers and users

I think this is a bit too simple. For starters you have dedicated users and casual users, and those go into a multitude of categories. The benefits of modularity you discussed here are only really valid for dedicated users. A casual user just wants to get their work done and turn the damn computer off. The "convenience of being able to swap and rearrange configurations" may be great for dedicated users, but you'll find most casual users will find it frustrating and confusing.

"Developers" is too much of a generalisation too. In that category you then have to put programmers, hackers (yes, they are different things), admins, systems analysts, and coders (once again, not the same thing as a programmer), and all of these people have different needs.

Microkernels: There is nothing about a microkernel (in itself) which makes it more stable or more secure. If anything, a monolithic kernel is more likely to be stable and secure than a microkernel (but this is not always the case in real life).
Windows NT uses a microkernel (Stallings, Operating Systems: Internals and Design Principles, 1998), if you don't believe me.

The comment on SGI machines isn't really valid either. If you want to prevent theft, you need to invest in big metal boxes to put computers in, and use security cables to secure monitors etc. I can pull any number of components out of a standard PC in about a minute with a screwdriver. Modular hardware saves people who work with it a lot of time, and it is almost always a good thing.

Backward compatibility has nothing to do with modularity.

The comments you made about unix pipes are true in the shell, but this is a limitation of the shell, not the system. The shell is limited in this way by a program having only stdin, stdout, and stderr as standard input and output streams; and how do you represent more output streams on a terminal? (Answer: You don't - you start using files and shell scripts.) A program could send data out through a dozen IPC pipes and receive data through a dozen more. (This is nitpicking, sorry)
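To illustrate: the shell can already open descriptors beyond the standard three and route them to files, so a program isn't limited to a single output stream. A minimal sketch (the filename is made up):

```shell
# A process can write to file descriptors beyond stdout (1) and stderr (2);
# here the shell's 3> redirection opens fd 3 on a file for the child process.
sh -c 'echo to-stdout; echo to-fd-3 >&3' 3> stream3.txt
cat stream3.txt
```

The first echo reaches the terminal as normal; the second lands in stream3.txt via fd 3.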

Shell command pipes not networkable? Try netcat, and I'm sure you could rig up a program to FTP a file up to a remote machine from stdin.

Why does a clock program require KDE? Because it's linked with KDE libraries. If you don't like it, use XClock. Or daliclock. Or Oclock. The options are there. The card game doesn't have to care. Most don't.

I felt that in this article the author tried to cover too much, and ended up covering too little. Sorry if I offended said author at all, but this issue needs better treatment than this.


08 Jul 2000 11:48 dav1d

Bad examples ;p
> [ds@phoenix ds]$ tar c myfiles >[1,2]
> |1 gzip -9 > backup.tgz
> |2 bzip2 -9 >{}

Multi pipes can be done with FIFOs (named pipe).

$ man mkfifo
$ man tee
$ man ssh || man nc # netcat

That would look like:

$ mkfifo a
$ gzip -9 < a > backup.tar.gz &
$ if which ssh &>/dev/null
$ then
$ tar cf - myfiles | tee a | bzip2 -9 | ssh login@ 'cat > yourfile.tar.bz2' # As Egil said...
$ else # listen on the port with netcat on the other machine
$ tar cf - myfiles | tee a | bzip2 -9 | nc 12345 # (the port)
$ fi
$ rm a

> Microkernels -- they compromise speed for the sake of stability
> and security (though this is not necessarily a bad thing).

I don't think the Microkernel approach is responsible for those problems.

> Some of the original SGI machines -- they were so modular that a
> thief could simply open the side of the case and pull various
> components out (even while the system was still running!).

It's not the constructor's job to ensure that hardware isn't stolen ;-)

08 Jul 2000 12:50 dwelch

> Microkernels - they compromise speed for the sake of
> security and stability
Making tradeoffs between speed and security/stability is
pretty much the basis of operating systems research. It's
rather unfortunate that dead ends like Mach have turned
people off microkernels; systems like L4 demonstrate that
speed need not be sacrificed significantly to get the
benefits of application-specific resource policies.

08 Jul 2000 14:14 samial

Call me misguided, but...
> why do I keep seeing KDE- or GNOME-specific applications that could
> have quite easily been generalized to use, for example, the
> Qt toolkit. Why should a cardgame care whether it's running under
> KDE, GNOME, or plain X? Why does a clock program require
> KDE?

I don't use kde, so my comments will be about gtk and gnome. I imagine the same argument holds between qt and kde.

One big reason to write a gnome specific app instead of a gtk only one is to take advantage of the "other" pieces gnome adds. A standard and easy way to handle config files, for example.

Also, to a certain extent gnome is about providing consistency. For example, with gtk the developer can choose whether menus are tear-off or not, or he can choose to make them configurable within that one application. With gnome, there is a standard location to configure this for all gnome applications, which is really nice as an end user with a strong bias against the darn things, and as a developer who doesn't have to do extra work to avoid inflicting his strong bias on his users.

And finally, I don't believe that there is a card game that cares whether or not it's run under gnome (unless someone's written one to run in the panel :) ). It'd care whether or not it's run on a system with gnome libraries installed, but that's about it. Change the statement to "... running under gtk, qt, or raw Xlib" to understand my argument.

So, I don't think applications that require gnome or kde do so gratuitously, and (a big pardon me to those with limited diskspace,
who probably accept having to pick and choose apps anyway) I don't see a big problem with running KDE apps under gnome. They look just as out of place and work just as well as QT apps running under gnome.

I thank you for your time.

08 Jul 2000 14:51 priyadiin

bad idea of 'modularity'
Your example requires the shell to be able to handle networking by itself, and that isn't exactly modularity...
A shell should do its own job well - that is, allow the user to interact with the computer; it should not handle things like networking by itself...

One strength of modularity is choice: for example, you are allowed to change your graphics board to another brand, even if it is not manufactured by the computer vendor. Things like sending data over the network are better handled by separate programs like ssh or netcat. Your example doesn't even say how the data should be sent over the network: is it using IP? If so, is it using TCP or UDP? On what port, with what protocol? etc... The shell simply cannot support all possible combinations of networking protocols... If it could, it would be a HUGE shell, and you don't want that...

As for your comments about software depending on various libraries or environments: those programs simply need them. The sound card inside your computer won't do anything useful without the motherboard, or the speakers, or the power; the list goes on...

08 Jul 2000 15:17 chaoz

This would do the trick with any shell:

tar c myfiles | gzip -cf9 | tee backup.tar.gz | gunzip -c | bzip2 -9 - | ssh 'cat > backup.tar.bz2'

But admittedly there is some overhead involved ... :-)
There are tricks with bash and probably other shells that
will let you avoid some of the steps above, as has already
been shown in another comment.

Well, I see your point though. I don't know if it's really
worth adding those features to shell pipes, but if you
really think they would be useful, then why not add them
to your favourite shell? Or perhaps just write a
helper program that does it.

The beauty of Unix and its commandline interface is really
how one can accomplish complex things by combining several
small and simple standard programs and the builtin
features of the shell. And if something is missing, you
can add that something yourself (if you can code).

08 Jul 2000 23:33 xoxus

Author's Response
I thought I'd better respond to some of these comments...

bnej: True, there are different types of users, but it's entirely possible to set things up so that the "casual user" can just power up and go, while allowing the "dedicated user" to flick a switch and get all the power. If the preset configuration is adequate for the casual user, they won't get frustrated and confused (cf. Corel Linux); however, the dedicated user can still jump in and configure things relatively easily to their taste. The same goes for developers.

Microkernels: AFAIK, one of the selling points of microkernels has been the supposed stability due to the protection-ring structure preventing, say, floppy drivers from crashing the core kernel. Also, WinNT may say it uses a microkernel, but it doesn't. The whole kernel (and drivers) occupy the same address space, and can walk over each other quite easily.

The SGI box: That's exactly what they did; attached a big metal bar across the access plate. Sorry, probably a bad example.

Backward compatibility impedes the ability to completely redesign things, thus making change harder, hence making modularity harder to implement.

Networking pipes: The point I was trying to make was the difference between functional and procedural programming languages.

Linking to KDE: But why does the clock program have to link to KDE at all?

samial: There are so many specialised programs out there that don't need the other pieces gnome adds. Clock programs barely need any configuration.

Priyadi: I'm not saying that the shell should support networking by itself; it could quite easily be configured to launch a given utility to do it for it. Also, good modularity would mean that I wouldn't have to care about which protocol to use, nor would the shell. The shell should simply hand control over to a program that can decide which protocol/port is the best to use (after considering speed, network load, security, etc).

Okay, I admit that the networkable-pipe example was probably a bad choice. I should have made it more clear that I know it's currently possible; it's just not convenient or simple.

09 Jul 2000 00:03 neilh

DLL Hell is a good bad example
For a good example of problems with modularity look at Windows DLL Hell. Many applications depend on shared System (or semi-System) DLLs. They run fine with the versions of these DLLs that they are tested with but then installing another application replaces the shared DLLs with newer (and sometimes older!) versions. The new version has a new bug or requires stricter compliance with an interface contract and so blows up.

09 Jul 2000 03:38 paulmcgarr

Trivial Program examples
I find the examples of the essentially trivial programs (ie a clock) being linked against Gnome or KDE are silly. Such things are almost certainly written by an author who wanted learn KDE/Gnome programming, not someone who has spotted an otherwise unnoticed hole in the clock application market. As such they are hand educational tools both for the people who right them and people who follow in their footsteps.

All that and a free clock! Sounds like a good deal to me!

09 Jul 2000 03:44 paulmcgarr

I'm sure those spelling mistakes weren't in the preview.....
hand->handy right->write.....

09 Jul 2000 10:57 mweers

> Why does a clock program require KDE?

That the author wrote it to learn KDE was most probably the reason in this case. Another possible reason could be that the programmer wanted to use the look-and-feel of Qt/KDE.

KDE (and also GNOME) is more than just a widget set among many others.
Our 'problems' with them result from neither KDE nor GNOME being the standard desktop environment, and surely from several technical reasons as well. KDE was intended to be the desktop environment, so that there would be no reason against using it in your application. Look at MS Windows: if programming for that system, would you mind using the Win32 GUI? Of course, with Linux, reality is different.

KDE and GNOME should join efforts to provide at least some sort of a common configuration (Start menu, MIME types, maybe even GUI appearance).

And I do not think only frontends should use KDE. Personally, the thing I always disliked about KDE was the absence of "real applications".

