I think now that this modular versus monolithic debate is really about where the complexity goes. In a modular approach the complexity goes into the communication between components. In a monolithic approach it is handled in one large chunk.
IMHO, debugging a problem in a large executable is easier than debugging communication problems between coexisting modules, possibly residing in separate executables. Especially when timing issues are in play. This may also be the reason why the Hurd took so long to become operational.
According to current fashion, a well-designed application consists of lots and lots of small classes doing little things. IMHO, such a structure can be as hard to understand as the old 10000-line Fortran main program, especially if the design documentation is missing (the usual case) and you have to figure out for yourself where things really get done.
Probably the silver bullet flies right down the middle on this issue, and there is no right or wrong, just personal taste.
The next thought is: why do programs grow so big? Well, everybody knows this: some users want this special feature, others another one. If all of this gets implemented, the result is a monster like MS-Word, of which the typical user only uses 10%.
I think it should be easier for a program user to tailor an application to his needs. Attempts at this goal include the Unix toolchain approach, fourth-generation languages and scripting languages. Apparently this is not yet good enough for the average user. Perhaps this is the area where progress is needed.
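The Unix toolchain approach mentioned above can be illustrated with a minimal sketch: each small tool does one thing, and the pipe composes them into a "tailored" application without writing a full program. The task here (a word-frequency count) is just a hypothetical example.

```shell
# Break text into one word per line (non-letters become newlines),
# normalize case, then count and rank the words.
printf 'the cat and the dog\n' \
  | tr -cs 'A-Za-z' '\n' \
  | tr 'A-Z' 'a-z' \
  | sort \
  | uniq -c \
  | sort -rn \
  | head -5
```

Each stage is a tiny, reusable component; swapping `head -5` for `head -20`, or inserting a `grep` filter, retailors the "application" on the spot.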