Two players ("red" and "green") take turns on a 6x6 board. In the beginning, the board is empty except for a red "1" in the upper left and a green "1" in the lower right. Taking a turn means setting one field of the board. Setting a field makes it assume the sum of the values of the 8 surrounding fields. But there is a catch: any surrounding field of the opponent's color has its value subtracted rather than added. If the resulting value is positive, the field gets your color, else your opponent's. The same goes for the global score, which is just the colored difference of the values of all fields on the board. If the score at the end is red, red wins. If it is green, green wins. If it is black (0), it is a draw.
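If it helps, here is a rough sketch of those rules in Python. The sign convention (red stored as positive values, green as negative), the function names, and the handling of a zero result are my own assumptions for illustration, not part of the original rules.

```python
SIZE = 6

def new_board():
    """Empty board with a red 1 in the upper left and a green 1 in the lower right."""
    board = [[0] * SIZE for _ in range(SIZE)]
    board[0][0] = 1                  # red "1" (red = positive, by assumption)
    board[SIZE - 1][SIZE - 1] = -1   # green "1" (green = negative, by assumption)
    return board

def set_field(board, row, col, player):
    """Set (row, col) for player (+1 = red, -1 = green).

    The field assumes the sum of its 8 neighbours, where neighbours of the
    opponent's color count negatively from the mover's point of view.
    The sign of the stored value then encodes which color the field ends up with.
    """
    total = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            r, c = row + dr, col + dc
            if 0 <= r < SIZE and 0 <= c < SIZE:
                # Own-colored neighbours add, opponent-colored ones subtract.
                total += player * board[r][c]
    # Positive total: the field becomes the mover's color; negative: the opponent's.
    # A total of 0 simply leaves the field at value 0 here (a simplification).
    board[row][col] = player * total
    return board[row][col]

def global_score(board):
    """Signed sum of all fields: positive = red leads, negative = green, 0 = draw."""
    return sum(sum(row) for row in board)
```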
This is one of those "fuzzy" things...
Well, there are lots of fully monolithic programs out there, and I am sure that until they reach a certain size, they will work just fine. Look at ftp or any small Unix tool as a common example.
Also, there are some amazing apps out there (GIMP being one example) that use the modular approach to great benefit. Notice that these are big applications, not your average quick hack.
My rule of thumb is: if your program gets big, go modular. But not before it gets big. Flexibility (as obtained through modularity and loose coupling) comes at the price not only of execution speed but also of greater overall complexity.
Oh, and I also did not understand what this has to do with .Net... it's just a framework. Not a new religion or something.