RE: No invalid data in testcases?
the invalid inputs i was referring to are not cases like the example given. error handling can and should be tested with a random tester. but in some cases a spec explicitly states that a given input produces undefined results. in that case there is no way to test that input, except to detect it and allow anything to happen, which isn't useful.
another case where this occurs is when an input allowed by the spec triggers a bug, but the testcase is so obscure that fixing it is decided to be a low priority (this DOES happen in business). once that decision is made, you want to eliminate the offending input so you can keep testing without generating a lot of false positives.
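for example, a rough python sketch of re-rolling a known-bad but spec-legal input so it never reaches the test run (the skip set and value range here are made up):

```python
import random

# hypothetical known-bad input: the spec-legal value 0 triggers a
# low-priority bug, so we regenerate rather than report it as a failure
KNOWN_BAD = {0}

def generate_operand(rng):
    """draw a random operand, re-rolling anything on the skip list."""
    while True:
        value = rng.randint(-1000, 1000)
        if value not in KNOWN_BAD:
            return value

rng = random.Random(42)
cases = [generate_operand(rng) for _ in range(100)]
assert 0 not in cases  # the offending input never appears
```

the re-roll loop matters: simply clamping or mapping bad values onto good ones would skew the input distribution, which is the careless kind of elimination that costs you coverage.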
I have found automated randomized testing to be very useful where it can be applied, both for finding bugs and for quickly estimating the quality of a build.
pros: much better coverage than human-made tests in less time than using a coverage tool; MTBF (mean time between failures) gives a quick estimate of quality; you can run far more testcases than if each one were hand-crafted.
cons: writing an effective randomized tester is a difficult skill to teach, and random testcases are sometimes difficult to understand, reduce and debug.
what makes writing good randomized testers difficult is that you want to automatically generate testcases that cover the entire valid input space, but generate no invalid inputs. the process of eliminating invalid inputs, if not done carefully, can reduce your test coverage and cause you to miss an important bug.
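as an illustration (the input format is invented for the example), it's usually better to construct valid inputs directly than to filter random junk — and note how even a small shortcut in the constructor can silently shrink coverage:

```python
import random

# suppose the program under test accepts dates as "YYYY-MM-DD" strings
# and everything else is invalid input. building valid dates by
# construction covers the valid space; rejecting random strings would
# almost never produce one.
DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def random_valid_date(rng):
    year = rng.randint(1900, 2100)
    month = rng.randint(1, 12)
    # capping february at 28 guarantees validity, but it also silently
    # excludes feb 29 of leap years -- exactly the kind of coverage
    # loss described above
    day = rng.randint(1, DAYS_IN_MONTH[month - 1])
    return f"{year:04d}-{month:02d}-{day:02d}"
```
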
the other difficult thing with random testing is knowing the correct program behavior for a particular random input sequence. crashes are obviously bugs, but beyond that you need something that can identify an invalid output. that's no problem when checking an answer is easier than computing it in the first place, but what about when checking the output is just as hard as generating it? in those cases i have used techniques like comparing the output of two different programs written to the same spec, two different builds of the same program, the same program on two testcases that should produce the same output, or a less efficient but easier-to-write program following the same spec.
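a minimal python sketch of that last technique — checking an optimized routine against a deliberately naive one written to the same spec (both functions here are stand-ins):

```python
import random

def fast_sort(values):
    # stand-in for the optimized implementation under test
    return sorted(values)

def reference_sort(values):
    # deliberately naive insertion sort: slow, but easy to verify
    # against the spec by inspection
    out = []
    for v in values:
        i = 0
        while i < len(out) and out[i] <= v:
            i += 1
        out.insert(i, v)
    return out

rng = random.Random(1)
for _ in range(500):
    case = [rng.randint(-50, 50) for _ in range(rng.randint(0, 30))]
    # any disagreement flags an invalid output without needing to
    # know the "right answer" ahead of time
    assert fast_sort(case) == reference_sort(case), case
```
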
btw, i apply random testing below the GUI level. my gut feeling is that it would be much more difficult to apply to a GUI, because testcases need to be randomized around some meaningful navigation sequence through the GUI. just randomly clicking buttons isn't likely to get you good coverage.
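one rough way to sketch that idea: drive the walk from a model of which actions make sense on each screen (screen and action names here are invented), so the random sequence stays on meaningful paths instead of blind clicks:

```python
import random

# a toy navigation model: from each screen, only the listed actions
# are meaningful, and each maps to the screen it leads to
MODEL = {
    "login":    {"submit": "home", "cancel": "login"},
    "home":     {"open_settings": "settings", "logout": "login"},
    "settings": {"save": "home", "back": "home"},
}

def random_walk(rng, steps=20):
    """randomize over legal navigation sequences, not raw clicks."""
    screen, trace = "login", []
    for _ in range(steps):
        action = rng.choice(sorted(MODEL[screen]))
        trace.append((screen, action))
        screen = MODEL[screen][action]
    return trace
```
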