> Well, you could wind up getting a
> compiler that goes on a vain attempt at
> solving the halting problem if you tried
> to do this. What would your compiler do
> if it got f(-1), an expression that
> never halts? It wouldn't be able to
> figure out that it's doing infinite
> recursion (if you found an algorithm to
> determine this in the fullest
> generality, you would have solved the
> halting problem, and proved Alan Turing
> and Kurt Goedel wrong).
> Finding the fastest code to perform a
> certain task is clearly an undecidable
> problem (this is worse than NP-complete,
> as it can be mathematically shown that
> an algorithm simply doesn't exist). The
> best I think that can be accomplished is
> a few heuristics that conform to some
> simple facts we know about how to speed
> code up.
You are correct, but only in theory. In practice, NP-completeness and Turing's results have very little meaning (IMHO). One can usually come up with algorithms whose probability of failure can be made small enough.
For example, many pattern recognition (PR) problems are NP-complete and/or mathematically ill-conditioned (I think), but one can usually make the probability of failure small enough.
If one has more a priori, problem-specific information than is available in many PR problems, it is possible to have guaranteed error bounds.
For example, in one practical case: a c*10^-n probability of failure with O(n) computational requirements. Now take n = 1000. (c
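The poster's specific application isn't named (the message is cut off), but a standard illustration of this trade-off is a Monte Carlo algorithm such as the Miller-Rabin primality test: each extra round costs a constant amount of work, yet the failure probability shrinks exponentially (at most 4^-k after k rounds). A minimal sketch in Python:

```python
import random

def miller_rabin(n, rounds):
    """Probabilistic primality test (Miller-Rabin).

    If n is composite, each independent round wrongly reports
    "probably prime" with probability at most 1/4, so after
    `rounds` rounds the failure probability is at most 4**-rounds,
    while the number of rounds (and thus the extra work) grows
    only linearly.
    """
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)          # a^d mod n
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # witness found: definitely composite
    return True                   # prime with probability >= 1 - 4**-rounds
```

With, say, 20 rounds the chance of misclassifying a composite is below 10^-12, which is exactly the kind of "make the failure probability small enough" engineering the theory's worst-case results say nothing about.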