Your title, "True Internationalization," holds the key to the requirements.
Good and very relevant topic. (Stepping up on soapbox)
If it is true internationalization you are after, then the only good existing answer is UTF-8, which allows multiple languages in the same document without characters being context sensitive based on their position in the document. If you really require a 4-byte representation within your program, you can always convert to UCS-4 while you do your private magic.
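As a hedged sketch of that round trip (here in Python, where a `str` is effectively a sequence of code points and `ord()` yields the 32-bit value):

```python
# Round-trip a mixed-language UTF-8 byte string through fixed-width
# 32-bit code points (the UCS-4 view) and back to UTF-8.
utf8_bytes = "Grüße, мир, 世界".encode("utf-8")

# "Convert to UCS-4": one integer code point per character.
code_points = [ord(c) for c in utf8_bytes.decode("utf-8")]

# ...do your private magic on the fixed-width values...

# Convert back to UTF-8 for storage or interchange.
round_trip = "".join(chr(cp) for cp in code_points).encode("utf-8")
assert round_trip == utf8_bytes
```

The point is that UTF-8 can be the interchange and storage form while fixed-width processing, where genuinely needed, stays a local, reversible transformation.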
Please read up on all the existing choices; if you do, I suspect you will see the many advantages of UTF-8.
This also provides the advantage of being usable for multiple languages on machines that were designed for 8-bit ASCII characters, without even requiring Unicode conversion routines (as long as you only use ASCII and UTF-8). This is absolutely brilliant for embedded devices, where space is still an issue.
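That ASCII compatibility is easy to demonstrate: every ASCII byte means the same thing in UTF-8, and no multibyte sequence ever reuses a byte below 0x80. A small illustrative check:

```python
# ASCII text is byte-for-byte identical in UTF-8 -- no conversion needed.
ascii_text = "Hello, world!"
assert ascii_text.encode("ascii") == ascii_text.encode("utf-8")

# Multibyte UTF-8 sequences contain only bytes >= 0x80, so naive
# byte-oriented code (searching for '/', '\0', etc.) keeps working.
assert all(b >= 0x80 for b in "日本語".encode("utf-8"))
```

This is why byte-oriented tools written long before Unicode often handle UTF-8 text without modification.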
Unfortunately, 32-bit chars do require conversion libraries and will be context sensitive, because you cannot fit all the possible characters for all languages into a single 32-bit code space. Or perhaps you are proposing that some people's languages are not important enough to include?
(This requires special codes to switch to a new code space, and the resulting context problems are a much more severe programming problem than varying byte lengths for characters.) Also, you can write your open (or closed) source programs in UTF-8 today, with more than one language used in the source, and it works fine (even on my ancient systems).
You are exactly right about needing full international support in computers today. From what I have seen, the people doing real work in this area go to UTF-8. You can probably tell that it is my choice as well.
(A question for Unicode wizards: why is it common practice to convert UTF-8 to UCS-4 for storage? To me, UTF-8 seems ideal for both storage and use within programs.)