Hi!
UTF-8 can use up to 4 bytes per character, and it supports almost all
languages, so what is the reason to use another Unicode encoding than
UTF-8?
//Tony
UTF-8 supports the complete Unicode character set, so it is a fine
choice for many applications. It can be used for nearly all of the
world's written languages, and it is a compact representation for
Latin-script texts (like English), which are very common.
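As a rough illustration (a Python sketch with an example sentence of my
own), here is how much space plain ASCII text takes in UTF-8 versus
UTF-16:

    # Latin-script text: 1 byte per character in UTF-8, 2 bytes in UTF-16.
    english = "The quick brown fox jumps over the lazy dog."

    utf8_bytes = english.encode("utf-8")       # 44 bytes
    utf16_bytes = english.encode("utf-16-le")  # 88 bytes (no BOM with the -le variant)

    print(len(utf8_bytes), len(utf16_bytes))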
Except for interfacing with legacy applications, there is no good
reason to use a non-Unicode character set.
However, there are good reasons for using a Unicode character encoding
other than UTF-8.
Many platforms use UTF-16 internally (Windows NT, XP, Vista, 7; the .NET
Framework; C#), so by sticking with that you can avoid conversions.
Many languages (especially Asian languages) have a more compact
representation in UTF-16 than in UTF-8. UTF-16 will be simpler to
process for many texts, since the characters in the Basic Multilingual
Plane (plane 0, which encodes the vast majority of the characters used
by living languages) are always represented by exactly 2 bytes in
UTF-16. (Characters in the higher planes are represented in 4 bytes in
UTF-16, but these characters are far less common.)
For these reasons, UTF-16 can also be an excellent choice of encoding
scheme.
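Here is a small sketch of both points (the sample strings are just ones
I picked): BMP text that is more compact in UTF-16, and a higher-plane
character that needs a surrogate pair:

    # BMP characters: this Japanese text is 3 bytes each in UTF-8, 2 bytes each in UTF-16.
    cjk = "日本語のテキスト"                   # 8 characters, all in the BMP
    print(len(cjk.encode("utf-8")))            # 24 bytes
    print(len(cjk.encode("utf-16-le")))        # 16 bytes

    # Characters above the BMP take a surrogate pair (4 bytes) in UTF-16.
    emoji = "\U0001F600"                       # U+1F600, plane 1
    print(len(emoji.encode("utf-16-le")))      # 4 bytes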
There are few applications where UTF-32 is the best choice, and
probably all of them are for internal processing only. I can't imagine
a scenario in which UTF-32 would be the best choice for storing or
transmitting text.
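For completeness, a quick sketch of what UTF-32 buys you internally
(fixed-width code points) and what it costs in size:

    # UTF-32 is fixed-width: every code point takes exactly 4 bytes,
    # which makes indexing by code point trivial but wastes space for ASCII-heavy text.
    text = "Hello, \U0001F30D"                     # 8 code points, one above the BMP
    utf32 = text.encode("utf-32-le")
    assert len(utf32) == 4 * len(text)             # always 4 bytes per code point
    print(len(text.encode("utf-8")), len(utf32))   # 11 vs 32 bytes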