International characters

Thread starter: jyazelz

I recently started using a laser printer. I quickly discovered that in
some cases, international characters (vowels with umlauts, vowels with
accents, etc.) required different binary codes when entering them in an
ordinary text file than when entering them in an RTF file.

If I take a text file containing these symbols entered by an ordinary text
editor and insert them into an RTF file, they end up as garbage.

I'm not sure whether that is just a requirement of the program that I'm
using (Atlantis-Nova) or some other reason.

If somebody would explain why there is a difference in some cases, I would
appreciate it very much.

Thanks.

Jack
 

Hi,
I can sympathize, having struggled to work with Japanese characters
using LaTeX, text files, and making efforts to port files from program
to program... yuck!

First, there appear to be many different versions of RTF, which may make
it a bit tricky to give you a solution right now:
http://en.wikipedia.org/wiki/Rich_Text_Format

Here is the relevant character-encoding passage from the Wikipedia page,
which I hope can help you (depending, maybe, on which RTF version the
program you are reading it with can interpret):

==================
RTF is an 8-bit format.[26] That would limit it to ASCII,[26] but RTF can encode characters beyond ASCII by escape sequences. The character escapes are of two types: code page escapes and, starting with RTF 1.5, Unicode escapes. In a code page escape, two hexadecimal digits following a backslash and typewriter apostrophe are used for denoting a character taken from a Windows code page. For example, if the code page is set to Windows-1256, the sequence \'c8 will encode the Arabic letter bāʼ (ب).

For a Unicode escape the control word \u is used, followed by a 16-bit signed decimal integer giving the Unicode code point number. For the benefit of programs without Unicode support, this must be followed by the nearest representation of this character in the specified code page. For example, \u1576? would give the Arabic letter beth, specifying that older programs which do not have Unicode support should render it as a question mark instead.

The control word \uc0 can be used to indicate that subsequent Unicode escape sequences within the current group do not specify a substitution character.

Until RTF specification version 1.5 release in 1997, RTF has only handled 7-bit characters directly and 8-bit characters encoded as hexadecimal (using \'xx). RTF control words (since RTF 1.5) generally accept signed 16-bit numbers as arguments. Unicode values greater than 32767 must be expressed as negative numbers.[13] If a Unicode character is outside BMP, it cannot be expressed in RTF.[27] Support for Unicode was made due to text handling changes in Microsoft Word – Microsoft Word 97 is a partially Unicode-enabled application and it handles text using the 16-bit Unicode character encoding scheme.[13] Microsoft Word 2000 and later versions are Unicode-enabled applications that handle text using the 16-bit Unicode character encoding scheme.[3]

RTF files are usually 7-bit ASCII plain text. RTF consists of control words, control symbols, and groups. RTF files can be easily transmitted between PC based operating systems because they are encoded as a text file with 7-bit graphic ASCII characters. Converters that communicate with Microsoft Word for MS Windows or Macintosh should expect data transfer as 8-bit characters and binary data can contain any 8-bit values.[15]
==================
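
To make those two escape styles concrete, here is a small Python sketch
(my own illustration, not taken from the spec or from any particular RTF
writer) that emits a code-page escape where the document code page can
represent the character and a Unicode escape otherwise:

# Assumes Windows-1252 as the document's code page.
def rtf_escape(text, codepage="cp1252"):
    out = []
    for ch in text:
        cp = ord(ch)
        if cp < 128:
            out.append(ch)                      # plain 7-bit ASCII
        elif cp > 0xFFFF:
            raise ValueError("outside the BMP: not expressible in RTF 1.x")
        else:
            try:
                byte = ch.encode(codepage)[0]   # code-page escape: \'xx
                out.append("\\'%02x" % byte)
            except UnicodeEncodeError:
                # Unicode escape: \uN? where N is a *signed* 16-bit decimal,
                # so code points above 32767 wrap around to negative numbers
                n = cp - 65536 if cp > 32767 else cp
                out.append("\\u%d?" % n)        # '?' is the fallback character
    return "".join(out)

print(rtf_escape("über ¿qué?"))   # \'fc, \'bf, \'e9 - Windows-1252 escapes
print(rtf_escape("ب"))            # \u1576? - matches the example above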


Spec of RTF, Version 1.5:
http://www.biblioscape.com/rtf15_spec.htm

Here is something about how Unicode characters can be encoded into RTF
(from version 1.6 maybe?):
http://latex2rtf.sourceforge.net/rtfspec_6.html

More on RTF conversion for current version 1.9:
http://www.codeproject.com/KB/recipes/RtfConverter.aspx

Hope that helps,
 

Thanks very much for the info. I'll check into it.

Jack
 
I recently started using a laser printer. I quickly discovered that in
some cases, international characters (vowels with umlauts, vowels with
accents, etc.) required different binary codes when entering them in an
ordinary text file than when entering them in an RTF file.

RTF is leading you down a false path here - this is a charset issue.
Most printing from Windows goes through the GDI subsystem and outputs
a file in the appropriate page description language for the printer.
That will set any state it needs to print the document correctly,
regardless of the default settings. In contrast, it sounds like
the problem app is outputting raw text - no markup - so it is
dependent on the printer using the correct character encoding by
default. You'll need to a) determine what character set it is
using and b) determine how to set your printer to use it.

The problem reading text in a Windows editor is likely a different
manifestation of the same issue. Windows by default is probably
using Windows-1252 and your app is using something else. Is it an
old DOS app? If so, I'd hazard a guess it is code page 437. You
can try a different text editor (e.g. Notepad++) which allows
switching between character encodings.
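
To see what that looks like in practice, here is a quick Python
illustration (a sketch; the three bytes are just examples): the very same
bytes are Spanish punctuation under code page 437 but something else
entirely under Windows-1252.

# In DOS code page 437 these bytes are u-umlaut, an inverted question
# mark and an inverted exclamation mark.
data = bytes([0x81, 0xA8, 0xAD])
print(data.decode("cp437"))                     # prints: ü¿¡

# In Windows-1252, 0x81 is undefined, 0xA8 is a bare diaeresis and
# 0xAD is a (normally invisible) soft hyphen.
print(data.decode("cp1252", errors="replace"))  # prints: �¨ plus a soft hyphen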

Ultimately though there are a lot of different permutations here.
It sounds to me like you have a lot of reading up ahead of you.
 
Hi there


Andrew said:
RTF is leading you down a false path here - this is a charset issue.
Most printing from Windows goes through the GDI subsystem and output
a file in the appropriate page description language for the printer.
That will set any state it needs to print the document correctly
regardless of the default settings. In contrast it sounds like
the problem app is outputting raw text - no markup - and so it is
dependent on the printer using the corect character encoding by
default. You'll need to a) determine what character set it is
using and b) determine how to set your printer to use it.

The problem reading text in a Windows editor is likely a different
manifestation of the same issue. Windows by default is probably
using Windows-1252 and your app is using something else. Is it an
old DOS app? If so I'd hazard a guess it is code page 437. You
can try a different text editor (e.g. Notepad++) which allows
switching between caharacter encodings.

Alphabet soup.
Ultimately though there are a lot of different permutations here.
It sounds to me like you have a lot of reading up ahead of you.

The best thing to do is to make Unicode / UTF-8 the default charset of
all your apps and OSes.
Printers don't understand UTF-8, so you have to convert UTF-8 into
something they do understand. Most apps will do this for you.

Standardizing everything to UTF-8 is the only way to avoid alphabet soup.
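
For instance, a minimal Python sketch of that conversion (the target code
page is an assumption; PC-850 is just a common default for raw text):

# Read UTF-8 text and transcode it to a single-byte code page the
# printer understands; anything the code page lacks becomes '?'.
with open("document.txt", encoding="utf-8") as f:
    text = f.read()

raw = text.encode("cp850", errors="replace")    # assumed printer code page

with open("for_printer.bin", "wb") as f:
    f.write(raw)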


Regards,
Rob
 
I recently started using a laser printer. I quickly discovered that in
some cases, international characters (vowels with umlauts, vowels with
accents, etc.) required different binary codes when entering them in an
ordinary text file than when entering them in an RTF file.

If I take a text file containing these symbols entered by an ordinary text
editor and insert them into an RTF file, they end up as garbage.

I'm not sure whether that is just a requirement of the program that I'm
using (Atlantis-Nova) or some other reason.

If somebody would explain why there is a difference in some cases, I would
appreciate it very much.

Thanks.

Jack

It depends on the OS + font you have. Windows, for example, should be able
to print just about any character the FONT has to offer. With a TrueType
font (or similar) it will print English alphabets, Chinese, Arabic, as
well as dingbats (7-bit characters), which are pretty much graphics.

For some characters you may need an additional utility to be able to type
them, display them correctly, and print them (because Windows can only
print what displays on screen).

There are roughly two types of characters, and I am not talking about
different types of fonts; Unicode and some others that are or were for
Web use have now mostly been replaced by Unicode.

7-bit = the standard English alphabet, or single-keystroke alphabets.

8-bit = these require an additional program that can combine two or more
keystrokes to create a special character (as in Chinese, Japanese, Korean,
Arabic, Hebrew, and so on).

Haha, back in the DOS days before Windows got popular, when I was young
and smarter, I wrote a small utility to create 8-bit characters using a
special font to communicate between a few friends. And some BBS operators
mistook them for phone noise (we used to have a lot of line-noise issues
back then)
 
Rob van der Putten said:
Hi there




Alphabet soup.


The best thing to do is to make Unicode / UTF-8 the default charset of
all your apps and OSes.
Printers don't understand UTF-8, so you have to convert UTF-8 into
something they do understand. Most apps will do this for you.

Standardizing everything to UTF-8 is the only way to avoid alphabet soup.

Nah! You don't have to convert anything to anything; the computer (Windows
OS, for example) will print anything the FONT has to offer, provided the
computer can display it (this may need an additional program for a special
8-bit character font).

In other words, the printer isn't designed specifically for any language's
characters; it will print whatever the OS tells it to print. For example,
the printer can print any type of GRAPHIC, which isn't a language character.
 
Nah! You don't have to convert anything to anything; the computer (Windows
OS, for example) will print anything the FONT has to offer, provided the
computer can display it (this may need an additional program for a special
8-bit character font).

You still need a character mapping - it doesn't happen by magic.
In other words, the printer isn't designed specifically for any language's
characters; it will print whatever the OS tells it to print. For example,
the printer can print any type of GRAPHIC, which isn't a language character.

Which is of absolutely no help when printing *text*.
 
I recently started using a laser printer. I quickly discovered that in
some cases, international characters (vowels with umlauts, vowels with
accents, etc.) required different binary codes when entering them in an
ordinary text file than when entering them in an RTF file.

If I take a text file containing these symbols entered by an ordinary text
editor and insert them into an RTF file, they end up as garbage.

I'm not sure whether that is just a requirement of the program that I'm
using (Atlantis-Nova) or some other reason.

If somebody would explain why there is a difference in some cases, I would
appreciate it very much.

I need to clarify what my problem is.

Before I obtained the laser printer (Brother HL-2140), I created ordinary
text listings with a DOS-style editor program (Q-Edit). I had no problem
entering accented vowels, other Spanish characters such as the upside-down
question mark and upside-down exclamation mark, and other characters such
as umlauted vowels. These characters would show on the screen as they were
supposed to. They would also print on my Okidata dot-matrix printer just as
they showed on the screen.

When I got the laser printer, I discovered that I couldn't send the text
file directly to the printer from a DOS prompt window (copy file prn).
Therefore, I obtained a Windows editor (Atlantis-Nova) that has a function
to insert an ordinary text file into an existing .RTF file.

I immediately discovered that many characters were displayed on the
monitor differently from what the available ASCII charts showed. These
characters were the ones that I discussed above. This was before I
attempted to print the file. In other words, the program was converting
them to a different binary code than it should have.

My question was, why would the program do this?

I wanted a pointer to one or more specific tutorials (or maybe just a
book) that would explain this.

Thanks so far for the attempted assistance.

Jack
 

Back in the DOS age, much like current laser printers, the printer had a
few basic standard fonts built in; some older laser printers had the
option to add a font cartridge and would print whatever fonts the
cartridge had.

Some DOS word processors gave the option to use certain font types (it's
been way too long for me to remember the extension), and those continued
to be used until Microsoft released the current TrueType fonts. More
recently they released Unicode fonts.

Back in the DOS age, with the special font I created to use among a few
friends, we had to run a special TSR program to be able to see the true
characters.

The easiest way to answer your question is to ask you to Google for FREE
FONT; then you will have the chance to display the characters of each
font. They can be really fancy, they can be clipart photos, they can be
just about anything the font creator wants them to be. They can even be
your photo.

Here for example
 
Hi there


Nah! You don't have to convert anything to anything; the computer (Windows
OS, for example) will print anything the FONT has to offer,

I'm used to pseudofonts; whenever a glyph isn't available in a
particular font, the same glyph from a similar font is used.
provided the
computer can display it (this may need an additional program for a special
8-bit character font).

In other words, the printer isn't designed specifically for any language's
characters; it will print whatever the OS tells it to print. For example,
the printer can print any type of GRAPHIC, which isn't a language character.

The software has to convert glyphs into graphics for this to work.


Regards,
Rob
 
Hi there


Andrew said:
You still need a character mapping - it doesn't happen by magic.

Mapping what to what exactly?
Which is of absolutely no help when printing *text*.

Create a print queue with a filter which converts text into graphics.
I can print UTF-8 plain text even though my printer's default charset is
ISO-8859-1. Like this file:
http://www.cl.cam.ac.uk/~mgk25/ucs/examples/UTF-8-demo.txt
The filter uses a TTF instead of the printer's built-in fonts.
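
A minimal Python sketch of what such a filter does, using the Pillow
library (the font path is an assumption; adjust it for your system):

# Render UTF-8 text from stdin into a raster page using a TrueType font,
# so the printer only ever sees a graphic, never a character code.
from PIL import Image, ImageDraw, ImageFont
import sys

font = ImageFont.truetype(
    "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 24)
page = Image.new("L", (2480, 3508), color=255)   # roughly A4 at 300 dpi
draw = ImageDraw.Draw(page)

y = 50
for line in sys.stdin.buffer.read().decode("utf-8").splitlines():
    draw.text((50, y), line, font=font, fill=0)  # glyphs come from the TTF
    y += 30

page.save("page.png")   # hand this raster to the actual print job

Run it as, for example, python filter.py < UTF-8-demo.txt; a real queue
filter would emit PS or PCL rather than a PNG, but the principle is the
same.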


Regards,
Rob
 
Rob van der Putten said:
Hi there




I'm used to pseudofonts; whenever a glyph isn't available in a
particular font, the same glyph from a similar font is used.


The software has to convert glyphs into graphics for this to work.

No, no, and no. For clipart, if you don't have the skill or patience to
draw, then sure, you may have to scan the image and import it into a font
creator. But if you have some talent or patience, you should be able to
create your own or modify an existing one.

Yes, it may look like a graphic on paper, and it can be called a graphic,
but not like a regular graphic/image/photo, etc.

"glyphs" I know what glyphs is, but do not understand what you mean. Or
with the font is has nothing to do with glyphs, but a trick to type the
ASCII-Codec (Windows calls Character Map) to display a special character of
what the font has. I have been doing it for around 2 decades now.

I am also trying to say that the word GRAPHIC doesn't really mean much;
it is just a word so we all understand what the other is talking about.
In computer and printer language, the program just sends the command to
the printer, and the printer prints/sprays tons of dots to create
something. If you show it to an English-speaking person, s/he will jump
up and down yelling GRAPHIC! Show it to a non-English-speaking person and
s/he may jump up and down yelling ^%$#$%$^ or whatever.

Show some clipart to someone and they may ask: what FONT is it? What CAD
is it?
 
Hi there



You are _COMPLETELY_ missing the point!

Question: how do you print a glyph which isn't in any of the printer's
built-in fonts?
Answer: you tell the printer to _DRAW_ that particular glyph.
This means that you use a font on your computer instead of one in your
printer. TrueType and PostScript fonts contain instructions on how to
draw glyphs. The software on your computer simply translates those into
instructions for your printer, for instance a translation from TTF to PS
or PCL.
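
As an aside, you can inspect a font's glyph coverage yourself; a small
Python sketch using the fontTools library (the font path is an
assumption):

# Check whether a font file actually contains a glyph for a character -
# the question that matters when the printer's built-in fonts fall short.
from fontTools.ttLib import TTFont

font = TTFont("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf")
cmap = font.getBestCmap()   # maps Unicode code points to glyph names

for ch in "ü¿ب":
    name = cmap.get(ord(ch))
    print(ch, "->", name if name else "no glyph in this font")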

If you have the printer 'draw' an entire page, you are not dependent on
any of the printer's built-in fonts, nor on its default character set.
This way you can have the printer print any glyph you want. For instance
this page:
http://www.unicode.org/iuc/iuc10/x-utf8.html
And you will never have problems with character-set settings. It just works.
The best way to avoid alphabet soup is to standardize everything on Unicode.


Regards,
Rob
 
Rob van der Putten said:
Hi there




You are _COMPLETELY_ missing the point!

I am not missing any point, but you won't be able to get across what you
can't seem to follow yourself.
Question: how do you print a glyph which isn't in any of the printer's
built-in fonts?

Just FORGET about printer built-in fonts, as you won't see any; at most a
laser printer may have a few basic built-in fonts.
Answer: you tell the printer to _DRAW_ that particular glyph.

That's your answer, not mine.
This means that you use a font on your computer instead of one in your
printer. TrueType and PostScript fonts contain instructions on how to
draw glyphs. The software on your computer simply translates those into
instructions for your printer, for instance a translation from TTF to PS
or PCL.

You DO NOT need to know exactly how the computer communicates with the
printer and font. All you need to know is that the printer will get the
command from the computer to print whatever is already made in the font
file. Each font file has a couple HUNDRED blocks, and each block is a
character. Each character can be anything: an English letter, an Arabic
letter, a dingbat.

When you understand this, you won't need to think about glyphs or ask
about built-in fonts vs. TTF, PostScript fonts, etc., because when you go
into that detail it may not mean any specific thing.
If you have the printer 'draw' an entire page, you are not dependent on
any of the printer's built-in fonts, nor on its default character set.
This way you can have the printer print any glyph you want. For instance
this page:
http://www.unicode.org/iuc/iuc10/x-utf8.html
And you will never have problems with character-set settings. It just works.
The best way to avoid alphabet soup is to standardize everything on Unicode.

Unicode? Well, if you are talking about your READING then I may have to
agree with you, or I can certainly agree with you about what Unicode is
meant to be. BUT if you try to tell me about the real world then PLEASE;
I have been using Unicode for years and have had a lot of problems with
Unicode.

Yes, I do use Unicode and do get it to work with some applications, some
versions of some applications but not all, and depending on the Windows
version some work with most apps and some won't work with most apps.

So, if you wanna share the real story, then it would be nice if you
shared your very own experience instead of pointing to some link with
general information about something.

I started using a dot-matrix printer back in the late '60s or early '70s,
a laser printer in the '70s (till now), and an inkjet probably in the
late '80s or early '90s (as soon as inkjets were first available).

And in one of the messages I also mentioned that way back in the DOS age,
I even created a special font set and a TSR program to type and display a
specific font. I don't pay attention to newer laser printers, but I do
understand they have a few basic fonts built in; for my older printer I
even had the font cartridge for extra fonts.
 
Hi there


I am not missing any point, but you won't be able to get across what you
can't seem to follow yourself.
Just FORGET about printer built-in fonts, as you won't see any; at most a
laser printer may have a few basic built-in fonts.
That's your answer, not mine.
You DO NOT need to know exactly how the computer communicates with the
printer and font. All you need to know is that the printer will get the
command from the computer to print whatever is already made in the font
file. Each font file has a couple HUNDRED blocks, and each block is a
character. Each character can be anything: an English letter, an Arabic
letter, a dingbat.

When you understand this, you won't need to think about glyphs or ask
about built-in fonts vs. TTF, PostScript fonts, etc., because when you go
into that detail it may not mean any specific thing.

Sorry, but I can't make heads or tails of this.
It appears we have a complete failure to communicate.

If I understand correctly, the original question was about character-set
incompatibilities. More specifically, character-set incompatibilities and RTF.

If what you see on your screen is different from the printer output,
there is something wrong with your setup. I made some general
suggestions about changes in the setup. Apparently these were not understood.
Unicode? Well, if you are talking about your READING then I may have to
agree with you, or I can certainly agree with you about what Unicode is
meant to be. BUT if you try to tell me about the real world then PLEASE;
I have been using Unicode for years and have had a lot of problems with
Unicode.

Yes, I do use Unicode and do get it to work with some applications, some
versions of some applications but not all, and depending on the Windows
version some work with most apps and some won't work with most apps.

So, if you wanna share the real story, then it would be nice if you
shared your very own experience instead of pointing to some link with
general information about something.

I started using a dot-matrix printer back in the late '60s or early '70s,
a laser printer in the '70s (till now), and an inkjet probably in the
late '80s or early '90s (as soon as inkjets were first available).

And in one of the messages I also mentioned that way back in the DOS age,
I even created a special font set and a TSR program to type and display a
specific font. I don't pay attention to newer laser printers, but I do
understand they have a few basic fonts built in; for my older printer I
even had the font cartridge for extra fonts.

I have never had any problems with Unicode. Not even in a text-mode
environment.

I don't use Windows, but I can hardly imagine that you can't get Unicode
to work in a Windows environment.
DOS is another matter, of course.


Regards,
Rob
 
Rob van der Putten said:
Hi there







Sorry, but I can't make heads or tails of this.
It appears we have a complete failure to communicate.

That could be a good thing, 'cuz we shouldn't argue against what we don't
understand.
If I understand correctly, the original question was about character-set
incompatibilities. More specifically, character-set incompatibilities and RTF.

If you try to understand a little more, then you may realize that the OP
doesn't know what to ask.
If what you see on your screen is different from the printer output,
there is something wrong with your setup. I made some general
suggestions about changes in the setup. Apparently these were not understood.

It could be true, but because of the way the current OS (operating
system) is designed, it usually prints what it displays on screen. Some
characters (TTF for example) may look right on some systems and some
graphics cards, *but* some systems can't display them, and they will be
displayed as a BLOCK.

Other than that (not talking about FONT, because you are now talking
about what displays on screen), the printer should print what the
computer sees and tells it to print. One reason the printer may print
wrongly is COLOR, or something happening to the connection (like the
printer cable having a bad pin/connection); then it may print a (wrong)
character (if printing text) instead of a graphic.
I have never had any problems with Unicode. Not even in a text-mode
environment.

It's possible, because it depends on what type of Unicode you use and
what language the Unicode may be. If you follow this far, you may find
that some eBook readers can or can't support some languages (even in
Unicode).
I don't use Windows, but I can hardly imagine that you can't get Unicode
to work in a Windows environment.
Dos is an other matter of course.

Before Windows I first started with CP/M, then DOS, Commodore, OS/2,
etc., so I don't know about other computers. But if we forget about the
kind of computer we use and look at all the electronic devices like GPS
units, eBook readers, cellphones, cameras, etc., we may agree that they
are computers too. And as I mentioned above, many current eBook readers
can't support all languages, even though Unicode has been available and
can handle just about all living languages.

*But* that doesn't mean a Unicode font will work with all machines. Even
Microsoft hasn't been able to come up with Unicode fonts for all languages.
 