uni2ascii


uni2ascii and ascii2uni provide conversion in both directions between UTF-8 Unicode and more than thirty 7-bit ASCII equivalents, including the RFC 2396 URI format, the RFC 2045 Quoted-Printable format, the representations used in HTML, SGML, XML, OOXML, the Unicode standard, Rich Text Format, POSIX portable charmaps, POSIX locale specifications, and Apache log files. They can also convert between the escapes used for Unicode in languages such as Ada, C, Common Lisp, Java, Pascal, Perl, PostScript, Python, Scheme, and Tcl.
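As a rough illustration of the kind of round trip these tools perform, here is a Python sketch of the RFC 2396 URI conversion. The tools themselves are C programs; this equivalent only illustrates the mapping, not their actual code:

```python
from urllib.parse import quote, unquote

# Round-trip a UTF-8 string through RFC 2396 percent-escapes, the same
# mapping uni2ascii/ascii2uni perform for their URI format.
text = "café"                 # contains U+00E9
escaped = quote(text)         # "caf%C3%A9": percent-encoded UTF-8 bytes
restored = unquote(escaped)   # back to "café"

assert escaped == "caf%C3%A9"
assert restored == text
print(escaped)
```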


Recent releases

  •  15 May 2011 08:04

    Release Notes: A bug was fixed in uni2ascii in which the substitution count was too high in certain cases. The code was patched to handle the lack of getline in NetBSD. The semantics of the pure option when converting characters in the ASCII range other than space and newline were clarified, and a bug in which this was not implemented correctly for the UTF-8 types was fixed.

  •  16 Feb 2011 19:14

    Release Notes: This release adds U+0085, U+00B7, U+2022, and U+2028 to the characters converted to the nearest ASCII equivalent when that option is invoked.

  •  13 Dec 2010 04:16

    Release Notes: The Q format (HTML character entities) works again in ascii2uni.

  •  30 Aug 2010 04:16

    Release Notes: endian.h was renamed to avoid a conflict with the external file of the same name.

  •  05 Aug 2009 04:18

    Release Notes: This release fixes several small bugs, including one that interfered with the use of the Q format (generate character entities if possible) in uni2ascii.

Recent comments

13 Jan 2006 09:42 billposer

Re: Recode

Recode and uni2ascii are complementary. Briefly put, Recode converts from one encoding to another (where the expectation is that the target character set will be the same as, or a superset of, the source character set), whereas uni2ascii converts between UTF-8 Unicode and ASCII representations of Unicode. In practical terms, uni2ascii will not convert between, say, ASCII and EBCDIC, which Recode will, whereas Recode will not convert between Unicode and the \x{00E9} format, which uni2ascii will. (I should say that Recode lists but does not explain the encodings that it knows, so it is not always easy to figure out what it handles. It is possible that it can handle things that I am not aware of. But at least as far as I can tell, it does not handle the textual representations of Unicode characters that uni2ascii handles.)

Thus, if you've got a text in, say, TIS-620 (the Thai national standard) and you want to get it into Unicode, you would use Recode. If you want to include that Thai text in a blog posting using Movable Type, which is not 8-bit safe, you would use uni2ascii to convert your Unicode version of the Thai text to HTML numeric character references. Similarly, if you wanted to include that Thai text as a string in a program in Java, Python, Scheme, or Tcl, you would use uni2ascii to convert the Unicode to the \uxxxx format.
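The numeric-character-reference conversion just described can be sketched in a few lines of Python. This is a toy equivalent of the mapping, not uni2ascii's actual code, and the helper name is made up:

```python
# Toy converter from Unicode text to HTML hexadecimal numeric character
# references, leaving plain ASCII alone -- a sketch of the mapping
# uni2ascii performs, not its actual implementation.
def to_ncr(text):
    return "".join(c if ord(c) < 128 else "&#x%04X;" % ord(c) for c in text)

thai = "\u0E44\u0E17\u0E22"   # the Thai word for "Thai"
print(to_ncr(thai))           # &#x0E44;&#x0E17;&#x0E22;
```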

My conception of the difference is this. When you have the same character set but different associations between the characters and the integers, conversion between the two is pure encoding conversion. ASCII and EBCDIC are different encodings of the same character set; converting between them is a matter of encoding conversion. On the other hand, when you have radically different character sets, conversion from one to the other is a matter of transliteration. Transliteration may be perfect, or nearly so, if both writing systems have been adapted for the same language (e.g. the Roman and Cyrillic writing systems for Serbo-Croatian), or quite imperfect (e.g. when Vietnamese is written using only the English alphabet).
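The encoding-conversion case can be seen concretely with Python's built-in EBCDIC codec (cp037 is Python's name for one EBCDIC code page): the same five characters, two different byte sequences. This is the sort of job Recode handles:

```python
# Pure encoding conversion: the character set is the same, only the
# character-to-byte mapping differs.
text = "HELLO"
ascii_bytes = text.encode("ascii")    # b'HELLO'
ebcdic_bytes = text.encode("cp037")   # b'\xc8\xc5\xd3\xd3\xd6'

assert ebcdic_bytes.decode("cp037") == text
print(ascii_bytes, ebcdic_bytes)
```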

A third situation is when you use escape sequences to represent the characters of one character set in another. That's what we're doing when we use the sequence of ASCII characters \x{00E9} to represent the Unicode character U+00E9 "Latin small letter e with acute".
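A toy decoder for that Perl-style \x{...} notation makes the point: eight ASCII characters stand in for one Unicode character. This is only an illustration of the idea, not ascii2uni's own parser, which is written in C:

```python
import re

# Decode Perl-style \x{XXXX} escapes back into Unicode characters:
# the ASCII string 'caf\x{00E9}' becomes the four-character 'café'.
def decode_escapes(s):
    return re.sub(r"\\x\{([0-9A-Fa-f]+)\}",
                  lambda m: chr(int(m.group(1), 16)), s)

assert decode_escapes(r"caf\x{00E9}") == "café"
```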

Recode is basically intended to handle encoding conversion. uni2ascii, on the other hand, is aimed at the third case, the representation of Unicode characters by ASCII escape sequences. Other programs (e.g. my own Xlit) deal with transliteration.

Of course, the division I've made here, while I think it is the one that people usually make, is not quite so simple, since what are generally thought of as different encodings of the same character set may in fact use somewhat different character sets. For example, decomposed Unicode uses sequences of two or more Unicode characters to represent what in other encodings are single characters: e with acute accent is a single character in ISO-8859-1 (0xE9) but a two-character sequence (0x0065 0x0301) in decomposed Unicode, where it is treated as a plain e followed by a combining acute accent. Encoding conversion programs like Recode are therefore, in the strict sense, doing more than pure encoding conversion.
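Python's standard unicodedata module shows the composed/decomposed difference directly, matching the 0x0065 0x0301 sequence described above:

```python
import unicodedata

# One precomposed code point (NFC) versus the two-code-point sequence
# e + combining acute accent (NFD).
nfc = "\u00E9"                               # é as a single code point
nfd = unicodedata.normalize("NFD", nfc)      # "e" + U+0301

assert nfd == "\u0065\u0301"
assert unicodedata.normalize("NFC", nfd) == nfc
print([hex(ord(c)) for c in nfd])            # ['0x65', '0x301']
```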

At one level, all of these conversions are the same, since they can all be treated as mappings of one set of byte strings to another. However, there is a conceptual difference among them that, with some fuzzy edges, seems to correspond to the functionality of the software designed to handle them.

Returning to practicalities, uni2ascii and Recode also provide different approaches to and degrees of control over disparities between character sets, e.g. what to do with characters with diacritics when converting to ASCII.

12 Jan 2006 02:24 ed_avis

How does this compare to GNU recode?

