Friends,
I think the issue is twofold.
1. Which editors can handle UTF-8?
2. Which editors can display multibyte characters as single characters
to the user?

I use NoteTab Pro. I also use UniRed 2003 to translate all my
characters into NCRs (numeric character references, base 10) before I
edit a document in NoteTab Pro. Any good parser will be able to tell
you if you have characters that are not valid for the character set
of your XML declaration (UTF-8 is the default). When a raw multibyte
Unicode character is displayed in my text editor, it appears as a run
of ASCII bytes; when it is an NCR, it appears as an NCR. If I cut and
paste that NCR into my HTML template and open the template in my
browser, I see the character. Thus there are two worlds we are
dealing with: the font world and the Unicode world. (Welcome to the
Matrix...)
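
For illustration, here is a minimal Python sketch of that NCR
translation (my own sketch of the idea, not what UniRed actually does
internally):

# Replace every character outside printable ASCII with a decimal NCR.
def to_ncr(text):
    out = []
    for ch in text:
        if 0x20 <= ord(ch) <= 0x7E or ch in '\n\t':
            out.append(ch)                 # printable ASCII, tab, newline
        else:
            out.append('&#%d;' % ord(ch))  # e.g. u'\u00e9' -> '&#233;'
    return ''.join(out)

print(to_ncr(u'caf\u00e9'))  # prints: caf&#233;

The reverse direction (NCR back to characters) is an ordinary search
and replace on '&#([0-9]+);'.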

If you are cutting and pasting, then you had better do it in an
editor that can convert your character set appropriately. An encoding
of ISO-8859-1 does not ensure that cutting and pasting from other
programs will result in valid characters (parsers will report errors
for weird software junk characters, etc.).
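
A quick self-check, sketched in Python (the file name below is
hypothetical), is to see whether a file's bytes actually decode under
the encoding named in its XML declaration:

# Verify that a file's bytes are valid for its declared encoding.
import re

def check(path):
    data = open(path, 'rb').read()
    m = re.search(rb'encoding="([A-Za-z0-9._-]+)"', data[:200])
    enc = m.group(1).decode('ascii') if m else 'utf-8'  # XML default
    try:
        data.decode(enc)
        print('%s: all bytes valid for %s' % (path, enc))
    except UnicodeDecodeError as e:
        print('%s: bad byte at offset %d for %s' % (path, e.start, enc))

check('finding-aid.xml')  # hypothetical file name

(Note that ISO-8859-1 never fails this test, since every byte value
is legal there; the check is most useful for UTF-8.)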

As a coder I use text editors (not XMetaL), and I prefer to see and
use the NCR rather than composed or raw Unicode. I reduce my entire
document to ASCII (0020 to 007E, plus hard returns [000A] and tabs
[0009]) with UniRed. This single-byte range is also valid UTF-8. The
NCRs are then escaped from '&#' to '&#_', so the text editor sees
only ASCII. This ensures that I can control my character set. The
search and replace is reversed when the document is ready to go on
the server.
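
Here is a sketch of that escape-and-reverse round trip in Python (the
'&#_' convention is the one described above):

# Disarm NCRs so editing tools see only inert ASCII, then re-arm
# them before publishing. Assumes the document never contains a
# literal '&#_' of its own.
def escape_ncrs(text):
    return text.replace('&#', '&#_')   # '&#233;' -> '&#_233;'

def unescape_ncrs(text):
    return text.replace('&#_', '&#')   # reversed when ready to publish

s = 'caf&#233;'
assert unescape_ncrs(escape_ncrs(s)) == s  # round trip is lossless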

My points are as follows:
1. Even if you could get single-character display in an editor, the
fonts on 99% of operating systems do not cover all the Latin
characters (Latin Extended-A and Extended-B; try &#557; or &#x022D;,
for instance - LATIN SMALL LETTER O WITH TILDE AND MACRON, from Latin
Extended-B, added in the 3.0 edition of Unicode; see the short demo
after this list). The idea that you can get a single composed
character for even the full Latin set is a mistake.

2. The problem is not the editor, but the conversion software used to
ensure that clipboard data is translated into the correct character
set when a document is being coded. Finding a UTF-8 editor does not
necessarily solve the problem, but it does ensure that if your
original document has correct Unicode values behind it, the editor
will carry them into your document. Word processors are famous for
creating non-Unicode characters of their own.

3. There needs to be a way to test the conversion of clipboard data
against the original document, to demonstrate whether the word
processor actually supports its character set. Usually this can be
done in the IE browser with an HTML document that carries a UTF-8
character set META tag (see the template below).
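
And here is the short demo promised in point 1, a Python session
showing that the decimal and hex references above name the same code
point:

>>> hex(557)
'0x22d'
>>> import unicodedata
>>> unicodedata.name(chr(557))
'LATIN SMALL LETTER O WITH TILDE AND MACRON'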

Solutions:
1. If the document MUST have an encoding of ISO-8859-1, then the
stylesheet can convert the output document to UTF-8. This can even be
done from XML to XML. Most users think of stylesheets as merely
creating display markup (HTML or XSL-FO), but XSLT processors can do
much more (given that you are not doing Asian languages). A sketch of
this re-encoding appears after this list.

2. Develop a tool set that ensures the clipboard data is converted to
UTF-8. I suggest using UniRed (template below). UniRed is recommended
on the Unicode site as a freeware utility that can handle any Latin
character set.

3. Update your operating system. If you are a Windows user, you will
need to update your office package. Other font options are available
(there is a font available for $5). Although the fonts will not
ensure that you see a single composed character, updating the fonts
and the operating system will ensure that your characters carry over
from the original document to another program (an editor).
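
The re-encoding sketch promised in solution 1: in XSLT it is just an
identity transform whose xsl:output element says encoding="UTF-8".
The same XML-to-XML conversion can be sketched in a few lines of
Python (file names hypothetical):

# Re-encode an XML document from ISO-8859-1 to UTF-8.
import xml.etree.ElementTree as ET

tree = ET.parse('finding-aid-latin1.xml')  # honors the declared encoding
tree.write('finding-aid-utf8.xml',
           encoding='utf-8',               # new byte encoding
           xml_declaration=True)           # writes <?xml ... encoding='utf-8'?>

# Note: ElementTree drops comments and any DOCTYPE; a true XSLT
# identity transform preserves more of the document.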

Conclusions:
What you really need to do is design a workflow that produces UTF-8
XML EAD documents. Being XML, that is really the world you want your
documents to live in. Anything less will ultimately have to be
converted at some future date.

Just don't confuse the ability to see a multibyte character in its
composed form with the ability to use any given text editor. The
issues are really very different.

Sincerely,
Mike Ferrando
Library of Congress
Washington, DC
301-772-4454

UniRed:
http://www.esperanto.mv.ru/UniRed/ENG/index.html

Template:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<HTML>
<HEAD>
<TITLE>Example6.htm</TITLE>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html;charset=utf-8">
<META NAME="generator" CONTENT="NoteTab Pro 4.85">
</HEAD>
<BODY>

<P>test data here</P>

</BODY>
</HTML>



--- Elizabeth Shaw <[log in to unmask]> wrote:

> Hi,
>
> In fact, XML's default character set is UTF-8. However, if you don't
> have the character set available you can set the encoding to ASCII,
> or ISO-8859-1, which is what NoteTab Pro is doing.
>
> I believe that NoteTab Pro (at least older versions) does not handle
> Unicode - so it cannot output in Unicode but rather uses Latin-1.
> Perhaps another editor that is fully Unicode compliant is in order.
> There are tons of them now that *are* Unicode compliant.
>
> If you are entering the data into your ACCESS database in UTF-8 but
> you are using no characters other than those found in ASCII and
> Latin-1 (ISO-8859-1), then the data that you are producing requires
> no transformation (the ASCII and Latin-1 character sets map to the
> first 256 code points of Unicode), and the techies can output the
> data in an XML file that uses UTF-8. Because you are using no other
> characters, you should be able to edit in an editor that can handle
> Latin-1. (Although if the file is formatted absolutely correctly it
> might have a header (a byte-order mark) at the beginning of the file
> that indicates that it is a Unicode file, and your editor might
> choke on it.)
>
> However, a problem will occur if you are using characters other
> than those available in ASCII and Latin-1 and you want to use an
> editor that is not fully Unicode compliant. Your techies can output
> UTF-8, but your editor will choke.
>
> In sum, it is your tool, not XML, that is the problem.
>
> If you don't know what the ISO-8859-1 character set is, there is a
> listing at
>
> http://www.htmlhelp.com/reference/charset/
>
> Liz Shaw
>
>
>
> Susan Hamburger wrote:
> > Our tech people are mapping an Access database to output both
> > EAD and MARC as XML documents. Currently, the SGML conversion to
> > XML in NoteTab Pro that I use generates this string
> >
> > <?xml version="1.0" encoding="ISO-8859-1"?>
> >
> > The techies want to know if the encoding can be changed from
> > ISO-8859-1 to UTF-8 to support Unicode. My notes from the
> > Publishing EAD Finding Aids course indicate that XMetaL (which I
> > use to create my SGML documents) stores the document as UTF-8 and
> > it needs to be changed to ISO-8859-1. Is this only for the ASCII
> > editor, or does XML not support Unicode? My final output HTML
> > document has it converted back to UTF-8. Must ISO-8859-1 be in
> > the XML document so it can be converted to HTML and PDF? Or is
> > there some other reason why the encoding is in ISO and not
> > Unicode?
> >
> > Thanks for any help and advice.
> >
> > Sue
> >
> >
> > Susan Hamburger, Ph.D.
> > Manuscripts Cataloging Librarian
> > Cataloging Services
> > 126 Paterno Library
> > The Pennsylvania State University
> > University Park, PA 16802
> >
> > 814/865-1755
> > FAX 814/863-7293
>




