I believe the question was not about data coming from a clipboard, but
about what would happen if UTF-8 were the output of the ACCESS
database.

Those are two different issues.

If they are indeed outputting UTF-8, then the issues will be twofold:
whether the editor can interpret the multi-byte Unicode characters, and
whether you have the glyphs/fonts to display them.
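To make concrete what "multi-byte" means here (a minimal Python sketch,
not tied to any particular editor): characters outside ASCII occupy more
than one byte in UTF-8, and the editor has to decode those bytes
correctly before the font question even arises.

```python
# "café" has four characters, but its UTF-8 encoding is five bytes,
# because the non-ASCII character é takes two bytes (0xC3 0xA9).
text = "café"
encoded = text.encode("utf-8")

print(len(text))     # 4 characters
print(len(encoded))  # 5 bytes
print(encoded)       # b'caf\xc3\xa9'

# An editor that treats the file as single-byte Latin-1 instead of
# decoding UTF-8 would show those two bytes as two separate characters:
print(encoded.decode("latin-1"))  # cafÃ© - the classic garbled display
```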

[[ An aside: And yes, if you are cutting and pasting from other
applications the issues become complex because, for example, older
versions of many Microsoft products have their own idiosyncratic
character sets, and most editors don't convert them to their Unicode
equivalents. But that is another long and complicated issue. If, on the
other hand, you are absolutely sure that you are using ISO-8859-1, then
you can cut and paste into a UTF-8 document without trouble. (ISO-8859-1
doesn't include the Latin Extended-A and Extended-B characters, BTW.) ]]
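The reason this works is that cut and paste moves characters, not
bytes; the byte representations only diverge once each application
saves its own file. A quick Python sketch of that difference:

```python
# The same character has different byte representations in the two
# encodings; ASCII characters, by contrast, are identical in both.
ch = "é"                     # present in ISO-8859-1
print(ch.encode("latin-1"))  # b'\xe9'      - one byte
print(ch.encode("utf-8"))    # b'\xc3\xa9'  - two bytes

# So pasting é between documents is safe, but a Latin-1 *file* is not
# byte-for-byte valid UTF-8 above the ASCII range:
try:
    b"\xe9".decode("utf-8")
except UnicodeDecodeError as err:
    print("a raw Latin-1 byte is not valid UTF-8:", err)
```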

Anyway, I rather think that using NCRs and all the rest, while helpful
to a programmer, is not what the average desktop user of EAD is going to
want to do. 99% of archivists have neither the time nor the inclination
to learn such arcane things. And good software should support their
work, not require them to learn things only a programmer wants to know.

I use my Unicode editor now and make sure I have the appropriate fonts,
and presto, the document looks just as it should and I can see all the
characters correctly displayed. And I don't have to worry about how they
are represented "under the hood". That is the way *good*,
standards-compliant software enables archivists to do their job:
describing archives rather than battling a software package into submission.



Any reasonably good standards-compliant XML editor can now handle UTF-8.
Choose an editor capable of correctly interpreting the character
encoding of the data coming out of the database: Oxygen, XMLSpy, Stylus
Studio, epcEdit, and on and on. Take your pick.

Then, if you are using characters beyond the basic Latin-1 character
set (let's say you have Chinese in your document), make sure that you
have the font sets that will enable you to display those characters.

It is as simple as that.

For most US archivists, and many Western European archivists, their
computers already have the fonts for the characters that they commonly use.

Liz Shaw

Mike Ferrando wrote:
> Friends,
> I think the issue is twofold.
> 1. What editor can handle UTF-8.
> 2. What editor can display multibyte characters as single characters
> to the user.
> I use NoteTab Pro. I also use UniRed 2003 to translate all my
> characters into NCRs (base 10) before I edit it in NoteTab Pro. Any
> good parser will be able to tell you if you have characters that are
> not valid for the character set of your xml declaration (UTF-8 is the
> default). When a raw-unicode multibyte character is displayed in my
> text editor it appears as ASCII. When it is an NCR, then it appears
> as an NCR. If I cut and paste that NCR into my HTML template and open
> the template in my browser I see the character. Thus, there are two
> worlds we are dealing with, the font world and the unicode world.
> (Welcome to the Matrix...)
> If you are cutting and pasting, then you had better do it in some editor
> that can convert your character set appropriately. An encoding of
> ISO-8859-1 does not ensure that cutting and pasting from other
> programs will result in valid characters (errors will be given for
> weird software junk characters, etc.).
> As a coder I use text editors (not XMetaL); I prefer to see and use
> the NCRs rather than composed or raw Unicode. I reduce my entire
> document to ASCII (0020 to 007E, with the exception of hard returns
> [000A] and tabs [0009]) with UniRed. This single-byte character set
> (ranges given) is also valid UTF-8. The NCRs are escaped out from '&#' to
> '&#_'. The text editor sees only ASCII. This ensures that I can
> control my character set. The search and replace is reversed when the
> document is ready to go on the server, etc.
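For what it's worth, the reduce-to-ASCII-plus-NCRs step Mike describes
can be approximated in a few lines of Python. This is a sketch of the
idea, not his actual UniRed setup:

```python
import html

# Replace every character outside ASCII with its decimal numeric
# character reference (NCR), leaving a pure single-byte document.
text = "Latin small o with tilde and macron: \u022d"
ascii_only = text.encode("ascii", errors="xmlcharrefreplace").decode("ascii")
print(ascii_only)  # ...macron: &#557;   (557 decimal == U+022D)

# The reverse direction, for when the document goes back on the server:
print(html.unescape(ascii_only) == text)  # True
```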
> My points are as follows:
> 1. Even if you could get single-character display in an editor, the
> fonts on 99% of operating systems do not cover all the Latin
> characters (Extended-A and B; try [& # 557;] [& # x022D;], for
> instance - Latin small letter o with tilde and macron, Latin
> Extended-B, 3.0 edition of the Unicode set). The idea that you can
> get a single composed character for even the full Latin set is a mistake.
> 2. The problem is not the editor, but instead the conversion software
> used to ensure that clipboard data is translated into the correct
> character set when a document is being coded. Finding a UTF-8 editor
> does not necessarily solve the problem. But it does ensure that if
> your original document has correct Unicode values behind it, the
> editor will carry them into your document. Word processors are famous
> for creating non-Unicode characters of their own.
> 3. There needs to be a way to test the conversion of clipboard data
> against the original document. Tests need to be done to demonstrate
> whether the word processor has support for its character set.
> Usually this can be done with an HTML document in the IE browser with
> a character set META tag for UTF-8.
> Solutions:
> 1. If the document MUST have an encoding of ISO-8859-1, then the
> stylesheet can convert the output document to UTF-8. This can even be
> done from XML to XML. Most users think of stylesheets as merely
> creating a display markup (HTML or XSL-FO), but the parsers can do
> much more (given you are not doing Asian languages).
> 2. Develop a tool set that ensures the clipboard data is converted to
> UTF-8. I suggest using UniRed (template below). UniRed is advocated
> on the Unicode site as a freeware utility that can handle any Latin
> character sets.
> 3. Update your operating system. If you are a Windows user, you will
> need to update your office package. Other font options are available
> (there is a freeware font for $5). Although the fonts will not ensure
> that you will see a single composed character, updating the fonts and
> the operating system will ensure that you will carry your characters
> over from their original document to another program (editor).
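Solution 1 above - letting the output stage re-encode - needs no
special tooling; any XML library that honors the encoding declaration
can do it. A hypothetical Python sketch (the element names are just for
illustration, not a real finding aid):

```python
import xml.etree.ElementTree as ET

# A document declared, and actually encoded, as ISO-8859-1; byte 0xE9 is é.
src = (b'<?xml version="1.0" encoding="ISO-8859-1"?>'
       b'<ead><unittitle>caf\xe9</unittitle></ead>')

# The parser honors the declaration, so the text arrives as Unicode...
root = ET.fromstring(src)
print(root.find("unittitle").text)  # café

# ...and can be serialized back out as UTF-8 (Python 3.8+ for the
# xml_declaration keyword).
out = ET.tostring(root, encoding="utf-8", xml_declaration=True)
print(b"caf\xc3\xa9" in out)  # True - é is now the two-byte UTF-8 form
```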
> Conclusions:
> What you really need to do is to design a workflow that produces
> UTF-8 XML EAD documents. Being XML, this is really the world you want
> your documents to live in. Anything less will ultimately have to be
> converted at some future date.
> Just don't confuse the ability to see a multi-byte character in its
> composed form with the ability to use any text editor. The issues are
> really very different.
> Sincerely,
> Mike Ferrando
> Library of Congress
> Washington, DC
> 301-772-4454
> UniRed:
> Template:
> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
> "">
> <HTML>
> <HEAD>
> <TITLE>Example6.htm</TITLE>
> <META HTTP-EQUIV="Content-Type" CONTENT="text/html;charset=utf-8">
> <META NAME="generator" CONTENT="NoteTab Pro 4.85">
> </HEAD>
> <BODY>
> <P>test data here</P>
> </BODY>
> </HTML>
> --- Elizabeth Shaw <[log in to unmask]> wrote:
>>In fact, XML's default character set is UTF-8. However, if you don't
>>have the character set available you can set the encoding to ASCII,
>>or ISO-8859-1, which is what NoteTab Pro is doing.
>>I believe that NoteTab Pro (at least older versions) does not support
>>Unicode - so it cannot output in Unicode but rather uses Latin-1.
>>Perhaps another editor that is fully Unicode compliant is in order.
>>There are tons of them now that *are* Unicode compliant.
>>If you are entering the data into your ACCESS database in UTF-8 but
>>are using no characters other than those found in ASCII and Latin-1
>>(ISO-8859-1), then the data that you are producing requires no
>>transformation (the ASCII and Latin-1 character sets are a subset of
>>the first 256 characters in Unicode) and the techies can output the
>>data in an XML file that uses UTF-8. Because you are using no other
>>characters you should be able to edit in an editor that can handle
>>Latin-1. (Although if the file is formatted absolutely correctly it
>>might have a header at the beginning of the file that indicates that
>>it is a UTF-8 file, and your editor might choke on it.)
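Two details in the paragraph above can be checked mechanically (a small
Python sketch): ASCII is byte-identical in UTF-8, Latin-1 agrees with
Unicode at the code-point level, and the "header" some tools prepend is
the UTF-8 byte-order mark (BOM).

```python
import codecs

# ASCII bytes are already valid UTF-8, unchanged:
print("plain ascii".encode("utf-8") == b"plain ascii")  # True

# Latin-1 matches the first 256 code points of Unicode one-to-one:
print(all(chr(i) == bytes([i]).decode("latin-1") for i in range(256)))  # True

# The optional header at the start of a UTF-8 file is the byte-order mark:
print(codecs.BOM_UTF8)  # b'\xef\xbb\xbf'
```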
>>However, a problem will occur if you are using characters other than
>>those available in ASCII and Latin-1 and you want to use an editor
>>that is not fully Unicode compliant. Your techies can output UTF-8
>>but your editor will choke.
>>In sum, it is your tool, not XML, that is the problem.
>>If you don't know what the ISO-8859-1 character set is there is a
>>listing at
>>Liz Shaw
>>Susan Hamburger wrote:
>>>Our tech people are mapping an Access database to output both EAD
>>>and MARC as XML documents. Currently, the SGML conversion to XML in
>>>NoteTabPro that I use generates this string
>>><?xml version="1.0" encoding="ISO-8859-1"?>
>>>The techies want to know if the encoding can be changed from
>>>ISO-8859-1 to UTF-8 to support Unicode. My notes from the Publishing
>>>EAD Finding Aids course indicate that XMetaL (which I use to create
>>>my SGML documents) stores the document as UTF-8 and it needs to be
>>>changed to ISO-8859-1. Is this only for the ASCII editor or does XML
>>>not support Unicode? My final output HTML document has it converted
>>>back to UTF-8. Must ISO-8859-1 be in the XML document so it can be
>>>converted to HTML and PDF? Or is there some other reason why the
>>>encoding is in ISO and not Unicode?
>>>Thanks for any help and advice.
>>>Susan Hamburger, Ph.D.
>>>Manuscripts Cataloging Librarian
>>>Cataloging Services
>>>126 Paterno Library
>>>The Pennsylvania State University
>>>University Park, PA 16802
>>>FAX 814/863-7293