Hi Elizabeth,

--- Elizabeth Shaw <[log in to unmask]> wrote:
> I believe that the question was not data coming from a clipboard
> but it was a question of what would happen if UTF-8 was the output
> of the ACCESS database.

Answer:
1. Access 2003 can output UTF-8 XML either as a text document (yes,
that is right!) or as an XML document.
  a) Right click on your table
    (1) select Export
    (2) choose your document type for exporting (html, xml, etc.)
    (3) choose schema, or keep default schema.
  b) Create a Report
    (1) Click on "New"
    (2) Choose table from drop-down menu
    (3) Create a text box in the "Detail" section of the report.
    (4) Insert the following expression into the text box:
       ="<row><entry>"&[column_name_1]&"</entry></row>"
    (5) Save Report
    (6) Preview
    (7) File>Save As
    (8) Choose the character set
    (9) Choose the document type (text or html or xml)

2. Access 2000 outputs the Windows-1252 character set.
    [Follow the steps above; there are no UTF-8 or schema choices.]
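If you have a scripting language handy, the re-encoding itself is a
one-step job. A minimal sketch in Python (the file names here are just
placeholders for your own export):

  # Re-encode an Access 2000 text export from Windows-1252 to UTF-8.
  # "export.txt" and "export-utf8.txt" are placeholder file names.
  with open("export.txt", encoding="cp1252") as src:
      text = src.read()
  with open("export-utf8.txt", "w", encoding="utf-8") as dst:
      dst.write(text)

Once the data is genuine UTF-8, the Windows-only characters are legal;
the trouble described next only arises if the file still claims to be
ISO-8859-1.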

The Windows character set 1252 (Windows-1252) is NOT the same as
ISO-8859-1. This fuzzy character set includes characters from the
Latin Extended-A and Latin Extended-B blocks. The closest equivalent
is the HTML named entity set (&eacute; and friends). The result will
be characters that are invalid under an XML declaration of
encoding="ISO-8859-1".
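You can see the mismatch directly: the bytes 0x80-0x9F are printable
characters in Windows-1252 but control codes in ISO-8859-1, and those
control codes are forbidden in XML 1.0. A quick illustration in
Python:

  # The same bytes decode very differently under the two encodings.
  for b in (0x85, 0x91, 0x93, 0x97, 0x99):
      print(hex(b),
            repr(bytes([b]).decode("cp1252")),   # ellipsis, curly quote, etc.
            repr(bytes([b]).decode("latin-1")))  # C1 control characters
  # An XML 1.0 parser rejects the C1 controls (U+0080 to U+009F),
  # which is exactly the "invalid character" error described above.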

Your best bet is to get the 2003 Access update for your PC.

If you can't do that, you will have to export the Access 2000 .mdb as
HTML, and then convert the named entities to decimal (base 10) or
hexadecimal (base 16) NCRs using entity documents (similar to the ones
supplied with the M. Fox conversion kit -- though those files convert
the HTML entities to base 10).
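A hedged sketch of that entity-to-NCR conversion in Python, using the
standard html.entities table in place of separate entity documents
(the function name is mine):

  import re
  from html.entities import name2codepoint

  def entities_to_ncr(text):
      # Rewrite named references (&eacute;) as decimal NCRs (&#233;);
      # leave anything unrecognized alone.
      return re.sub(
          r"&([A-Za-z][A-Za-z0-9]*);",
          lambda m: "&#%d;" % name2codepoint[m.group(1)]
                    if m.group(1) in name2codepoint else m.group(0),
          text,
      )

  print(entities_to_ncr("caf&eacute;"))  # -> caf&#233;

Use "&#x%X;" in place of "&#%d;" if you prefer base 16 references.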

[I am attaching a file containing a sample database and Chris Bayes'
JavaScript file (edited and adapted for outputting an EAD 2002 table).
Just put all the files in the same folder, edit the batch file, and
double-click. If there are weird characters or corruptions in your
Access table, the script will abort, leaving an unfinished table.]

So if you have Access 2000 you will need an extra step to achieve a
valid UTF-8 document.

> Those are two different issues.
>
> If indeed they are outputting UTF-8 then the issues will be both a
> matter of whether the editor can interpret the unicode multi-byte
> character and whether you have the glyph/font to display it.

I am encouraged to read your response. We have the same evaluation of
the problem (my articulation was in outline form, yours in conditional
sentences).

However, I remind you that if this is true, there is no valid
criticism of coding in text editors. The issue is the font, not the
editor. If Access (or any software) exports or saves the data in
UTF-8, it will not be affected in a text editor. ASCII is a subset of
UTF-8, so UTF-8 that uses only those characters will appear as plain
ASCII in a text editor. Take, for example, the output of Access 2003
as .txt: open it in Notepad, choose the Arial Unicode MS font, and you
get all the pretty characters you want.
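The subset claim is easy to verify for yourself; two lines of Python:

  s = "plain ASCII text"
  # A pure-ASCII string encodes to the identical bytes either way.
  assert s.encode("ascii") == s.encode("utf-8")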

As you have said, the issues are really two, and neither results in
any valid reason to abandon or condemn text editors.

Sincerely,
Mike Ferrando
Library of Congress
Washington, DC

p.s. When I want a multi-byte character, I look at my code page and
type in the NCR. This gives me an uninterrupted ability to use the
keyboard for creating or editing any document. I know what I am
getting, and I know it is right. How do you create your multi-byte
characters? I suspect you stop typing, use your mouse, open a window
of characters, choose one, select, copy, paste, and then resume
typing. As a coder and as an archivist, the first scenario seems much
less work intensive. -MF
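p.p.s. Scripting that lookup is just as easy; a quick Python sketch,
one line for each base:

  ch = "\u00e9"                 # e with acute accent, U+00E9
  print("&#%d;" % ord(ch))      # decimal NCR: &#233;
  print("&#x%X;" % ord(ch))     # hexadecimal NCR: &#xE9;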


>
> [[ An aside: And yes, if you are cutting and pasting from other
> applications the issues become complex because, for example, older
> versions of many Microsoft products have their own idiosyncratic
> character sets. And most editors don't convert them to their unicode
> equivalents. But that is another long and complicated issue. If on
> the other hand you are absolutely sure that you are using ISO-8859-1
> then you can cut and paste into a UTF-8 document without trouble
> (ISO-8859-1 doesn't include the extended Latin characters A and B,
> BTW) ]]
>
> Anyway, I sort of think that using NCR and all the rest, while
> helpful to a programmer, is not what the average desktop user of EAD
> is going to want to do. 99% of archivists have neither the time nor
> inclination to learn such arcane things. And good software should
> support their work, not require them to learn things only a
> programmer wants to know.
>
> I use my unicode editor now and make sure I have the appropriate
> fonts and, presto, the document looks just as it should and I can
> see all the characters correctly displayed. And I don't have to
> worry about how they are represented "under the hood". That is the
> way *good* standards-compliant software makes archivists able to do
> their job: describing archives rather than battling a software
> package into submission.
>
> So
>
> THE SOLUTION:
>
> Any reasonably good standards-compliant XML editor can now handle
> UTF-8. Choose an editor capable of correctly interpreting the
> character encoding of the data coming out of the database. Oxygen,
> XMLSpy, Stylus Studio, epcEdit, and on and on. Take your pick.
>
> Then, if you are using characters that are beyond the basic Latin-1
> character set (let's say you have Chinese in your document), make
> sure that you have the font sets that will enable you to display
> those characters.
>
> It is as simple as that.
>
> For most US archivists, and many Western European archivists, their
> computers already have the fonts for the characters that they
> commonly use.
>
>
> Liz Shaw
>
>
> Mike Ferrando wrote:
> > Friends,
> > I think the issue is twofold.
> > 1. What editor can handle UTF-8.
> > 2. What editor can display multibyte characters as single
> > characters to the user.
> >
> > I use NoteTab Pro. I also use UniRed 2003 to translate all my
> > characters into NCR (base 10) before I edit in NoteTab Pro. Any
> > good parser will be able to tell you if you have characters that
> > are not valid for the character set of your xml declaration (UTF-8
> > is the default). When a raw-unicode multibyte character is
> > displayed in my text editor it appears as ASCII. When it is an
> > NCR, then it appears as an NCR. If I cut and paste that NCR into
> > my HTML template and open the template in my browser I see the
> > character. Thus, there are two worlds we are dealing with, the
> > font world and the unicode world. (Welcome to the Matrix...)
> >
> > If you are cutting and pasting, then you had better do it in some
> > editor that can convert your character set appropriately. An
> > encoding of ISO-8859-1 does not ensure that cutting and pasting
> > from other programs will result in valid characters (errors will
> > be given for weird software junk characters, etc.).
> >
> > As a coder I use text editors (not XMetaL); I prefer to see and
> > use the NCR rather than composed or raw unicode. I reduce my
> > entire document to ASCII (0020 to 007E, with the exception of hard
> > returns [000A] and tabs [0009]) with UniRed. This single-byte
> > character set (ranges given) is also valid UTF-8. The NCRs are
> > escaped out from '&#' to '&#_'. The text editor sees only ASCII.
> > This ensures that I can control my character set. The search and
> > replace is reversed when the document is ready to go on the
> > server, etc.
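[The escaping round-trip described above is trivial to script; a
minimal sketch in Python, assuming the '&#_' convention is safe
because the raw text never contains '&#_' itself:

  def escape_ncrs(text):
      # Hide NCRs from entity-aware tools: '&#233;' -> '&#_233;'.
      return text.replace("&#", "&#_")

  def unescape_ncrs(text):
      # Reverse the substitution when the document goes to the server.
      return text.replace("&#_", "&#")

  marked = escape_ncrs("caf&#233;")
  assert marked == "caf&#_233;"
  assert unescape_ncrs(marked) == "caf&#233;"
]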
> >
> > My points are as follows:
> > 1. Even if you could get single character display in an editor,
> > the fonts on 99% of the operating systems do not cover all the
> > Latin characters (extended A and B; try [&#557;] or [&#x22D;],
> > for instance - Latin small letter o with tilde and macron,
> > extended Latin B, 3.0 edition of the unicode set). The idea that
> > you can get a single composed character for even the full Latin
> > set is a mistake.
> >
> > 2. The problem is not the editor, but instead the conversion
> > software used to ensure that clipboard data is translated into the
> > correct character set when a document is being coded. Finding a
> > UTF-8 editor does not necessarily solve the problem. But it does
> > ensure that if your original document has correct unicode values
> > behind it, the editor will carry them into your document. Word
> > processors are famous for creating non-unicode characters of
> > their own.
> >
> > 3. There needs to be a way to test the conversion of clipboard
> > data against the original document. Tests need to be done to
> > demonstrate whether the word processor has support for its
> > character set. Usually this can be done in the IE browser with an
> > HTML document carrying a character set META tag for UTF-8.
> >
> > Solutions:
> > 1. If the document MUST have an encoding of ISO-8859-1, then the
> > stylesheet can convert the output document to UTF-8. This can
> > even be done from XML to XML. Most users think of stylesheets as
> > merely creating a display markup (HTML or XSL-FO), but the
> > parsers can do much more (given you are not doing Asian
> > languages).
> >
> > 2. Develop a tool set that ensures the clipboard data is
> > converted to UTF-8. I suggest using UniRed (template below).
> > UniRed is advocated on the Unicode site as a freeware utility
> > that can handle any Latin character sets.
> >
> > 3. Update your operating system. If you are a Windows user, you
> > will need to update your office package. Other font options are
> > available (there is a font available for $5). Although the fonts
> > will not ensure that you will see a single composed character,
> > updating the fonts and the operating system will ensure that you
> > will carry your characters over from their original document to
> > another program (editor).
> >
> > Conclusions:
> > What you really need to do is to design a workflow that produces
> > UTF-8 XML EAD documents. This, being XML, is really the world you
> > want your document to live in. Anything less will ultimately have
> > to be converted at some future date.
> >
> > Just don't confuse the ability to see a multi-byte character in
> > its composed form with the ability to use any text editor. The
> > issues are really very different.
> >
> > Sincerely,
> > Mike Ferrando
> > Library of Congress
> > Washington, DC
> > 301-772-4454
> >
> > UniRed:
> > http://www.esperanto.mv.ru/UniRed/ENG/index.html
> >
> > Template:
> > <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
> > "http://www.w3.org/TR/html40/loose.dtd">
> > <HTML>
> > <HEAD>
> > <TITLE>Example6.htm</TITLE>
> > <META HTTP-EQUIV="Content-Type" CONTENT="text/html;charset=utf-8">
> > <META NAME="generator" CONTENT="NoteTab Pro 4.85">
> > </HEAD>
> > <BODY>
> >
> > <P>test data here</P>
> >
> > </BODY>
> > </HTML>
> >
> >
> >
> > --- Elizabeth Shaw <[log in to unmask]> wrote:
> >
> >
> >>Hi,
> >>
> >>In fact, XML's default character set is UTF-8. However, if you
> >>don't have the character set available you can set the encoding
> >>to ASCII or ISO-8859-1, which is what NoteTab Pro is doing.
> >>
> >>I believe that NoteTab Pro (at least older versions) does not
> >>handle Unicode - so it cannot output in Unicode but rather uses
> >>Latin-1. Perhaps another editor that is fully Unicode compliant
> >>is in order. There are tons of them now that *are* Unicode
> >>compliant.
> >>
> >>If you are entering the data into your ACCESS database in UTF-8
> >>but you are using no characters other than those found in ASCII
> >>and Latin-1 (ISO-8859-1), then the data that you are producing
> >>requires no transformation (the ASCII and Latin-1 character sets
> >>are a subset of the first 256 characters in UTF-8) and the
> >>techies can output the data in an XML file that uses UTF-8.
> >>Because you are using no other characters, you should be able to
> >>edit in an editor that can handle Latin-1. (Although if the file
> >>is formatted absolutely correctly it might have a header at the
> >>beginning of the file that indicates that it is a unicode file,
> >>and your editor might choke on it.)
> >>
> >>However, a problem will occur if you are using characters other
> >>than those available in ASCII and Latin-1 and you want to use an
> >>editor that is not fully unicode compliant. Your techies can
> >>output UTF-8 but your editor will choke.
> >>
> >>In sum, it is your tool, not XML, that is the problem.
> >>
> >>If you don't know what the ISO-8859-1 character set is, there is
> >>a listing at
> >>
> >>http://www.htmlhelp.com/reference/charset/
> >>
> >>Liz Shaw
> >>
> >>
> >>
> >>Susan Hamburger wrote:
> >>
> >>>Our tech people are mapping an Access database to output both
> >>>EAD and MARC as XML documents. Currently, the SGML conversion to
> >>>XML in NoteTab Pro that I use generates this string
> >>>
> >>><?xml version="1.0" encoding="ISO-8859-1"?>
> >>>
> >>>The techies want to know if the encoding can be changed from
> >>>ISO-8859-1 to UTF-8 to support Unicode. My notes from the
> >>>Publishing EAD Finding Aids course indicate that XMetaL (which I
> >>>use to create my SGML documents) stores the document as UTF-8
> >>>and it needs to be changed to ISO-8859-1. Is this only for the
> >>>ASCII editor or does XML not support Unicode? My final output
> >>>HTML document has it converted back to UTF-8. Must ISO-8859-1 be
> >>>in the XML document so it can be converted to HTML and PDF? Or
> >>>is there some other reason why the encoding is in ISO and not
> >>>Unicode?
> >>>
> >>>Thanks for any help and advice.
> >>>
> >>>Sue
> >>>
> >>>Susan Hamburger, Ph.D.
> >>>Manuscripts Cataloging Librarian
> >>>Cataloging Services
> >>>126 Paterno Library
> >>>The Pennsylvania State University
> >>>University Park, PA 16802
> >>>
> >>>814/865-1755
> >>>FAX 814/863-7293



