See my comment below:

Amanda Xu
Sent from my iPhone

On Sep 26, 2011, at 9:30, "Myers, John F." <[log in to unmask]> wrote:

> -----Original Message-----
> Karen Coyle replied
> 
> Quoting "J. McRee Elrod" <[log in to unmask]>:
>> Where a resource is published is data only second to title and
>> statement of responsibility as wanted information, I suspect, and
>> perhaps equal with date of publication.
> 
> We're going to need real data on that. I don't recall myself EVER  
> looking at place of publication when identifying and selecting and  
> obtaining from a library catalog.
> 
> ------------------------------
> 
> Mac would answer this question with his usual example of the implications of London, Ont. And London, England for his law clients.  And I would add that in cataloging early books, the place of publication has mattered for identification of the resource or of the corresponding record, sometimes because the publisher name was not clear, not present, or published in multiple places.  This can also be pertinent for all time frames when working with resources outside one's preferred character set, where place of publication may be easier to decipher (or may have parallel representation).  
> 

AX: I would argue for publication data, e.g. names of publishers (260$b) and dates (260$c), for identification, branding, credibility rating, verification, etc. These fields and subfields apply to both old and new materials.

It's hard to locate the place of publication for new publications unless it is stated on the title page of the book. We may see more bibs using brackets (supplied places) as most publishers become globalized. Remember, this is the effect of "the world is flat."

However, for old publications, where sources for item and bib verification are scarce, we might as well keep the data in field 260$a.

In short, transcribing and marking up what's available in the piece being cataloged is still applicable in today's increasingly digital environment, as far as I know.
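As an aside, the 260 subfields discussed above can be sketched in code. This is a minimal illustration, not tied to any real MARC library; the `$` delimiter convention and the sample field values are assumptions for the example:

```python
def parse_subfields(field: str, delimiter: str = "$") -> dict:
    """Split a MARC-style field string into a {code: value} dict.

    Assumes each subfield starts with a one-character code after the
    delimiter; repeated codes (e.g. two $a) would overwrite each other
    in this simple sketch.
    """
    parts = [p for p in field.split(delimiter) if p]
    return {p[0]: p[1:].strip() for p in parts}

# Hypothetical 260 field: $a place, $b publisher, $c date.
field_260 = "$aLondon :$bExample Press,$c2011."
subfields = parse_subfields(field_260)

print(subfields["a"])  # place of publication
print(subfields["b"])  # publisher name
print(subfields["c"])  # date of publication
```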

> But, to launch a larger dialogue, I am not unconvinced that conducting a zero-sum analysis of the perceived relative worth of individual data elements in a data format is a bit of a straw man.  

Welcome to the modeling world! Most of us here are designers/info architects, who are often forced to make decisions in half-darkness or with only a glimpse of light.

The more we are informed through research, experiments, and trials of all kinds, the closer we come to making the straw man a precise match for the real object we are modeling. Experience counts here.

> We can lob personal and anecdotal assessments around all we want, but the data element needs are going to come from the respective descriptive rules -- a conversation for another list or at least another day.  Any prospective upgrade or replacement to MARC needs to be sufficiently robust and flexible to accommodate the descriptive requirements for any number of communities.
> 
Well, Jeffrey Trimble put it well in his remark, something like "Don't throw the baby out with the bathwater." We've invested a lot in MARC, especially if we ever coded our collections to its full-level encoding.

I have checked MARBI's updates constantly and been amazed by the speed with which it tries to catch up with the needs of the community.

I don't know about retrospective rules. I recently piled updates from the LCRIs and other tools that reflect the RDA specification and training into one big binder. It's pretty intimidating for anyone to stay on top of cataloging practice. Maybe you have other viewpoints that I missed; please bear with me if that is the case.


> It was embarrassingly recent that I learned MARC was not the be-all and end-all of ISO2709, but rather a significantly constrained application of it.

Well, I view it as a functional component within the puzzle of bibliographic information exchange using the Web as infrastructure. I don't have access to ISO 2709 yet. Thanks for the info!
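Since ISO 2709 came up: its physical layout is simple enough to sketch. A record is a 24-byte leader, then a directory of fixed-width 12-byte entries (3-character tag, 4-digit field length, 5-digit start offset), then the field data, with 0x1E terminating the directory and each field and 0x1D terminating the record. The parser below is a rough sketch that ignores the leader's internal fields, and the sample record is hand-built for illustration, not taken from a real file:

```python
FT = "\x1e"  # field terminator (ends the directory and each field)
RT = "\x1d"  # record terminator

def parse_iso2709(record: str) -> dict:
    """Return {tag: field data} from one ISO 2709-style record string."""
    leader = record[:24]                 # 24-byte leader (contents ignored here)
    dir_end = record.index(FT, 24)       # directory runs from byte 24 to first FT
    directory = record[24:dir_end]
    base = dir_end + 1                   # field data starts after the directory's FT
    fields = {}
    for i in range(0, len(directory), 12):
        entry = directory[i:i + 12]
        tag = entry[0:3]
        length = int(entry[3:7])         # field length, including trailing FT
        start = int(entry[7:12])         # offset from the base address
        fields[tag] = record[base + start: base + start + length].rstrip(FT)
    return fields

# Hand-built sample: one 245 field, "Example title" (13 chars) + FT = length 14.
sample = (
    "00000nam a2200000 a 4500"   # 24-byte leader, length fields left as zeros
    + "245001400000" + FT        # directory: tag 245, length 0014, start 00000
    + "Example title" + FT       # field data
    + RT
)
print(parse_iso2709(sample)["245"])  # -> Example title
```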

>  I don't know if "NEWMARC" as Jeffrey Trimble proposes would or should be the answer, but it certainly has merit as part of the conversation.  
> 
We can label Jeffrey Trimble's proposal "Global MARC." More to come!

> Regardless of the mechanism for the resulting communication format, I think there is a need to address the larger issues of:
> * transcribed vs. controlled data, 

AX: It is important to allow both in today's participatory content-creation and collaborative-tagging environment.

> * controlled data rendered in representational (i.e. text) and non-representational (e.g. URIs) forms (because regardless of the disadvantages of the former, there will likely be instances where that is the only option available), 

Where the option exists, we should choose the URI, and allow more catalogers to take stewardship of taxonomy/thesaurus/authority-list maintenance in local and global settings. As we all know, they are on the front line of assimilating new concepts into these files.
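To make the text-vs-URI point concrete, here is a minimal sketch of carrying both forms of one controlled data element side by side. The class name, the sample URI, and the fallback policy are all assumptions for illustration, not an existing schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ControlledElement:
    """One controlled data element in both of its possible forms."""
    label: str                 # representational (text) form, always present
    uri: Optional[str] = None  # non-representational form, when one exists

    def preferred(self) -> str:
        # Use the machine-actionable URI when we have one;
        # fall back to the text label when that is the only option.
        return self.uri or self.label

# Hypothetical examples: one fully linked, one legacy text-only element.
place = ControlledElement(
    label="London (England)",
    uri="http://example.org/authorities/places/london-england",
)
legacy = ControlledElement(label="London, Ont.")

print(place.preferred())   # the URI
print(legacy.preferred())  # the text label, the only form available
```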

> * how to connect those respective representations of the same data element, 

Remember the "Access Control Records" concept in AACR2/MARC, the owl:sameAs property in OWL, etc.
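owl:sameAs assertions are transitive and symmetric, so connecting "the same" representations amounts to computing a closure over the asserted links. A small union-find sketch shows the idea; all identifiers below are made up:

```python
def same_as_clusters(pairs):
    """Group identifiers into clusters under transitive sameAs links."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)          # union the two clusters

    clusters = {}
    for x in list(parent):
        clusters.setdefault(find(x), set()).add(x)
    return list(clusters.values())

# Hypothetical sameAs assertions linking three identifiers for one entity.
links = [("lcnaf:n1", "viaf:123"), ("viaf:123", "wikidata:Q42x")]
print(same_as_clusters(links))  # one cluster containing all three identifiers
```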

> * how to connect the individual data elements applicable to a given resource,  

Ideally, we derive data elements from the given resources as much as we can. This is the only way to avoid straw-man fears.

> * how to connect those out to other resources and entities,

Add appropriate relationship terms and role and link types, and code and process them.

> * extensibility, as much as possible, with respect to future data needs (witness the cited regret of Avram in developing different bibliographic formats for specific media, and the implications of FRBR that alters the bifurcation between authority/bibliographic data and replaces it with data for entity groups). 
> (And all while keeping the data relatively compact!) 
> 
Remember, what matters in the end is user satisfaction!

> Somewhat farther afield, how to realize all of that into an interface that:
> * translates computer friendly, language-neutral coding (element labels and data) into something intelligible for those performing the data entry in the context of multiple languages and descriptive codes, 

Get a lingua-franca plug-in from vendors, etc.

> * won't require full double entry of transcribed and controlled data, and ultimately 

An optional feature is fine. Differentiating controlled terms from transcribed data is great if the terms are programmed to anticipate matching of users' queries.

> * renders a coherent and consistent display to end-users of the data.
> 
> 
It depends on the rendering context and user preferences. But it is an important requirement for data-integrity validation when we perform data quality and reliability assessments on the back end.

> 
> John F. Myers, Catalog Librarian
> Schaffer Library, Union College
> 807 Union St.
> Schenectady NY 12308
> 
> 518-388-6623
> [log in to unmask]
> 
> 
>