Karen-
Does the following spreadsheet for the combined BSRs help: http://www.loc.gov/catdir/pcc/bibco/bsr_cmb_tbl_20110617a.pdf ?
Kathy Glennan
Head, Special Resources Cataloging / Music Cataloger
University of Maryland
[log in to unmask]
-----Original Message-----
From: Bibliographic Framework Transition Initiative Forum [mailto:[log in to unmask]] On Behalf Of Karen Coyle
Sent: Monday, November 07, 2011 1:54 PM
To: [log in to unmask]
Subject: Re: [BIBFRAME] What the data tells us
Ooof! If these were in a spreadsheet format (or tabbed text) it would be easy to see what fields they have in common. Also, it would be helpful to describe them in other than MARC terms. For example, the Books one has ISBN, the serials one ISSN, and both of these are covered by the RDA "identifier for the Manifestation" which not only covers them both but allows other useful identifiers to be input.
Also, Date1 and Date2 are MARC designations but have a bunch of different meanings depending on the date code.
I always seem to end up at the need to figure out what's really in MARC:
http://futurelib.pbworks.com/w/page/29114548/MARC%20elements
kc
Quoting "Bartl, Joseph" <[log in to unmask]>:
> For "core" fields, why not look at PCC's BSR (BIBCO Standard Record)
> for each format: http://www.loc.gov/catdir/pcc/bibco/BSR-MAPS.html
>
> These were cooperatively developed structures that convey pretty much
> what, from the standpoint of catalogers anyway, seems essential. These
> would need, of course, to be discussed more broadly, internationally,
> but these would make a credible point of departure.
>
> Joe
>
> Joe Bartl
> Head, Music Bibliographic Access Section 1
> Music Division
> Library of Congress
> 101 Independence Avenue, SE
> Room LM 542
> Washington, DC 20540-9420
> Desk: 202-707-0013
> Email: [log in to unmask]<mailto:[log in to unmask]>
>
>
> -----Original Message-----
> From: Bibliographic Framework Transition Initiative Forum
> [mailto:[log in to unmask]] On Behalf Of Karen Coyle
> Sent: Monday, November 07, 2011 11:24 AM
> To: [log in to unmask]
> Subject: Re: [BIBFRAME] What the data tells us
>
> The only two I am familiar with are Dublin Core and RDA Core. Dublin
> Core is unrelated to library cataloging and is missing some things
> that some folks would consider essential, like a data element for
> edition. It also doesn't have a way to specify a series name or a
> conference name. (Those can be input, but they go into a more general
> field.) Then again, you might not consider those core. DC has
> possibilities but some of the elements seem to be aimed specifically
> at academic materials rather than general ones (e.g.
> "instructionalMethod".) DC should definitely be considered as a
> candidate, although it may not be everything that is needed. It has
> 65 elements, some of which I would not consider to be core.
>
> RDA core has 120 elements (I'm basing this on the LC training
> materials) [1], and includes things like:
>
> - Vertical scale of cartographic content
> - Form of musical notation
>
> and others that might be better seen as extensions based on format.
>
> RDA's "core" reads something like the minimal bibliographic record. [2]
> It is "core" within the context of the full RDA, which means that
> it defines what is core for all material types, not what is core
> regardless of material type. That latter definition would be closer to
> DC, although even that has some odd biases.
>
> kc
>
> [1] http://www.loc.gov/catdir/cpso/RDAtest/training2word7.doc
> [2] http://loc.gov/marc/bibliographic/nlr/
>
> Quoting "Riley, Charles" <[log in to unmask]<mailto:[log in to unmask]>>:
>
>> I've only been cataloging for the last decade, but in that time I
>> can't remember how often I've been introduced to the idea of some new
>> 'core' that I should be getting trained into and used to using, or at
>> least familiar with. To wit:
>>
>> Core-level (E/L 4)
>> PCC Core
>> Dublin Core
>> RDA Core
>> Bibliographic Standard Reference
>>
>> Agreed, that a modular approach is conceptually appropriate and
>> workable--it is a worthy undertaking. But are all the existing
>> attempts at a core standard so woefully inadequate that they can't be
>> used? If so, what were the common failings in the approach to
>> develop them?
>>
>> Charles Riley
>>
>> ________________________________________
>> From: Bibliographic Framework Transition Initiative Forum
>> [[log in to unmask]] on behalf of Karen Coyle [[log in to unmask]]
>> Sent: Sunday, November 06, 2011 3:00 PM
>> To: [log in to unmask]<mailto:[log in to unmask]>
>> Subject: Re: [BIBFRAME] What the data tells us
>
>> Roy, I wish you'd said all of this to begin with! Yes, we need to
>> create a simple core structure that can be extended. This is what we
>> do not have with MARC, and we definitely do NOT have with RDA.
>> Unfortunately, RDA is more like MARC than what you describe below. We
>> do have an opportunity to create something more workable in this
>> transition, but if we do not then we will be stuck with an unworkably
>> complex data carrier for a very long time. As some said when RDA was
>> still in progress, this may be our last chance to get it right because
>> we are falling further and further behind as information providers.
>>
>> Coming up with a core is tricky, to say the least. RDA's core includes
>> elements that are core for all of the formats that it supports -- so
>> there are core music elements, core maps elements, etc., all as part
>> of a single core. I'm not sure that helps us. FRBR's entities are
>> probably a better core -- although I find there to be some
>> idiosyncrasies in FRBR (the four Group 3 entities, to start) that need
>> to be ironed out. I do think that it is essential that we start from
>> zero and re-think core for the purposes of a new framework.
>>
>> kc
>>
>> Quoting Roy Tennant <[log in to unmask]<mailto:[log in to unmask]>>:
>>
>>> Karen,
>>> I think you missed my point. The point wasn't to enrage music
>>> catalogers by leaving a field or subfield behind that they simply
>>> must have -- it was rather to determine a core of bibliographic
>>> description (which I submit the data DOES tell us), then allow
>>> communities of interest to specify ways in which that core can be
>>> decorated with what they require without ending up where we did with
>>> MARC -- with an arguably bloated record (and I'm including subfields
>>> here) that tries to be prepared for every eventuality. That's why I
>>> suggested modularity as being an excellent strategy for
>>> accomplishing one of my pet goals (to respond to Hal Cain's request):
>>>
>>> · Simple aims should be simple to accomplish.
>>>
>>> · Complexity should be avoided unless it is absolutely required to
>>> achieve the goal.
>>>
>>> · If complexity is required, it should be sequestered. That is,
>>> complexity should not spill over to affect those who don't need it
>>> to achieve their goals.
>>>
>>> When a MARC subfield is used 17 times out of 240 million records we
>>> may want to consider just how important it is to create it, document
>>> it, and write software to process it.
>>> Roy
>>>
>>> On 11/5/11 1:24 PM, "Karen Coyle"
>>> <[log in to unmask]<mailto:[log in to unmask]>> wrote:
>>>
>>>> Quoting Roy Tennant <[log in to unmask]<mailto:[log in to unmask]>>:
>>>>
>>>>> I believe you are missing the point. The evidence is clear -- the
>>>>> vast majority of the some 3,000 data elements in MARC go unused
>>>>> except for a small percentage of records in terms of the whole.
>>>>> What isn't there cannot be indexed or presented in a catalog, no
>>>>> matter how hard you try. In other words, which fields were coded is
>>>>> the only relevant information. It is the ONLY relevant information
>>>>> when you are discussing how to move forward.
>>>>
>>>> I disagree. (As does the OCLC report, BTW.) To some extent the stats
>>>> on MARC records reflect the many special interests that MARC tries
>>>> to address. I have spent more time on the Moen statistics [1] than
>>>> the OCLC ones, although since they were done on the same body of
>>>> data I don't see how they could be very different.
>>>>
>>>> In the case of what Moen turned up, the most highly used fields were
>>>> ones that systems require (001, 005, 008, 245, 260, 300) -- it's a
>>>> bit hard to attribute that to cataloger choice. But for the
>>>> remainder of the fields there is no way to know if the field is
>>>> present in all of the records that it *should* be, or not.
>>>>
>>>> At least some of the low use fields are ones that serve a small-ish
>>>> specialized community. Only 1.3% of the OCLC records have a
>>>> Cartographic Mathematical Data field (255), but according to the
>>>> OCLC report that represents a large portion of the Maps records
>>>> (p. 23 of the OCLC report). It's harder to make this kind of
>>>> analysis for fields that can be used across resource types. For
>>>> example, 35-47% of the records (OCLC v. LC-only, respectively, from
>>>> Moen's stats) have a Geographic Area Code (043). Undoubtedly some
>>>> records should not have that field, so is this field a reliable
>>>> indicator that the resource has geographic relevance? We have no
>>>> way of knowing. In addition, as MARC fields are constantly being
>>>> added, some fields suffer from not having been available in the
>>>> past. (Moen does a comparison of fields used over time [2], and the
>>>> OCLC report also looks at this; see below.)
>>>>
>>>> Neither the Moen stats nor the OCLC report really tell us what we
>>>> need to know. It's not their fault, however, because we have no way
>>>> to know what the cataloger intended to represent, nor if the MARC
>>>> record is complete in relation to the resource. My experience with
>>>> some specialized libraries (mainly music and maps) was that these
>>>> communities are diligent in their coding of very complex data.
>>>> These, however, represent only small numbers in a general catalog.
>>>>
>>>> The OCLC report reaches this conclusion:
>>>>
>>>> "That leaves 86 tags that are little used, or not used at all, as
>>>> listed in the 'MARC 21 fields little or not used' table (Table 2.14,
>>>> p. 32). Of these infrequently occurring fields, 16 are fields that
>>>> were introduced between 2001 and 2008. Three of these fields
>>>> (highlighted in orange) have no occurrences in WorldCat since OCLC
>>>> has no plans to implement them."
>>>>
>>>> This means that there are really 67 fields that seem to be underused.
>>>> That is out of 185 tags (not 3,000, which would be more like the
>>>> number of subfields). That's about 1/3. Having sat in on many MARBI
>>>> meetings, however, I am sure that there are communities that would
>>>> be very upset if some of these fields were removed (e.g. musical
>>>> incipits, GPO item number). Admittedly, some fields were introduced
>>>> that then turned out not to be useful. If those can be identified,
>>>> so much the better.
>>>>
>>>> Basically, there is no way to know a priori what fields *should* be
>>>> in a MARC record other than the few that are required. Deciding
>>>> which fields can be left behind is going to take more than a
>>>> statistical analysis. I agree that we should not carry forward all
>>>> MARC data just "because it is there." The analysis, though, is going
>>>> to be fairly difficult. Even more difficult will be the analysis of
>>>> the fixed fields. I could go on about those at length, but that
>>>> analysis will be complicated by the fact that the fixed fields are
>>>> frequently a duplicate of data already in the record, and we never
>>>> should have expected catalogers to do the same input twice for the
>>>> same information -- we should have had a way to accomplish indexing
>>>> and display with a single input.
>>>>
>>>> kc
>>>>
>>>> [1] http://www.mcdu.unt.edu/?p=41
>>>> [2] http://www.mcdu.unt.edu/?p=47
>>>>
>>>>> The one thing you said that I agree with wholeheartedly is that we
>>>>> should know what data is useful to users. Yes. That.
>>>>> Roy
>>>>>
>>>>> On 11/4/11 10:41 PM, "J. McRee Elrod"
>>>>> <[log in to unmask]<mailto:[log in to unmask]>> wrote:
>>>>>
>>>>>> Roy Tennant <[log in to unmask]<mailto:[log in to unmask]>> wrote:
>>>>>>
>>>>>>> "Implications of MARC Tag Usage on Library Metadata Practices"
>>>>>>> http://www.oclc.org/research/publications/library/2010/2010-06.pdf
>>>>>>
>>>>>> This study told us what fields were in records, not whether those
>>>>>> fields were utilized in OPACs. MARC has a wealth of information
>>>>>> never put to practical use. Which fields were coded is fairly
>>>>>> useless information.
>>>>>>
>>>>>> A study of what fields OPACs actually use might be helpful, but
>>>>>> that still does not tell us what fields might be helpful to
>>>>>> patrons if they were utilized.
>>>>>>
>>>>>> __ __ J. McRee (Mac) Elrod ([log in to unmask]<mailto:[log in to unmask]>)
>>>>>> {__ | / Special Libraries Cataloguing HTTP://www.slc.bc.ca/
>>>>>> ___} |__
>>>>>> \__________________________________________________________
>>
>> --
>> Karen Coyle
>> [log in to unmask]<mailto:[log in to unmask]> http://kcoyle.net
>> ph: 1-510-540-7596
>> m: 1-510-435-8234
>> skype: kcoylenet
>
> --
> Karen Coyle
> [log in to unmask]<mailto:[log in to unmask]> http://kcoyle.net
> ph: 1-510-540-7596
> m: 1-510-435-8234
> skype: kcoylenet
--
Karen Coyle
[log in to unmask] http://kcoyle.net
ph: 1-510-540-7596
m: 1-510-435-8234
skype: kcoylenet