On Jan 18, 2013, at 11:54 AM, Jeffrey Allen Trimble <[log in to unmask]> wrote:

> I concur with Dr. Tillett's comments about dumbing down the elements. Granularity/detail is what sets MARC apart, and replacing it with something that can express that granularity is important and will make the migration to a new communications format seamless in so many ways. And while it may seem counterproductive that the ILSs on the market don't always use that data in a way that serves the end user, many of us have run reports from our ILS systems looking for LDR, 007/008 data for projects.

I'm not sure what, exactly, we're arguing about here. MARC data is only as detailed as the creator of the record chooses it to be. The vast majority of the time, this is *far* less detailed than I, as a developer, would like it to be.

How often, in aggregate, is the 100 or 700 $4 or $e used? Since I've parsed a lot of MARC records, I can tell you: not much (and the inconsistency of the $e makes this even less useful).

Tell me, what makes the 100/700 fields in:

http://lccn.loc.gov/76019078/marcxml

"better" than

<http://lccn.loc.gov/76019078> <http://rdvocab.info/roles/authorWork> <http://id.loc.gov/authorities/names/n50037401> ;
    <http://rdvocab.info/roles/illustrator> <http://id.loc.gov/authorities/names/n79054194> .

?

The latter by no means dumbs anything down, but, just like MARC today, you have to use it.

-Ross.

> Jeffrey Trimble
> Associate Director &
> Head of Information Services
> William F. Maag Library
> Youngstown State University
> 330.941.2483 (Office)
> [log in to unmask]
> http://www.maag.ysu.edu
> http://digital.maag.ysu.edu
> "For he is the Kwisatz Haderach..."
> From: Barbara Tillett <[log in to unmask]>
> Reply-To: Bibliographic Framework Transition Initiative Forum <[log in to unmask]>
> Date: Friday, January 18, 2013 9:55 AM
> To: "[log in to unmask]" <[log in to unmask]>
> Subject: Re: [BIBFRAME] Input screens
>
> Let's please not dumb down the possible elements for the sake of a very simple element set - that was done with Dublin Core, which has proven helpful for limited applications but has limits when more granularity/detail is needed. Let's learn from that and enable the future framework to meet the needs of all users.
>
> Our future framework really must accommodate details that will be important for humans. Plus we need to do it in a way that machines can easily manipulate, so humans can understand and see connections/relationships and confirm/distinguish among similar things in order to do what they wish with information. If we can accommodate details, then we can simplify our display and uses of that data - condensed into a shorter/brief view of data or mapped to broader categories of data; but if we only provide a few elements, we can't go from generalities to details when that is needed.
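[The kind of subfield-usage check Ross describes - scanning 100/700 fields for $e/$4 relators - can be sketched with nothing but the Python standard library. This is an illustrative sketch, not Ross's actual tooling; the embedded MARCXML record is invented for the example and is not the record behind the LCCN URL above.]

```python
# Count how many 100/700 fields in a batch of MARCXML records carry a
# relator term ($e) or relator code ($4). Stdlib only; the sample
# record below is a made-up illustration.
import xml.etree.ElementTree as ET

MARC_NS = "http://www.loc.gov/MARC21/slim"

SAMPLE = """<collection xmlns="http://www.loc.gov/MARC21/slim">
  <record>
    <datafield tag="100" ind1="1" ind2=" ">
      <subfield code="a">Example, Author.</subfield>
    </datafield>
    <datafield tag="700" ind1="1" ind2=" ">
      <subfield code="a">Example, Illustrator.</subfield>
      <subfield code="e">ill.</subfield>
      <subfield code="4">ill</subfield>
    </datafield>
  </record>
</collection>"""

def count_relators(marcxml: str) -> dict:
    """Tally 100/700 fields, and how many of them have a $e or $4."""
    root = ET.fromstring(marcxml)
    counts = {"fields": 0, "with_e": 0, "with_4": 0}
    for df in root.iter(f"{{{MARC_NS}}}datafield"):
        if df.get("tag") not in ("100", "700"):
            continue
        counts["fields"] += 1
        codes = {sf.get("code") for sf in df.findall(f"{{{MARC_NS}}}subfield")}
        if "e" in codes:
            counts["with_e"] += 1
        if "4" in codes:
            counts["with_4"] += 1
    return counts

print(count_relators(SAMPLE))  # {'fields': 2, 'with_e': 1, 'with_4': 1}
```

[Run over a real record dump, the ratio of `with_e`/`with_4` to `fields` is what tells you how rarely catalogers actually record relationships in MARC.]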