On Tue, Nov 4, 2014 at 10:21 AM, Karen Coyle <[log in to unmask]> wrote:
On 11/4/14 4:46 AM, [log in to unmask] wrote:
On Nov 3, 2014, at 8:25 PM, Karen Coyle <[log in to unmask]> wrote:
This is true, but it leaves us with the dilemma of how to add new types. In the MARC world, this has been a real problem. When you cannot use a string, the new type has to be defined in the vocabulary before it can be used "in the wild."
[...] If types are URIs then a library can mint its own URI (which will not be understood by anyone else, and may not be correctly used by its own system). If types are subclasses, then we have the problem that BF is "owned" by LC, and to add new subclasses we need an extension method that doesn't break our ability to share.
It is not true that adding a new type is difficult: in the Linked Data world, that is no more difficult than defining it using the same language as was used for the original types and then publishing the definition at an HTTP URL, as has been discussed on this list previously. In fact, it is vastly easier than the process of updating a standard under the control of some semi-central organization.
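To make that concrete, here is a minimal sketch of what "defining it using the same language as the original types" amounts to: a handful of RDFS triples that a library would publish at its own HTTP URI. All of the example.org URIs and the ZineInstance class name below are hypothetical, chosen only for illustration; the triples are shown as plain Python tuples rather than serialized Turtle.

```python
# Hypothetical example: a library defines a new type, ex:ZineInstance,
# as a subclass of an existing BIBFRAME class, and would publish these
# triples at http://example.org/vocab so others can dereference them.
RDFS = "http://www.w3.org/2000/01/rdf-schema#"
BF = "http://id.loc.gov/ontologies/bibframe/"
EX = "http://example.org/vocab#"

new_type_definition = [
    (EX + "ZineInstance", RDFS + "subClassOf", BF + "Instance"),
    (EX + "ZineInstance", RDFS + "label", "Zine instance"),
    (EX + "ZineInstance", RDFS + "comment",
     "A locally defined subclass for self-published zines."),
]

for triple in new_type_definition:
    print(triple)
```

That is the entire "standards process": write the triples, put them on the web at the class's own URI.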

It is not true that a URI minted by Library X will not be understood by any other institution, if Library X takes the straightforward steps of using an HTTP URI, using a standard language to create the definition, and publishing its definition at the URI. This is just Linked Data. If, later, I am examining some set of triples that uses that unfamiliar URI, my software can dereference it, examine the (machine-processable) definition, and act thereon.

It is not true that a special extension method is needed to create subclasses that do not prevent information sharing. Simple triples on webpages will do, because anyone can create new classes that way, and our freedom to share data is not going to be impinged upon by someone who publishes a badly made new class. The effect of a new type is limited to its area of use: I can publish all the bad types I like, but until you use them in your data, or someone whose data you want to use does, they do not affect you. If you do decide to use my new types, LC has absolutely nothing to say about it.

It's not technically difficult. It is "habitually" difficult because of the way the library world has handled standards in the past. And it sure looks to me like we're headed along that same path with BIBFRAME and with RDA. It has a lot to do with how we share data, and our use of vendor systems. I would love for that to all change, for it to be both easy and acceptable for libraries to extend metadata as needed. I also would like libraries to be able to modify their local systems for local needs rather than there being one and only one way to do things in library-land. I'm not terribly hopeful, however.

Some comments:

Technical points:
  1. Anyone can create a subclass. A subclass does not have to be in the same namespace as the original class.
  2. If a system encounters an instance of a class that it has no specific knowledge of, it can handle it as an instance of one of the class's superclasses.
  3. If a system encounters a string that it has no specific knowledge of, it has a string.
  4. If several subclasses with the same meaning are defined in different places, they can be declared to be equivalent classes.
  5. Metadata can be attached to classes; this can include labels, identifiers, and display hints.
  6. It is possible to define a class as the set of things that have a specific string as the value of a property. This can be used to infer an instance's class given the property value, or the property value given the class.
  7. It is possible to specify key properties for a class, so that any two URIs which have the same values for all of those properties can be inferred to be two different names for the same thing.
  8. rdf:value has a range of rdfs:Resource. It is thus a source of URI/string punning.
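Points 2 and 4 can be sketched in a few lines: a system that does not recognize a class can climb rdfs:subClassOf links until it reaches a class it does know, and owl:equivalentClass links merge classes defined in different namespaces. The class names (ex:ZineInstance, other:Fanzine) and the tiny schema below are hypothetical; prefixed names stand in for full URIs.

```python
# Illustrative sketch of superclass fallback plus equivalence:
# walk subClassOf upward and equivalentClass in both directions.
SUBCLASS = "rdfs:subClassOf"
EQUIV = "owl:equivalentClass"

# Hypothetical schema: two independently minted classes, linked.
schema = [
    ("ex:ZineInstance", SUBCLASS, "bf:Instance"),
    ("other:Fanzine", EQUIV, "ex:ZineInstance"),
]

def superclasses(cls, triples):
    """All classes reachable from cls via subClassOf or
    equivalentClass links (including cls itself)."""
    found, frontier = set(), {cls}
    while frontier:
        c = frontier.pop()
        found.add(c)
        for s, p, o in triples:
            if p == SUBCLASS and s == c and o not in found:
                frontier.add(o)
            elif p == EQUIV and c in (s, o):
                other = o if s == c else s
                if other not in found:
                    frontier.add(other)
    return found

# A consumer that only knows bf:Instance can still place other:Fanzine:
print("bf:Instance" in superclasses("other:Fanzine", schema))  # True
```

Nothing here requires the subclass, the equivalence statement, and the original class to live in the same namespace, or to be controlled by the same organization.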
Policy points:
  1. If only there were some sort of precedent for setting up some sort of  program for cooperative cataloging that could let appropriately trained library staff create values that could somehow be turned into linked data objects... some sort of type authority cooperative program (TACO).