This is exactly the point. If we're getting information from multiple sources and it doesn't completely agree, we can use the provenance to determine whether there's a preponderance across all sources, whether the most reliable sources agree, whether the 'freshest' sources agree (in the case of something likely to change), or all of the above. We can build in measures of quality, reliability, or any other criteria we think are important. 

We can also 'smarten up' the data we have, using strategies already described and used by others. This is not the way we think about data now, but it's surely the way we'll need to think about it in future.
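To make this concrete, here's a minimal sketch of the kind of provenance-weighted reconciliation I mean. Everything in it is hypothetical (the source names, the reliability weights, the freshness heuristic): each assertion carries its source and date, each source carries a reliability score, and the value backed by the weighted preponderance of sources wins.

```python
from collections import defaultdict

def resolve(assertions, reliability, prefer_fresh=False):
    """Pick the value favored by the weighted preponderance of sources.

    assertions:  list of (source, value, year) tuples
    reliability: dict mapping source -> weight (hypothetical scores)
    """
    scores = defaultdict(float)
    for source, value, year in assertions:
        weight = reliability.get(source, 1.0)
        if prefer_fresh:
            # Crude freshness boost for illustration only; a real system
            # would use a proper recency decay.
            weight *= year
        scores[value] += weight
    return max(scores, key=scores.get)

# Three sources disagree on a title; the two more reliable ones agree.
assertions = [
    ("library_a", "Title X", 2010),
    ("library_b", "Title X", 2011),
    ("vendor_c",  "Title Y", 2012),
]
reliability = {"library_a": 0.9, "library_b": 0.9, "vendor_c": 0.4}
print(resolve(assertions, reliability))  # -> Title X
```

The same skeleton accommodates any other criterion we decide matters: just fold it into the weight before summing.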


On Sun, Jan 15, 2012 at 10:10 AM, Karen Coyle <[log in to unmask]> wrote:
Obviously, if some libraries use less precise information, matching and de-duplicating can also be less precise. However, the totality of the information that you have — some information from each of the different sources — can be used to make sense of things. So if MOST libraries have stated that the title page title is "Title X", then you can assume that an undifferentiated title "Title X" is a title page title. It's not a 100% kind of match any more, but a more nuanced match (that Simon probably has the correct terms for!). The Open Library used a version of this in its determination of Works: it used the information from records that did have uniform titles to bring in records for the same manifestation that had not included the uniform title.