On 8/1/14, 8:13 AM, [log in to unmask] wrote:
> The claim I'm making is that it's not the absolute number of blank nodes coming out of an automated transform that should bother us. It's the number that are more-or-less inherent in the model, and we cannot gauge _that_ number over unreconciled data. We can predict it to some limited extent directly from the model, but we are going to produce different volumes of blank nodes by exercising Bibframe over different bodies of data, with different regimes of co-reference management, with different schemes of reconciliation, and so forth.

I'd add a somewhat different criterion for determining what to do about 
blank nodes, and that is "application functionality." Blank nodes affect 
functionality. If some particular functionality is needed and cannot be 
achieved because of blank nodes, then one must change the model to 
support the functionality. This turns it from a theoretical question 
("blank nodes are bad") to a practical one ("blank nodes don't allow me 
to do X, which I need for my application").
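To make that concrete, here's a minimal sketch (Python with rdflib; the
Bibframe-ish sample data is invented for illustration) of the gap: a
blank node's label is local to the graph it came from, so a client has
nothing stable to link to or re-query later.

    from rdflib import Graph, URIRef

    g = Graph()
    g.parse(data="""
        @prefix bf: <http://bibframe.org/vocab/> .
        <http://example.org/work/1>
            bf:hasInstance [ bf:title "Example title" ] .
    """, format="turtle")

    # The instance comes back as a blank node; its label is an artifact
    # of this parse, not a stable identifier.
    pred = URIRef("http://bibframe.org/vocab/hasInstance")
    for inst in g.objects(None, pred):
        print(inst)  # e.g. "Nf3b2..." -- meaningless outside this graph

    # There is no IRI a client can use afterward to link to, annotate,
    # or fetch that instance; a "_:b0" written into a SPARQL query acts
    # as a variable, not a reference back to this particular node.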

I'd be interested to hear from the experimenters 1) whether they retain 
the bnodes as blank or "skolemize" them, and 2) what functionality they 
have tested (e.g., what searches do you support?).
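
For the skolemizing option, rdflib happens to have this built in:
Graph.skolemize() returns a copy of the graph with every blank node
rewritten as an IRI (under /.well-known/genid/ by default). Again just
a sketch over invented data:

    from rdflib import Graph

    g = Graph()
    g.parse(data="""
        @prefix bf: <http://bibframe.org/vocab/> .
        <http://example.org/work/1>
            bf:hasInstance [ bf:title "Example title" ] .
    """, format="turtle")

    # Each former bnode now has a skolem IRI, so it can be referenced,
    # queried, and linked like any other resource.
    print(g.skolemize().serialize(format="turtle"))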

kc

-- 
Karen Coyle
[log in to unmask] http://kcoyle.net
m: 1-510-435-8234
skype: kcoylenet