On Nov 7, 2014 12:35 PM, "[log in to unmask]" <[log in to unmask]> wrote:
> In some cases, it may be possible for Bibframe applications to do a
> _better_ interpretation if types are present or can be inferred, but it
> should never be the case that an application can make no sense or can only
> offer a bizarre or nonsensical interpretation of some Bibframe triples
> without typing information.

I'm going to stay out of the meta-discussion and just make some technical
remarks.

1. If a set of assertions where all entities are fully typed has a model,
then removing an assertion does not eliminate that model; it just allows
other models as well. (This is just monotonicity: an interpretation that
satisfies a set of triples also satisfies every subset of it.) So removing
type information cannot leave *only* the bizarre models.
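
To make that concrete, here is a minimal sketch in Python with rdflib (my
choice of tooling for illustration, nothing implied by the thread; the
namespaces and resource names are invented). It shows that a graph with a
type triple deleted is a subset of the original, so any interpretation
satisfying the original graph still satisfies the reduced one:

    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/")           # hypothetical data namespace
    BF = Namespace("http://example.org/bibframe/")  # hypothetical vocab namespace

    full = Graph()
    full.add((EX.w1, RDF.type, BF.Work))
    full.add((EX.w1, BF.title, Literal("Some title")))

    # Drop the typing triple.
    reduced = Graph()
    for t in full:
        if t != (EX.w1, RDF.type, BF.Work):
            reduced.add(t)

    # Every triple of `reduced` is asserted in `full`, so any model of
    # `full` is also a model of `reduced`: deleting the type cannot rule
    # out the intended interpretation, it only admits additional ones.
    assert all(t in full for t in reduced)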

2. *Some* inferences for *some* axioms can be found extremely cheaply in
*some* implementations. (Asterisks indicate bold, italic, comic sans, and
blink.)

One example is inferring the type of an entity that is the subject (resp.
object) of a property with an explicit simple rdfs:domain (resp.
rdfs:range) statement, when querying a data store that is mapped from a
relational database. In this situation, some type information comes
straight from the mapping, so it can be done "for free". This inferencing
is sound but not complete. Of course, entities mapped from an RDB will
usually have a type, though it may not be the one you want.
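
A minimal sketch of that domain/range rule, in Python with rdflib (the
vocabulary and property names are invented for illustration):

    from rdflib import Graph, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/")           # hypothetical data namespace
    BF = Namespace("http://example.org/bibframe/")  # hypothetical vocab namespace

    g = Graph()
    # Schema: simple domain and range statements for one property.
    g.add((BF.instanceOf, RDFS.domain, BF.Instance))
    g.add((BF.instanceOf, RDFS.range, BF.Work))
    # Data: an assertion about entities with no explicit types.
    g.add((EX.i1, BF.instanceOf, EX.w1))

    # One pass of the RDFS domain/range rules (rdfs2/rdfs3): type the
    # subject from rdfs:domain and the object from rdfs:range.  Sound
    # but not complete -- it finds only the types these rules expose.
    new = []
    for prop, cls in g.subject_objects(RDFS.domain):
        new += [(s, RDF.type, cls) for s in g.subjects(prop, None)]
    for prop, cls in g.subject_objects(RDFS.range):
        new += [(o, RDF.type, cls) for o in g.objects(None, prop)]
    for t in new:
        g.add(t)

    assert (EX.i1, RDF.type, BF.Instance) in g
    assert (EX.w1, RDF.type, BF.Work) in g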

Finding the superclasses of a given type only needs to be done once, and
the result can be cached. This does not find all the classes of an
individual, but it does a lot of the work.
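
For instance, with rdflib the rdfs:subClassOf closure of each class can be
computed once up front and kept around (again, the class names here are
invented):

    from rdflib import Graph, Namespace, RDFS

    BF = Namespace("http://example.org/bibframe/")  # hypothetical vocab namespace

    schema = Graph()
    schema.add((BF.Monograph, RDFS.subClassOf, BF.Work))
    schema.add((BF.Work, RDFS.subClassOf, BF.Resource))

    # transitive_objects follows rdfs:subClassOf to a fixpoint, so the
    # superclass set of every class can be computed once and cached.
    supers = {
        cls: set(schema.transitive_objects(cls, RDFS.subClassOf))
        for cls in schema.subjects(RDFS.subClassOf, None)
    }
    # supers[BF.Monograph] == {BF.Monograph, BF.Work, BF.Resource}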

3. Systems restricted to suitable OWL profiles (OWL 2 RL, QL, and EL were
designed for exactly this) can perform some inference tasks in real time.
Production-quality databases using appropriate OWL profiles operate at
scale in mission-critical applications.
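
As an illustration only (the thread doesn't name any particular system):
the owlrl library materialises the OWL 2 RL closure of an rdflib graph, RL
being one of the profiles designed so that this terminates in polynomial
time. Namespaces are invented as before:

    from rdflib import Graph, Namespace, RDF, RDFS
    import owlrl

    EX = Namespace("http://example.org/")           # hypothetical data namespace
    BF = Namespace("http://example.org/bibframe/")  # hypothetical vocab namespace

    g = Graph()
    g.add((BF.Monograph, RDFS.subClassOf, BF.Work))
    g.add((EX.w1, RDF.type, BF.Monograph))

    # Materialise the OWL 2 RL closure of the graph.
    owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)

    # The subclass inference is now an explicit triple.
    assert (EX.w1, RDF.type, BF.Work) in g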

4. Adding redundant assertions that could have been inferred cannot make a
dataset wrong. It does give more places to introduce wrong data, but if the
assertions are added automatically, that is less of a concern.

5. If inferable assertions are added to a dataset in order to spare end
systems from doing inference themselves, it is probably necessary to add
all of the inferences. This approach is used by triple stores that compute
inferences at load time, and it can make the dataset larger. Data
compression can give high compression ratios here, but the receiver still
has to deal with more assertions.
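
A rough sketch of the effect, using owlrl's RDFS closure over an rdflib
graph (the numbers are illustrative, not a benchmark; names invented as
before):

    from rdflib import Graph, Namespace, RDF, RDFS
    import owlrl

    EX = Namespace("http://example.org/")           # hypothetical data namespace
    BF = Namespace("http://example.org/bibframe/")  # hypothetical vocab namespace

    g = Graph()
    g.add((BF.Monograph, RDFS.subClassOf, BF.Work))
    g.add((BF.Work, RDFS.subClassOf, BF.Resource))
    for i in range(1000):
        g.add((EX[f"w{i}"], RDF.type, BF.Monograph))

    before = len(g)
    owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)
    after = len(g)

    # Every instance picks up an rdf:type triple for each superclass, so
    # the data roughly triples here, before counting the axiomatic RDFS
    # triples that owlrl also adds.  The closure compresses well, but a
    # receiver still has to parse and index all of it.
    print(before, after)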

Simon