It is tempting to view RDF and RDBMS as alternatives just because both
are ways to model data. But they really are not, because only RDF has
some properties that turn out to be extremely helpful for de-siloing
and integrating data on a global scale.
Let me outline a few of the features from one of my GitHub posts:
- explicit global identifiers (URIs) allow linking databases at an
unprecedented level of granularity. In RDBMS terms, think of a
universal way to make foreign keys to remote data sources -- that's
exactly what URIs give you.
- databases are schema-less and effortless to merge. Let's say two
institutions each manage a list of world countries, but with a
different (and large) set of indicators. Now if they are merging, or a
customer has data from both, then even if a lot of it is duplicated,
like country names and codes, the differences in table schemas can
make merging these data sources a non-trivial effort, likely requiring
programming. Merging RDF is instead simple concatenation.
- canonical data model. You can map incompatible formats to an RDF
representation and use it as a bridge in a pivot (indirect) data
conversion.
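The merge point above can be sketched in plain Python, using 3-tuples
of global URIs as stand-in triples (all URIs below are made up for
illustration, not real vocabularies):

```python
# Two institutions publish country data as RDF-style triples:
# (subject, predicate, object), each identified by a global URI.
inst_a = {
    ("http://example.org/country/DE", "http://example.org/vocab/name", "Germany"),
    ("http://example.org/country/DE", "http://example.org/vocab/gdp", "4.0e12"),
}
inst_b = {
    ("http://example.org/country/DE", "http://example.org/vocab/name", "Germany"),
    ("http://example.org/country/DE", "http://example.org/vocab/population", "83000000"),
}

# Merging is just set union: shared facts deduplicate because the
# identifiers are global, and no schema reconciliation is needed.
merged = inst_a | inst_b
print(len(merged))  # 3: the duplicate name triple collapses
```

No table layouts had to be reconciled; the data carries its own
identifiers.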
There are more, but you can find the whole post here:
When thinking about a data model or format, it also matters whether
it's an international standard, and what the size and quality of its
software infrastructure are. That is why XML and RDF blow YAML out of
the water.
Getting used to RDF takes some experience, but it is rewarding and, I
would say, feels much more natural afterwards. If you say you get
JSON-LD, you should be able to get RDF, because JSON-LD is just one of
its serializations.
On Sat, Apr 11, 2015 at 9:01 AM, Brian Tingle <[log in to unmask]> wrote:
> I've been following the interesting discussion on this list, and I'd like to
> share a thought I had about this topic, that I hope will be productive to
> the conversation.
> We all heard the rallying cry of "MARC must die" and now we hear "BIBFRAME
> is the new MARC (but now it is not just for libraries)". I support the
> overall "linked data movement", but I'm just not really into all this RDF
> and SPARQL stuff -- but I do think the data modeling going on here is going
> to pay off by enabling better interoperability and data sharing -- I just
> don't see anything fundamentally different than other data model systems or
> query technologies. And I just can't hold a triple graph in my head, all
> the arrows, and then httpRange-14 blows out my suspension of disbelief. I
> don't know if it is like one of those pictures where you cross your eyes or
> whatever and you see it in 3D? I can't see those 3D pictures, and I can't
> hold a graph of triples in my head.
> I'm not saying there is not a time and a place to store stuff in a triple
> store with a SPARQL endpoint (or use a Linked Data Platform), I just don't
> see why it needs to be the one true way™.
> Some people like beer, some people like wine. Some people like RDF, some
> people like RDBMS. Just because we don't have the same tastes does not mean
> we can't have a data party together. Let's embrace technodiversity.
> I do really like JSON-LD. http://json-ld.org
> I can understand that. It can package a graph of triples for me into
> something that looks like a record. It supports CURIEs/xmlns in a way that
> does not have so much of a smell to me.
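For instance, a JSON-LD document packages a few triples as what looks
like an ordinary record (the `@id` URI and terms below are
illustrative):

```json
{
  "@context": {
    "name": "http://schema.org/name",
    "author": {"@id": "http://schema.org/author", "@type": "@id"}
  },
  "@id": "http://example.org/book/1",
  "name": "Linked Data Basics",
  "author": "http://example.org/person/42"
}
```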
> JSON is nice for programmers, at least web hacks like me, but it is not
> necessarily the best format for human editing. For example, no comments
> are allowed.
> YAML is a superset of JSON with "human readability" as a design goal.
> Well, you can do YAML that converts to / is compatible with JSON-LD's
> "@context" and "@id".
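Such a YAML document, following JSON-LD conventions, might look like
this (the prefix and fields are illustrative; only the Dublin Core
namespace URI is real):

```yaml
# YAML that round-trips cleanly to JSON-LD; comments are a YAML bonus.
"@context":
  dc: "http://purl.org/dc/terms/"
"@id": "http://example.org/record/1"
"dc:title": "A Sample Record"
"dc:creator": "Jane Doe"
```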
> So, the thought I had was what if you had a LD aware YAML editor /
> Cataloging IDE -- maybe implemented with a web based IDE framework or an
> eclipse or emacs plugin -- that would parse your YAML-LD "@context" and
> automatically hook up autocomplete drop downs against the
> vocabularies/ontologies/whatever they are. CURIEs would be filled in
> according to how you had your @context set up. Domains would be used to
> limit your autocomplete to appropriate values. Cataloging rules could be
> displayed in the IDE. The editor would check validity and have syntax
> highlighting.
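The CURIE handling such an editor would need is simple to sketch (the
context and term below are illustrative; a real editor would also do
the reverse lookup for autocomplete):

```python
# Expand a CURIE like "dc:title" against a JSON-LD-style @context.
def expand_curie(curie: str, context: dict) -> str:
    prefix, sep, local = curie.partition(":")
    if sep and prefix in context:
        return context[prefix] + local
    return curie  # no known prefix; leave the term as-is

context = {"dc": "http://purl.org/dc/terms/"}
print(expand_curie("dc:title", context))
# -> http://purl.org/dc/terms/title
```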
> YAML also supports multiple records per file, so for batch processing use
> cases, it might be sort of similar to MARC, where I will often get single
> files with thousands of records. If you want to put the batch of records in
> a triple store or a solr index -- data party people don't care.
> Anyway, that was my thought -- consider storing it as YAML records that
> follow JSON-LD rules.
> Have a nice day and link away.
> -- Brian