{"id":765,"date":"2007-02-01T08:46:55","date_gmt":"2007-02-01T12:46:55","guid":{"rendered":""},"modified":"2007-02-01T08:46:55","modified_gmt":"2007-02-01T12:46:55","slug":"rdf_dump_vs_dereferencable_uris","status":"publish","type":"post","link":"https:\/\/fgiasson.com\/blog\/index.php\/2007\/02\/01\/rdf_dump_vs_dereferencable_uris\/","title":{"rendered":"RDF dump vs. dereferencable URIs"},"content":{"rendered":"<p>In a recent mail thread, someone asked about the best way to get RDF data from a source with more than a couple of thousand documents: an RDF dump or a list of dereferencable URIs?<\/p>\n<p>Neither is better than the other. Personally, I prefer to use both.<\/p>\n<p>Take the example of Geonames.org: getting all 6.4 million RDF documents from dereferencable URIs would take weeks. On the other hand, keeping your triple store current with only RDF dumps would force you to download and re-index the whole dataset every month or so, a task that takes days.<\/p>\n<p>So what is the best way then? Here is what I propose (and currently do):<\/p>\n<p>The first time I indexed <a href=\"http:\/\/geonames.org\">Geonames<\/a> into a triple store, I requested an RDF dump from Marc. Then I asked him: would it be possible for you to ping <a href=\"http:\/\/pingthesemanticweb.com\">Pingthesemanticweb.com<\/a> each time a new document appears on Geonames, or each time an existing document is updated? In less than a couple of hours he answered my mail, and Geonames was pinging PTSW.<\/p>\n<p>What does this mean? It means that I initially populated my triple store with Geonames data from an RDF dump, which saved me one to two weeks of work. Now I update the triple store via Pingthesemanticweb.com, which saves me two or three days each month.<\/p>\n<p>So what I suggest is to use both methods. 
The important point here is that Pingthesemanticweb.com acts as an agent that sends you new and updated files for a specific service (Geonames in the above example). This simple infrastructure could save many semantic web developers precious time.<\/p>\n<p><font face=\"Arial, Helvetica, sans-serif\" size=\"-2\">Technorati:   <a href=\"http:\/\/technorati.com\/tag\/Uri\" rel=\"tag\" target=\"_blank\">Uri<\/a> | <a href=\"http:\/\/technorati.com\/tag\/rdf\" rel=\"tag\" target=\"_blank\">rdf<\/a> | <a href=\"http:\/\/technorati.com\/tag\/dump\" rel=\"tag\" target=\"_blank\">dump<\/a> | <a href=\"http:\/\/technorati.com\/tag\/geonames\" rel=\"tag\" target=\"_blank\">geonames<\/a> | <a href=\"http:\/\/technorati.com\/tag\/pingthesemanticweb\" rel=\"tag\" target=\"_blank\">pingthesemanticweb<\/a> | <a href=\"http:\/\/technorati.com\/tag\/semantic\" rel=\"tag\" target=\"_blank\">semantic<\/a> | <a href=\"http:\/\/technorati.com\/tag\/web\" rel=\"tag\" target=\"_blank\">web<\/a> | <\/font><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In a recent mail thread, someone asked about the best way to get RDF data from a source with more than a couple of thousand documents: an RDF dump or a list of dereferencable URIs? Neither is better than the other. Personally, I prefer to use both. 
If we take [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[69,84,64],"tags":[],"class_list":["post-765","post","type-post","status-publish","format-standard","hentry","category-pingthesemanticweb","category-semantic-web","category-web"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/fgiasson.com\/blog\/index.php\/wp-json\/wp\/v2\/posts\/765","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/fgiasson.com\/blog\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/fgiasson.com\/blog\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/fgiasson.com\/blog\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/fgiasson.com\/blog\/index.php\/wp-json\/wp\/v2\/comments?post=765"}],"version-history":[{"count":0,"href":"https:\/\/fgiasson.com\/blog\/index.php\/wp-json\/wp\/v2\/posts\/765\/revisions"}],"wp:attachment":[{"href":"https:\/\/fgiasson.com\/blog\/index.php\/wp-json\/wp\/v2\/media?parent=765"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/fgiasson.com\/blog\/index.php\/wp-json\/wp\/v2\/categories?post=765"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/fgiasson.com\/blog\/index.php\/wp-json\/wp\/v2\/tags?post=765"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}