Ping the Semantic Web and its future SPARQL endpoint

Soon enough I’ll add a SPARQL endpoint to the Ping the Semantic Web service. What does that mean?

It means that anybody will be able to send SPARQL queries (SPARQL looks like the SQL query language, but is used to query RDF graphs) to retrieve information from the RDF documents known by the web service. As soon as someone pings with an RDF document’s URL, other people will be able to search it using the SPARQL endpoint.


How will it work?

Users will have access to a web interface where they will be able to write and send SPARQL queries to the triple store (this is the name given to the type of database system that archives RDF graphs).

For example, they will be able to send queries like:


PREFIX sioc: <http://rdfs.org/sioc/ns#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

SELECT ?s
WHERE {
  GRAPH ?graph {
    ?s rdf:type sioc:Post .
  }
}


That query to the triple store will return all the resources (things) that have been described (typed) as a sioc:Post (a blog post, a forum post, etc.)
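To make that matching logic concrete, here is a toy sketch in Python. It has nothing to do with how Virtuoso actually works, and every URI and piece of data in it is invented; it only illustrates what the GRAPH pattern does: scan every named graph (one graph per pinged RDF document) for resources typed as sioc:Post.

```python
# Toy illustration of the SPARQL query above: find every resource
# typed as sioc:Post across all named graphs in the store.
# All URIs and data below are made up for the example.

RDF_TYPE = "rdf:type"
SIOC_POST = "sioc:Post"

# A "triple store" here is just a dict mapping a graph URI (the RDF
# document that was pinged) to its set of (subject, predicate, object) triples.
store = {
    "http://example.org/blog-a.rdf": {
        ("http://example.org/blog-a#post1", RDF_TYPE, SIOC_POST),
        ("http://example.org/blog-a#post1", "dc:title", "Hello world"),
    },
    "http://example.org/forum-b.rdf": {
        ("http://example.org/forum-b#msg42", RDF_TYPE, SIOC_POST),
    },
}

def posts_in_store(store):
    """Return sorted (graph, subject) pairs for every resource typed sioc:Post."""
    results = []
    for graph, triples in store.items():
        for s, p, o in triples:
            if p == RDF_TYPE and o == SIOC_POST:
                results.append((graph, s))
    return sorted(results)

for graph, subject in posts_in_store(store):
    print(graph, subject)
```

Running it prints one line per post found, together with the graph (document) it came from, which is exactly the kind of answer the real endpoint would return.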


How to visualize the triple store?

Creating this SPARQL endpoint will be somewhat easy to do. In fact, the structure will remain the same, but we will add one new server: a SPARQL endpoint that gives access to an RDF triple store.

Here is how one could imagine how a triple store works:


Figure 1




Figure 2


If we take a look at the schemas, each RDF document is a graph in itself. An RDF graph is composed of relations between resources: <subject, predicate, object> triples. For example, a relation could be <peter, hair-color, brown>: Peter’s hair color is brown (that is, the resource “Peter” has the value “brown” for the property “hair-color”).

With the triple store, we have the possibility to merge two RDF graphs together. That way, we create a sort of meta-graph with all the relations from both graphs.
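Here is a minimal sketch of that merge in Python, with invented data: each graph is a set of (subject, predicate, object) triples, and merging ground triples is simply set union (a real RDF merge also has to deal with blank nodes, which this sketch ignores).

```python
# Toy illustration of merging two RDF graphs into one "meta-graph".
# Each graph is a set of (subject, predicate, object) triples.
# All the data here is invented.

graph_a = {
    ("Peter", "hair-color", "brown"),
    ("Peter", "knows", "Mary"),
}
graph_b = {
    ("Mary", "works-for", "Acme"),
}

# For ground triples, an RDF graph merge is just the union of the triple sets.
meta_graph = graph_a | graph_b

# The merged graph now answers a question neither document could alone:
# where do the people Peter knows work?
employers = {
    o2
    for (s1, p1, o1) in meta_graph if p1 == "knows"
    for (s2, p2, o2) in meta_graph if p2 == "works-for" and s2 == o1
}
print(employers)
```

The interesting part is the last query: the "knows" relation comes from one document and the "works-for" relation from another, yet the merged graph lets us follow the path across both.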

This is where things are getting interesting.

Ping the Semantic Web’s graph will be created by merging the graphs of each RDF document it knows (via pinging).

That way, users will have the possibility to search this sort of meta-graph of relationships between resources by querying it with SPARQL.


You could call this the semantic web in a nutshell.


Virtuoso to create the RDF triple store

I’ll use a database management system called Virtuoso to create this RDF triple store.


A first prototype version

Consider the first version of the triple store as a prototype. In fact, the RDF triple store feature of Virtuoso is relatively new. It is still in development, and some things have yet to be created (to enhance the functionality) or upgraded. It is perfectly fine for a couple of hundred million triples (relations), but when we reach a billion triples, it is possible that some queries to the system will become unworkable. At that point, I may be obliged to restrict users’ query possibilities to ensure that the system always works at its full potential.

In any case, the triple store and the SPARQL endpoint will “live” on another server, so the performance of the current pinging system will not be affected by the performance of the endpoint; they are two totally different entities in our system.


Why a triple store with a SPARQL endpoint?

At first: for research and education purposes. People will have the possibility to query a system that aggregates RDF documents “from the wild”. Eventually, such an initiative could lead to the development of more interesting technologies (user interfaces, anything) that could be used by a broader range of people.

With this system in hand, one could search the triple store to extract statistics on the RDF documents it knows, for research purposes.

Also, it is a way for OpenLink to debug, upgrade and enhance its service, which will ultimately benefit everyone (since an open source version of Virtuoso is available).



Let me know if you have any thoughts about this new development of the Ping the Semantic Web service.


How to participate in Web 3.0 using your blog: joining the Semantic Web to enhance your blog’s visibility


Do you like my catchy title (update: okay, I agree with Danny: “Web 3.0 love secrets of the French” is a catchier title)? A little bit ironic, considering all the brouhaha (1) (2) (3) (4) (5) (6) (and way too much more) generated by this New York Times article written by John Markoff. Web 3.0… semantic web… semantic web 3.0… call it what you like, I don’t really care: really. What is fantastic is that more and more people are getting interested in what many people have been working on for about 12 years: the Web of Data.

Setting aside all the recent hype (and misunderstanding) it has received, some people could ask themselves how they could easily participate in the idea of the Semantic Web: the Web of Data.

Is it possible for mere mortals? Yeah, even my mom could (at least if she had a blog).

If you have a blog, you can easily participate in the semantic web by installing a simple add-on to your blog system and by pinging a server called Ping the Semantic Web each time you publish a new blog post.

The idea here is to take the articles you wrote (and will write) and publish them on the web not only as web pages, but also as documents for the semantic web. You can see the Web like this:



At top, you have a source of data: the articles you wrote on your blog for example.

Then with that same source of information, you can participate to two different Webs:

  1. On the left, you have the “web of humans”: the Web that can easily be understood by humans when they look at the screen. This is your blog.
  2. On the right, you have the “web of machines”: the Web that can easily be read and processed by machines. This is another version of your blog, but for machines.

Well, it seems complex, so how the hell is my mom supposed to be able to participate in the semantic web?!

Easy. In a hypothetical world, my mom is using WordPress for her blog on cooking, Dotclear for her blog about design, b2Evolution for her family blog and Drupal for her new French mothers’ community website.

The only thing she has to do is to install one of the add-ons available for each of these blogging systems.



The instructions to install the add-on on WordPress are simple:

1. Copy the following files to the WordPress wp-content/plugins/ directory:

2. Enable “SIOC Plugin” in the WordPress admin interface (Admin -> Plugins -> action “Activate”)



    For Dotclear, the installation package can be found here, and the source code of the add-on can be found here.



    For b2Evolution: Copy the following files to the /xmlsrv/ directory of your b2Evolution installation folder:



    For the Drupal add-on, all the information can be found here.


As soon as she installs these add-ons, she starts participating in the semantic web.


Why should people take the time to install these add-ons? What is the advantage?

Increasing the visibility of your blog


By doing so, you are exposing your blog’s content to many other web crawlers (web crawlers of a new generation, propelled by the adoption of the semantic web).

From that point, you only have to ping a new pinging service called Ping the Semantic Web to make sure that your blog is visible to these new web services. The process is the same as pinging a service for your web feed (RSS or Atom), but you are pinging a specialized pinging service for the semantic web.

Doing so helps you increase your visibility on the Web.

How can you setup your blog system to automatically ping this pinging service?

Simple: the process is the same for each system described above. For example, if you are using WordPress you only have to:

  1. Log into your WordPress Dashboard
  2. Select Options
  3. Then select the Writing tab
  4. Near the bottom you should see a space labeled “Update Services”: add the Ping the Semantic Web ping URL on a new line in this space
  5. Finally press the Update Options button
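For the curious, here is roughly what the blog engine does behind the scenes when it pings: it sends a tiny XML-RPC call, traditionally the weblogUpdates.ping method used by blog ping services. Whether Ping the Semantic Web accepts exactly this interface is an assumption in this sketch; it only shows the general shape of a blog ping, with a placeholder blog name and URL.

```python
import xmlrpc.client

# Build (without sending) the XML-RPC payload a blog engine typically
# POSTs to a ping service when a new post is published.
# The blog name and URL below are placeholders, not real values.
payload = xmlrpc.client.dumps(
    ("My mom's cooking blog", "http://example.org/blog/"),
    methodname="weblogUpdates.ping",
)
print(payload)
```

The resulting XML simply names the method (weblogUpdates.ping) and passes the blog title and URL as parameters; the ping service then knows it should come and fetch the fresh content.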

So, you only have to make your system ping the service.



In two simple steps, (1) installing an add-on and (2) adding a service to ping, a blogger can get more visibility for his blog and start participating in the semantic web.



Talk Digger now serializes its SIOC and FOAF RDF documents using N3



A couple of weeks ago I made Ping the Semantic Web detect and index RDF documents serialized using N3. I spent part of yesterday serializing Talk Digger’s content using N3 as well.

So Talk Digger now exports most of the relations it knows in RDF using 10 ontologies: SIOC, FOAF, GEO, BIO, DC, CONTENT, DCTERMS, ADMIN and RSS, serialized in two languages: XML and N3.

Check at the bottom of each conversation page, or user page, and you will see SIOC and FOAF RDF documents serialized in both XML and N3.


I started to play with N3 serialization when I implemented it in Ping the Semantic Web. At first I asked myself: why another serialization method? Why confuse users and developers with yet another way to write things?

Then I found my answer: N3 is basically a simplified teaching language, developed by Sir Tim Berners-Lee, used to express (so, to serialize) RDF documents. Once you get the basics of the language, you can easily read and write RDF documents in an elegant way. Parsing N3 documents is much easier than parsing their counterpart (XML).
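As a taste of the syntax, here is a small N3/Turtle document (the post and user URIs are invented for the example; the sioc: and dc: prefixes point to the standard namespaces): a couple of prefix declarations, then plain subject–predicate–object statements, each statement ended by a period and a repeated subject abbreviated with a semicolon.

```n3
@prefix sioc: <http://rdfs.org/sioc/ns#> .
@prefix dc:   <http://purl.org/dc/elements/1.1/> .

<http://example.org/blog/post-1>
    a sioc:Post ;                 # "a" is shorthand for rdf:type
    dc:title "My first post" ;
    sioc:has_creator <http://example.org/user/peter> .
```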

This serialization language deserves to become better known, and its adoption would certainly encourage the usage of RDF, since developers could concentrate their efforts on the RDF documents instead of the way they are serialized (there are so many ways to serialize something in RDF using XML that I sometimes wonder whether the number is even bounded…).
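To make the “easier to parse” point concrete, here is a deliberately naive Python sketch that handles only the simplest N3 statements: one `subject predicate object .` triple per line, whitespace-separated terms, no @prefix handling, no literals containing spaces, no blank nodes. A real parser must do far more; the point is just that the triple structure is right there in the text.

```python
def parse_simple_n3(text):
    """Parse lines of the form '<s> <p> <o> .' into (s, p, o) tuples.
    Deliberately naive: whitespace-separated terms only, no @prefix
    handling, no literals with spaces, no blank nodes."""
    triples = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        parts = line.rstrip(".").split()
        if len(parts) == 3:
            triples.append(tuple(parts))
    return triples

# Invented example document, using the <subject, predicate, object> idea.
doc = """
<http://example.org/peter> <http://example.org/hair-color> "brown" .
<http://example.org/peter> <http://example.org/knows> <http://example.org/mary> .
"""
print(parse_simple_n3(doc))
```

Compare that to walking an RDF/XML DOM: even this toy line splitter recovers the triples, which is the elegance the N3 family is going for.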


Here are some links to get started with N3:

Primer: Getting into RDF & Semantic Web using N3
Notation 3: A readable language for data on the Web
Turtle – Terse RDF Triple Language


How Talk Digger fits in the second Web dimension: the Services-Web


    To know how Talk Digger fits into the Services-Web dimension, we have to know how users and systems can interact with Talk Digger’s functionalities. We have to remember that the Services-Web dimension is the Web of functionalities: how can humans and machines play with the functionalities of a system?

Talk Digger web services

At the time I am writing this article, no web services are available for Talk Digger. There is only an interface users can use to manage (add, modify and remove) their data in the system.

Talk Digger users don’t have freedom of choice when it comes time to manage the data they put in the system. They are bound to the existing user interface.

Right now, all the data created by a user is publicly available (if the user wants it) in many ways: RDF documents supported by ontologies like FOAF, SIOC, etc., via RSS feeds and OPML files. However, all these things belong to the next Web: the Data-Web.

So, what about the Services-Web? When will Talk Digger users have the freedom to choose the user interface with which they interact with the system?


In the near future, web services will be available to developers to let them create other web services or software that interact with the Talk Digger system. Such web services will let them:


  • Manage user profiles (FOAF) hosted on Talk Digger
  • Retrieve tracking lists with new in-bound links and new comments for each item
  • Add new tracks to users’ tracking lists
  • Monitor what a user’s friends are tracking and commenting on in the system
  • Etc.


Then users will have complete freedom to play with the data they create, with the tools they want.

In the next article, we will see how Talk Digger fits into the third dimension of the Web: the Data-Web.


Series of articles about ZitGist, Talk Digger, Ping the Semantic Web and the Semantic Web:

Article 1: Talk Digger and Ping the Semantic Web became ZitGist
Article 2: The first three dimensions of the Web: Interactive-Web, Service-Web and Data-Web
Article 3: How Talk Digger fits in the first Web dimension: the Interactive-Web


How Talk Digger fits in the first Web dimension: the Interactive-Web


To know how Talk Digger fits into the Interactive-Web dimension, we have to know how users interact with the system. We have to remember that the Interactive-Web dimension is the Web of humans: documents formatted for human understanding (HTML, DOC, PDF, etc.). So, how are people interacting with Talk Digger? How are people using Talk Digger? How are people interpreting its information? Etc.


Talk Digger finds links between websites and creates conversations according to these relations.

So users will use this list of links to discover web pages (articles, blog posts, forum threads, etc.) that link to (and so talk about) a specific web page.



Then it lets people track the evolution of these conversations

Users will use this functionality to track a conversation evolving around a specific web page: they track which new web pages create links to that specific web page.



People can search for conversations tracked by Talk Digger

Users can search inside Talk Digger as they would in a normal search engine. If they search for “Windows”, they will get results of web pages that talk about “Windows”.



It makes explicit the relationships between conversations

Users have the possibility to see the relationships between web pages tracked (indexed) by Talk Digger. They will use this feature to find other web pages related to the current one. If we take a look at the image below, you will find that the results on the left are blogs that talk about Web 2.0 services. If you check on the right, you will see a list of Web 2.0 services. This is how Talk Digger can help users find related web pages.



It aggregates people around Web conversations to create communities

The premise here is: people who are tracking the same conversation probably have personal interests in common. That said, Talk Digger users use this feature to find people with whom they could get in contact.



It lets people express their thoughts vis-à-vis a conversation

Users can express themselves and converse with other users in relation to a conversation.



It connects people

Users can make explicit their relationships with other Talk Digger users, or with other people having a virtual profile on the Web. Social groups are shown and help users get in contact with people of interest.

Talk Digger users can also create their online Web profile, which could be used in Talk Digger to interact with other users, or anywhere else on the Web (more information about that possibility in a future article of this series).



It lets users follow the activity of their social network

Another way to discover new stuff is by following what your Talk Digger friends are tracking and what they have to say about some conversations.



It makes explicit the relationships between people



This article explains how Talk Digger fits into the Interactive-Web dimension. It explains how users interact with the system and how they analyze the information that is presented to them. So, this is how Talk Digger fits into the Web of humans.

In the next article, we will see how Talk Digger fits into the second dimension of the Web: the Services-Web.

Series of articles about ZitGist, Talk Digger, Ping the Semantic Web and the Semantic Web:

Article 1: Talk Digger and Ping the Semantic Web became ZitGist
Article 2: The first three dimensions of the Web: Interactive-Web, Service-Web and Data-Web
