The first three dimensions of the Web: Interactive-Web, Services-Web and Data-Web

My colleague Kingsley introduced the concept of a multi-dimensional Web (by analogy with the multi-dimensional universe). He described the first four dimensions as:

 

Dimension 1 = Interactive Web (Visual Web of HTML based Sites aka Web 1.0)

Dimension 2 = Services Web (Presence based Web of Services; a usage pattern commonly referred to as Web 2.0)

Dimension 3 = Data Web (Presence and Open Data Access based Web of Databases aka Semantic Web layer 1)

Dimension 4 = Ontology Web (Intelligent Agent palatable Web aka Semantic Web layer 2)

 

So, the Web as we know it today would have three dimensions:

  1. Interactive-Web
  2. Services-Web
  3. Data-Web

 

Personally, I would define them as follows (without talking about Web 1.0, Web 2.0, or any Web X.0):

 

The Interactive-Web dimension is the Web of humans: documents formatted for human understanding (HTML, DOC, PDF, etc.).

The Services-Web dimension is the Web of functionality: how humans and machines can interact with the functionality of a system.

The Data-Web dimension is the Web of data presence: the availability of open and meaningful data, and how machines can interact with a system's data.

 

The Interactive-Web

The Interactive-Web is the Web of humans: a Web where all documents (HTML, PDF, DOC, etc.) are formatted for humans, with visual cues (headers, footers, bold text, larger fonts, etc.) that help them scan a document and quickly find the right information.

But the problem with the Interactive-Web is that it is intended only for humans, so machines (software agents, for example) have real difficulty analyzing and interpreting these documents.

 

The Services-Web

The Services-Web also exists in the current landscape of the Web: a Web where protocols let people and machines (web services, software, etc.) interact with the functionality of a system.

With this Web, one can manipulate the information within a system (a web service) without using the primary user interface developed for that purpose. That way, power is given back to users, letting them manipulate (in most cases) their data using the user interface they prefer.

The Services-Web dimension already exists and is extensively used to publish information on the Web. Fewer web services use it to let people add, modify, and delete their own data in the system.
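A minimal sketch of this idea: a programmatic request that manipulates a system's data without going through that system's own user interface. The endpoint, item identifier, and parameter names below are invented for the example; nothing is actually sent over the network.

```python
# Services-Web sketch: any client (any user interface, not just the
# service's own) can build and send a request like this to modify data.
# The URL and parameters are hypothetical.
from urllib.parse import urlencode
from urllib.request import Request

def build_update_request(item_id: int, title: str) -> Request:
    # Encode the modification as a form-style POST body.
    body = urlencode({"id": item_id, "title": title}).encode()
    return Request("http://example.org/api/items", data=body, method="POST")

req = build_update_request(42, "Hello")
print(req.get_method(), req.full_url)  # POST http://example.org/api/items
```

The point is that the request is just a protocol exchange: any alternative interface a user prefers can issue it, not only the service's primary one.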

 

The Data-Web

The Data-Web dimension also exists in the current Web, but it is much more marginal than the first two dimensions. This dimension belongs to the idea of the Semantic Web: developing standards that let machines (software) communicate with each other in a meaningful way. The idea here is to publish structured data intended for machines (not humans) to help them communicate, with that communication ensured by the use of standards.
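To make the contrast concrete, here is a small sketch of the same fact expressed twice: once as HTML meant for humans, and once as RDF/XML (using the FOAF vocabulary) meant for machines. The people and URIs are invented for the example; the parsing uses only Python's standard library.

```python
# Data-Web sketch: the same statement for humans and for machines.
# Names and URIs below are invented for the example.
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
FOAF = "http://xmlns.com/foaf/0.1/"

# For humans: visual markup, no machine-readable semantics.
html_fragment = "<p><b>Alice</b> knows <b>Bob</b></p>"

# For machines: the same statement as explicit, typed structure.
rdf_xml = (
    f'<rdf:RDF xmlns:rdf="{RDF}" xmlns:foaf="{FOAF}">'
    '<foaf:Person rdf:about="http://example.org/alice">'
    '<foaf:name>Alice</foaf:name>'
    '<foaf:knows rdf:resource="http://example.org/bob"/>'
    '</foaf:Person>'
    '</rdf:RDF>'
)

# A software agent can now extract the fact without guessing at layout.
root = ET.fromstring(rdf_xml)
person = root.find(f"{{{FOAF}}}Person")
name = person.find(f"{{{FOAF}}}name").text
knows = person.find(f"{{{FOAF}}}knows").get(f"{{{RDF}}}resource")
print(name, "knows", knows)  # Alice knows http://example.org/bob
```

Extracting "Alice knows Bob" from the HTML fragment would require guessing what the bold text means; the RDF version states it unambiguously.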

 

A switch from Services-Web to the Data-Web

What I think will happen is that the Services-Web dimension will no longer be used to publish information from one system to another, as it is today. Instead, the Services-Web will only let users trigger a system's functionality to add, modify, and delete data in that system, while the Data-Web will publish data in a meaningful way from one system to another (with the communication of the data ensured by standards such as those of the Semantic Web).

So the way we use the Services-Web today is not the way we will use it tomorrow.

 

Final word

Yesterday I started writing a series of articles to explain the creation of ZitGist and how Talk Digger and Ping the Semantic Web will evolve in the coming months and years.

This article is the foundation of my explanation. This is the basic framework I'll use to explain how Talk Digger and Ping the Semantic Web work, how they interact with each other, and how they interact with the Web.

In the next few articles, I’ll explain how these two systems fit in this framework.

 


Talk Digger and Ping the Semantic Web became ZitGist

I said that I would write about what has happened with Talk Digger and Ping the Semantic Web over the last month, so I am now taking some time to start telling the story.

In September, Kingsley Idehen, CEO of OpenLink Software Inc., contacted me to talk about my projects, the database management system developed by OpenLink (called Virtuoso), and the Semantic Web.

Our talks led us in a direction I had not anticipated: we started to talk about creating a company that would develop both the Talk Digger and Ping the Semantic Web projects. So far, these two projects had been prototypes I was developing to test my ideas, to help the adoption of the Semantic Web, and to learn.

Creating a company in partnership with OpenLink would give me the resources to develop these two projects professionally: the time, the computing infrastructure, and the human resources to develop, extend, refine, and enhance these two services. All of that for the benefit of my users: to enhance their experience with the systems.

After one month of discussion, we created a company called ZitGist (pronounced: Zeitgist) that will own and develop both Talk Digger and Ping the Semantic Web. The legal entity is now created, but much work has to be done in the coming weeks (releasing the website and logo, publishing the official press release, etc.).

Both OpenLink Software Inc. and I are members of ZitGist. However, I didn't close the deal only to have financial resources to develop my projects. In fact, a big part of OpenLink's investment in the project is their Virtuoso DBMS. This database system will replace the one currently used in both projects (MySQL) and will increase their capabilities in many ways. I will write about the integration of Virtuoso into both systems later, but I can guarantee that the decision is in line with the mission I set for myself more than a month ago:

This vision is driven by a personal goal: to make the Semantic Web a reality. This is ambitious and probably arrogant, I know. "Who dares wins," as the SAS motto says. That is what I will do: dare.

Do I have a chance of reaching my goal? I hope so, but I have no idea. The only thing I know is that it will become a reality only if everybody does a little something in that direction. Here are the little things I will try to do:

  • Make Talk Digger results computer-processable
  • Develop Semantic Web applications that will interact with the Talk Digger system
  • Write about the subject in a way that any Internet user will understand
  • Educate people about this future reality through writing and oral presentations

This vision has driven my work over the last year, and this is where I am. The implementation of Virtuoso in both Talk Digger and Ping the Semantic Web, the creation of ZitGist, and my partnership with OpenLink were all undertaken in accordance with that vision.

In the coming days, I'll write more about ZitGist, the new vision for Talk Digger and Ping the Semantic Web, the deployment of Virtuoso, and the new possibilities (features) it will enable.


Ping the Semantic Web now supports N3/Turtle serialization

 

I am pleased to announce that I have finally put online a new version (1.2) of the crawler that crawls RDF documents for Ping the Semantic Web. The web service is now able to detect and index RDF files serialized in N3/Turtle. This means that many more RDF documents will be visible via Ping the Semantic Web, since many RDF documents are serialized using N3 (and I think more and more RDF documents will be serialized that way in the future).

Also, I entirely rewrote the crawler. It is now (supposed to be) much more tolerant of the different ways people write their RDF documents. It is also much faster.

I also changed the export file format for version 1.2, replacing the "topic" attribute with a "serialization" attribute. Why did I remove the topic attribute? Because it will be replaced by something else in the next month or so. The new "serialization" attribute can have one of two values: "xml" or "n3". It makes explicit the serialization format the crawler should expect when crawling the document.
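To illustrate what such a "serialization" attribute captures, here is a hedged sketch of how a crawler might guess whether a document is RDF/XML or N3/Turtle. The real Ping the Semantic Web detection logic is not published, so these heuristics are illustrative only:

```python
# Illustrative heuristics for classifying an RDF document as "xml"
# (RDF/XML) or "n3" (N3/Turtle); not the actual PTSW detection code.
def guess_serialization(document: str) -> str:
    head = document.lstrip()[:512]
    # RDF/XML starts with an XML declaration or an rdf:RDF root element.
    if head.startswith("<?xml") or "<rdf:RDF" in head:
        return "xml"
    # N3/Turtle documents typically open with @prefix or @base directives.
    if head.startswith("@prefix") or head.startswith("@base"):
        return "n3"
    # Turtle has no mandatory preamble, so fall back to "n3".
    return "n3"

print(guess_serialization('<?xml version="1.0"?><rdf:RDF/>'))           # xml
print(guess_serialization('@prefix foaf: <http://xmlns.com/foaf/0.1/> .'))  # n3
```

A real crawler would also need to handle documents that defeat simple prefix checks, which is presumably part of why tolerance to "the different ways people write their RDF documents" matters.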

In the meantime, if you find any documents that are not processed well by Ping the Semantic Web, please send me an email with the URL of the document so that I can debug what is wrong.


Ping the Semantic Web.com service now supports RDFS and OWL documents

 

I didn't have time to work on the PingtheSemanticWeb.com web service in the last few weeks, so I took a couple of days to fix some issues with the detection of RDF/XML documents (some use cases were not handled well by the detection module).

I also made PTSW recognize and archive RDFS and OWL documents. That way, people will be able to track the evolution of ontologies.

What's next? By the end of next week, PingtheSemanticWeb should detect not only RDF/XML documents but also N3 and N3/Turtle documents.

I'll also have to update the export module to let people retrieve these new RDFS, OWL, and N3 documents.

So, if you have any ideas on how to upgrade or enhance this web service, or if you find any bugs (for example, if the system doesn't recognize your RDF documents), please contact me by email.


I had a dream for the Semantic Web

 

A year ago I had big ambitions for Talk Digger. At that time, I was dreaming that Talk Digger could help the Semantic Web develop, exist, and be used by thousands of people without them even knowing it.

      

You have to know that at that time I didn't have a clear enough vision, the knowledge, or the resources to make such a dream a reality. But slowly I directed my efforts toward that vision, that goal, hoping it could lead to something interesting. Opportunity after opportunity, my vision became clearer, my knowledge of the subject evolved, and my resources increased. I developed a new version of Talk Digger that broadcasts its content in RDF using specialized ontologies such as SIOC and FOAF. I developed a service called Ping the Semantic Web that aggregates and exports lists of Semantic Web documents (RDF) to anyone who requests them.

Today I came one step closer to reaching my goal: I made Talk Digger ping Ping the Semantic Web each time a new user is created or a new conversation is started or updated on Talk Digger.
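In practice, such a ping amounts to a simple HTTP request telling Ping the Semantic Web that an RDF document was created or updated. The endpoint path, parameter name, and document URL below are assumptions for illustration, and no request is actually sent here:

```python
# Sketch of a ping from one service to another: a GET request whose
# query string carries the URL of the new or updated RDF document.
# The endpoint path, parameter name, and document URL are hypothetical.
from urllib.parse import urlencode

def build_ping_url(document_url: str) -> str:
    query = urlencode({"url": document_url})
    return f"http://pingthesemanticweb.com/rest/?{query}"

ping = build_ping_url("http://www.talkdigger.com/conversations/example.rdf")
print(ping)
```

Because the ping is just a URL, any system that produces RDF can notify the aggregator the moment its data changes, which is what makes the Talk Digger integration automatic.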

It probably doesn't mean much to most people; however, it means a lot to me.

I created a service that generates content from many different sources (traditional search engine results; users' interactions with the system, such as creating comments, following conversations, and creating links with other users; etc.). I created a service that aggregates Semantic Web documents from around the Web and exports them to any developer who wishes to do something with them. Finally, I made these two services interact.

What does it mean? It means that I have created a prototype infrastructure for what I consider the first step toward the Semantic Web: creating Semantic Web formatted documents and making them freely and easily accessible to other web services and software agents, all using live, real data from "traditional Web" resources and normal Web users.

So that is it. Talk Digger documents now live in the same world as other Semantic Web documents from around the Web. A developer can access all these documents from a single source. Talk Digger's documents, like all the others, are multiplexed by Ping the Semantic Web and can live in a whole set of different incarnations (different user interfaces and different data manipulations).

This is the dream I had.
