The future of Talk Digger

I have had a lot of time in the last few weeks to think about the future of Talk Digger. To what future does the project belong? The semantic web.

Two months ago, I had the idea for Talk Digger. A month and a half ago, I built it. Now I am thinking about the project's future. The service appears to be reliable, and people use it and talk about it. I have learned how the system can be used by talking with its users.

Now I have a better view of the system, a vision of how it could be used, and an idea of its potential.

The future of the Internet is the semantic web: a web whose documents are computer-processable. It is in this web that Talk Digger will evolve and reach its full potential.

Why? The current state of the semantic web is really exciting. Many technologies are already available and reliable enough to make the vision a reality; now people have to use them to bring it to life. To make the semantic web a reality, we need access to a wide range of documents in semantic web formats, and the only way to reach that state is for people and companies to start making their information available in those formats. It is in this direction that Talk Digger will evolve: making the information broadcast by the service available, in RDF, to semantic web agents. I will also create new sub-services that will (1) gather, (2) analyze, (3) process, and (4) display such information.
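To make the idea concrete, here is a minimal sketch, in TypeScript, of what such a pipeline of sub-services could look like. Everything in it is an assumption made for illustration: the RDF endpoint, the data shape, and the function names are hypothetical, not Talk Digger's actual design.

```typescript
// A sketch of the four sub-services as one small pipeline.
// All names, the RDF endpoint, and the data shape are assumptions.
interface Conversation {
  aboutUrl: string;       // the URL the conversation is about
  linkingSites: string[]; // sites found linking to it
}

// (1) gather: fetch the RDF the service would broadcast (hypothetical endpoint).
async function gather(url: string): Promise<string> {
  const res = await fetch(`http://talkdigger.com/rdf?url=${encodeURIComponent(url)}`);
  return res.text();
}

// (2) analyze: extract the resources described in the document.
// A real agent would use an RDF parser; this regular expression is a placeholder.
function analyze(rdf: string): Conversation {
  const sites = Array.from(rdf.matchAll(/rdf:about="([^"]+)"/g), (m) => m[1]);
  return { aboutUrl: sites[0] ?? "", linkingSites: sites.slice(1) };
}

// (3) process: for example, remove duplicate linking sites.
function processResults(c: Conversation): Conversation {
  return { ...c, linkingSites: Array.from(new Set(c.linkingSites)) };
}

// (4) display: produce a human-readable summary.
function display(c: Conversation): void {
  console.log(`${c.linkingSites.length} sites talk about ${c.aboutUrl}`);
}

// Usage:
// gather("http://example.com").then((rdf) => display(processResults(analyze(rdf))));
```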

This vision is driven by a personal goal: to help make the semantic web a reality. It is ambitious and probably arrogant, I know. “Who dares wins,” says the SAS motto. That is what I will do: dare.

Do I have a chance to reach my goal? I hope so, but I have no idea. The only thing I know is that it will become a reality only if everybody does a little something in that direction. Here are the little things I will try to do:

  • Make Talk Digger results computer-processable
  • Develop semantic web applications that will interact with the Talk Digger system
  • Write about the subject in such a way that any Internet user will understand
  • Educate people about this future reality through writing and oral presentations

This is the future of Talk Digger, my blog, and my professional career. As you know, I was in Vancouver two weeks ago. The aim of this trip was to meet the people behind Qumana, Lektora, and AdGenta. Last month I got a contract from them to develop a new feature for Lektora. Now I have a contract to develop a new version of Lektora over the coming months. Guess what? I will redesign it in such a way that I can easily upgrade it to enter the semantic web era. How? Secret. But that is why I say that my new goal will also influence my professional career.

So what is next? The integration of new search services, such as Google Blog Search, Yahoo!, AltaVista, and AllTheWeb, into Talk Digger.

And after that? I will come back to this later.


Ajax and the Semantic Web

Ajax and the Semantic Web are currently two buzz terms: one describes a new way to create interactive web interfaces; the other describes documents in such a way that computers can “understand” their semantic meaning.

Tim Berners-Lee wrote something interesting: RDF-AJAX, seven letters that open a window on a new world. We have two layers: one that shows things (Ajax), and one that describes things by their semantic meaning (Semantic Web documents).

You have to see the interaction of these two layers as a human-machine interaction. The Ajax layer reads a Semantic Web document (RDF, for example) and makes it human-readable; the same document remains computer-readable for other software agents.

Big deal, you are thinking? Think about it. Right now, database information is serialized into HTML files to help humans read and understand it. Fine; but what happens if I want to create a software agent to help me automate some processes? There is the big deal. What I want is to serialize the database information in Semantic Web formats, like RDF, instead of HTML. That way, the information held in these databases becomes computer-readable and computer-understandable. The problem, then, is that I am no longer able to read and understand these big chunks of RDF documents myself.
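As an illustration, here is a minimal sketch, in TypeScript, of serializing a database record as RDF/XML rather than HTML. The record shape and the choice of Dublin Core properties are my own assumptions, not a prescribed schema:

```typescript
// A hypothetical database record; the fields are assumptions for illustration.
interface BlogEntry {
  uri: string;
  title: string;
  author: string;
  date: string; // ISO 8601, e.g. "2005-10-12"
}

// Serialize the record as RDF/XML using Dublin Core terms, so that
// software agents can process it, not just humans. Real code would
// also escape XML entities in the field values.
function toRdfXml(entry: BlogEntry): string {
  return `<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="${entry.uri}">
    <dc:title>${entry.title}</dc:title>
    <dc:creator>${entry.author}</dc:creator>
    <dc:date>${entry.date}</dc:date>
  </rdf:Description>
</rdf:RDF>`;
}
```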

That is the utility of the Ajax layer: making documents in RDF, or any other Semantic Web format, human-readable. We could use an Ajax library that understands RDF documents and displays their content in a browser. That way, a single web page could be processed by both computers and humans. The Web would no longer be composed of HTML documents, but of documents in Semantic Web formats.
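Here is a minimal sketch of such an Ajax layer, under the same assumptions as the sketch above: it fetches an RDF/XML document with XMLHttpRequest (the “Ajax” part), extracts the Dublin Core properties with standard DOM calls, and renders them as HTML for a human reader. The URL in the usage note is a placeholder.

```typescript
const RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#";
const DC_NS = "http://purl.org/dc/elements/1.1/";

// Fetch an RDF/XML document and render its Dublin Core properties as HTML.
// The document is assumed to look like the toRdfXml() sketch above.
function displayRdf(url: string, target: HTMLElement): void {
  const xhr = new XMLHttpRequest(); // the "Ajax" part
  xhr.open("GET", url);
  xhr.onload = () => {
    const doc = xhr.responseXML; // parsed XML document, if the server sent XML
    if (!doc) return;
    const descriptions = doc.getElementsByTagNameNS(RDF_NS, "Description");
    for (let i = 0; i < descriptions.length; i++) {
      const desc = descriptions[i];
      // Read one Dublin Core property as plain text.
      const text = (name: string) =>
        desc.getElementsByTagNameNS(DC_NS, name)[0]?.textContent ?? "";
      const div = document.createElement("div");
      div.innerHTML = `<h2>${text("title")}</h2><p>By ${text("creator")} on ${text("date")}</p>`;
      target.appendChild(div);
    }
  };
  xhr.send();
}

// Usage, with a placeholder URL:
// displayRdf("/results.rdf", document.body);
```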

That is another view of the future Web.


Vancouver, Northern Voice 2006, and blogging

For those who do not know, I have been in Vancouver for four days now, and I have to say that I love this city. I have met a lot of really interesting people who work in the blogging and social software industry. This is probably The Canadian city for all the social software hype.

The second Northern Voice conference (the first and only Canadian blogging conference) will be held, for the second time, in Vancouver next February. If you want to meet great people with a lot of ideas about social software, knowledge management, blogging, and the Web 2.0, take two days and come meet them here. I hope, and will try, to be there.

I leave for Banff tonight, so I am not sure I will be able to post anything else in the next week, but I will certainly have a lot of stuff to write about the Web 2.0 and social software when I come back home (the best time to think about such things is probably on a plane, don’t you think?).


System openness: a characteristic of the Web 2.0

Recently, many people have said that companies will need to open and share their APIs to enter the Web 2.0. Many have also said that the future of the Internet, the Web 2.0, is to share APIs [see paragraph 3]: to give the capabilities provided by an API to anybody who needs them to develop their own system. Yahoo! already does it with technologies like its content analysis web service, used by TagCloud; Google does too, with its Maps API; and Microsoft will soon start as well. The question is: are APIs the only thing to share?

No. People seem to forget that we need information to use with these capabilities. What I am saying is this: companies will need to start sharing the information they gather and analyze in the same way they share their APIs. It is a premise of the Web 2.0: information will be decentralized in such a way that everybody has information to share, and that information will be formatted in such a way that computers can process, analyze, and understand it. To reach such a state, developers, companies, and hobbyists will need to start sharing their information that way. The relations between all this information, and its structure, will form what we could call the Web 2.0. It is not just a question of the functionality provided by APIs, but one of knowledge: of information. We need information to use these APIs, and right now that information is partial and hard to extract.

The most beautiful example we have of this type of information is the web feed (RSS or Atom). If you look at a web feed, you will not understand anything (at first glance, at least). It is an example of a document formatted for computers rather than for humans (as HTML documents are). These formats (RSS and Atom) are really primitive; however, many, many ways to use them have been found. Hundreds of applications use them to gather or publish information in different ways. The information is presented in such a way that any software, on any platform, can understand and display what these documents contain, regardless of what the information is about.
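For instance, here is a minimal TypeScript sketch of such a platform-independent consumer: it fetches an RSS 2.0 feed (the URL is a placeholder) and extracts the item titles with the browser's DOMParser, without knowing anything about what the feed describes.

```typescript
// Fetch any RSS 2.0 feed and list its item titles.
// The consumer needs no knowledge of what the feed is about:
// the format alone makes the information processable.
async function listFeedTitles(feedUrl: string): Promise<string[]> {
  const res = await fetch(feedUrl);
  const xml = new DOMParser().parseFromString(await res.text(), "application/xml");
  const items = Array.from(xml.getElementsByTagName("item"));
  return items.map(
    (item) => item.getElementsByTagName("title")[0]?.textContent ?? ""
  );
}

// Usage, with a placeholder feed URL:
// listFeedTitles("http://example.com/feed.rss").then((titles) => console.log(titles));
```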

If you go to the Web 2.0 Conference 2005 web page, you can read at the top of the page:

“Web 1.0 was making the Internet for people, Web 2.0 is making the Internet better for computers.”

— Jeff Bezos

The whole Web 2.0 idea in one quote. It is really nice to have all these online APIs available; now we need the information.

Tim Berners-Lee has already said:

“Envisioning life in the Semantic Web is a similar proposition. Some people have said, “Why do I need the Semantic Web? I have Google!” Google is great for helping people find things, yes! But finding things more easily is not the same thing as using the Semantic Web. It’s about creating things from data you’ve compiled yourself, or combining it with volumes (think databases, not so much individual documents) of data from other sources to make new discoveries. It’s about the ability to use and reuse vast volumes of data.”

Now we need this data, this information. But we also need it in a format that software agents can understand and efficiently process.


Steve Jobs

“Your time is limited, so don’t waste it living someone else’s life. Don’t be trapped by dogma — which is living with the results of other people’s thinking. Don’t let the noise of others’ opinions drown out your own inner voice. And most important, have the courage to follow your heart and intuition. They somehow already know what you truly want to become. Everything else is secondary.”

— Steve Jobs