Developments with PingtheSemanticWeb.com

PingtheSemanticWeb.com is three days old and many people have already started to take a look at it. The purpose of this blog post is to outline the next steps, the open questions, and what has changed since the launch.

    

What changed?

Over the last three days I fixed some small bugs, upgraded the auto-discovery feature and corrected the grammar (thanks to Uldis Bojars).

 

The open question: the export feature

The file format used to export the list of pings is still an open question. Right now I am using a simple XML file (with two elements) for the export.
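To make the discussion concrete, here is a minimal sketch of what consuming such an export file could look like from a crawler's point of view. The export URL and the element and attribute names used below are assumptions made for illustration only, not the actual format of the service.

```python
# Hypothetical sketch only: fetch and parse a pings export file.
# The export URL and the <ping url="..." updated="..."> layout are
# assumptions for this example, not the service's real format.
import urllib.request
import xml.etree.ElementTree as ET

EXPORT_URL = "http://pingthesemanticweb.com/export/pings.xml"  # assumed URL

def fetch_recent_pings():
    """Return (url, updated) pairs for the most recent pings."""
    with urllib.request.urlopen(EXPORT_URL) as response:
        root = ET.parse(response).getroot()
    return [(ping.get("url"), ping.get("updated")) for ping in root.findall("ping")]

if __name__ == "__main__":
    for url, updated in fetch_recent_pings():
        print(updated, url)
```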

Some people wonder why I don’t use a custom RDF document, a Scutter Vocab document or an RSS document.

Christoph Görn proposed this implementation for the export feature.

Personally, the problem I have with exporting the list of pings in RDF instead of plain XML is the overhead (in number of characters) it adds without any significant benefit in return. Read the comments on this blog post to get a sense of the current positions on the question.

Please leave your comments about this question on this blog post. Personally, I will wait until Tim Finin (of Swoogle) contacts me with the method they wish to use to make Swoogle interact with PingtheSemanticWeb.com. However, any new opinions are welcome.

 

Future developments

 

SIOC detector

Uldis is currently working on a new version of his SIOC detector (a Firefox plug-in). This new version will detect SIOC, DOAP and FOAF files on web pages. If the detector finds an instance of one of these ontologies on a web page, it will instantly ping the PingtheSemanticWeb.com pinging service.

It will be a really great (and easy) way to find new documents. For example, if 100 people install that plug-in in their Firefox browser, then each time one of them comes across a SIOC, DOAP or FOAF document while surfing the web, the pinging server will be notified.

 

Thanks

I would like to thank Uldis, Alex Passant, Christoph and Harry Chen (did I forget somebody?) for their ideas, work and writing on the project.

 


Ping the Semantic Web.com: a pinging service for the Semantic Web

 

One of the problems I found with the semantic web is how difficult it can be to find new and fresh data. Recently I was confronted with a problem: how to notify a web service that Talk Digger had new and updated semantic web data ready to be crawled (SIOC and FOAF documents, for people familiar with semantic web technologies).

Then I asked myself why nobody, to my knowledge, had developed a sort of weblogs.com or pingerati.net pinging service for semantic web documents. This approach has already proven that it works, considering that weblogs.com archives and exports millions of pings every day.

 

What is PingtheSemanticWeb.com?

PingtheSemanticWeb.com is a web service archiving the location of recently created or updated FOAF, DOAP or SIOC RDF documents on the Web. If one of those documents is updated, its author can notify the service that the document has been updated by pinging it with the URL of the document.

PingtheSemanticWeb.com is used by crawlers and other types of software agents to find out when and where the latest updated FOAF, DOAP and SIOC documents can be found. Such an agent requests a list of recently updated documents as a starting point for crawling the semantic web.
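As a rough sketch, notifying the service from a script could look something like the snippet below. The REST endpoint and the `url` parameter name are assumptions made for illustration; the real interface may differ.

```python
# Hypothetical sketch only: ping the service about a created/updated document.
# The endpoint and the "url" parameter name are assumptions for illustration.
import urllib.parse
import urllib.request

PING_ENDPOINT = "http://pingthesemanticweb.com/rest/"  # assumed endpoint

def ping_the_semantic_web(document_url: str) -> int:
    """Tell the pinging service that document_url was created or updated."""
    query = urllib.parse.urlencode({"url": document_url})
    with urllib.request.urlopen(f"{PING_ENDPOINT}?{query}") as response:
        return response.status  # assume HTTP 200 means the ping was accepted

if __name__ == "__main__":
    print(ping_the_semantic_web("http://example.org/foaf.rdf"))
```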

More information about supported ontologies can be found here:

 

Using the Bookmarklet

I strongly suggest that everybody use pingthesemanticweb.com’s bookmarklet. You only have to install the bookmarklet in your browser and click on it from any web page. If a FOAF, SIOC or DOAP document is found, it will be immediately indexed by the pinging service.

It is the easiest way for anyone to help PingtheSemanticWeb.com find new documents to index.

 

How to install the Bookmarklet

Read the instructions on how to install the Bookmarklet (Browser Button) into your browser.

 

How does it work?

You can use the URL of an HTML or RDF document when pinging the PingtheSemanticWeb.com web service. If the service finds that the URL points to an HTML document, it will check whether it can find a link to a FOAF, DOAP or SIOC RDF document. If it finds one, it will follow the link and check the RDF document to see whether SIOC, DOAP and/or FOAF elements are defined in it. If the RDF document does contain SIOC, DOAP and/or FOAF elements, the service will archive the ping and make it available to crawlers through the export files. Otherwise it will discard it.
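Here is a simplified sketch of that auto-discovery flow. The namespace URIs are the usual FOAF, DOAP and SIOC ones, but the content-type handling, the naive string checks and the `archive` callback are assumptions made to keep the illustration short; the real service is certainly more robust.

```python
# Simplified, hypothetical sketch of the auto-discovery flow described above.
import re
import urllib.request
from urllib.parse import urljoin

KNOWN_NAMESPACES = {
    "http://xmlns.com/foaf/0.1/",     # FOAF
    "http://usefulinc.com/ns/doap#",  # DOAP
    "http://rdfs.org/sioc/ns#",       # SIOC
}

def handle_ping(url, archive):
    """Archive the ping if the URL leads to a FOAF/DOAP/SIOC document."""
    with urllib.request.urlopen(url) as response:
        content_type = response.headers.get_content_type()
        body = response.read().decode("utf-8", errors="replace")

    if content_type == "text/html":
        # Look for an RDF auto-discovery <link> in the HTML page.
        match = re.search(
            r'<link[^>]+type=["\']application/rdf\+xml["\'][^>]*href=["\']([^"\']+)',
            body, re.IGNORECASE)
        if not match:
            return False                               # no linked RDF document: discard
        return handle_ping(urljoin(url, match.group(1)), archive)

    # Otherwise treat the document as RDF and keep it only if it uses
    # one of the known vocabularies (naive string test for illustration).
    if any(ns in body for ns in KNOWN_NAMESPACES):
        archive(url)                                   # expose it through the export files
        return True
    return False
```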

 

 

Custom needs, suggestions and bug reports

This service is new, so if you have any suggestions to improve it, if you find any bugs while pinging URLs or importing ping lists, or if you have any custom needs for your semantic web crawler or software agents, please contact me by email [fred ( at ) fgiasson.com], so that I can help you out as quickly as possible.


Supervised Search Indexing with Yahoo! Search Builder

Yahoo! Search Builder: the idea is great: the power of Yahoo!’s search engine and its colossal database, with all the advantages (no spam) of supervised indexing. In fact, niche networks (groups of people) will probably use this new service to build search engines for their niche domains and will meticulously add new crawlable sources over time. That way, no spam websites will be indexed, the results will be much more accurate and useful, and users will spend less time searching.

Other search engines [Rollyo and Eurekster] already do that. The main difference is that they developed “social” features around the search results and Yahoo! didn’t. Some people think that is a shame, but personally I think Yahoo! just doesn’t care. Social features are cool, but only for some purposes, not for everything. For me, the big difference is Yahoo!’s database compared to Rollyo’s and Eurekster’s.


Visualizing Web conversations using Talk Digger

In this article, I will talk about the recent developments with the alpha version of Talk Digger and how it can be used to visualize the interactions between the conversations it tracks.

 

Recent developments

Yesterday I started crawling most of the URLs submitted to Talk Digger in the past six months and indexing all the results in its new database.

Right now Talk Digger is tracking about 2,500 URLs (so it has about 2,500 conversations), and it has indexed about 80,000 sources (other web pages linking to these 2,500 conversations).

These numbers are not big, but the preliminary results are quite impressive (in my humble opinion). In fact, each time new URLs were tracked, new conversations were created and new sources were indexed, and I discovered new ways to use the system, to discover new things, to visualize relations in the data, and so on: patterns were starting to emerge.

 

Visualizing interactions between Web conversations

In only 30 minutes of browsing conversations, I noticed seven interesting use cases (patterns) in the system. I will present each of them by describing what is happening.

I added two visualization tools in the right sidebar of each conversation page.

 

 

The first tool

The first tool helps users answer these two questions:

  • What are the other conversations that are talking about the current one?
  • What are the conversations the current one is talking about?

 

 

The current conversation is the one in light-blue, in the middle of the panel: “Talk Digger: find, follow and join discussions evolving on the Internet”.

From there, I know that the “Talk Digger: find, follow and join discussions evolving on the Internet” conversation is talking about (in relation with) the conversation “Frédérick Giasson – Computer scientist, software developer and consultant”.

This makes sense considering that I am the creator of Talk Digger and that the conversation “Frédérick Giasson – Computer scientist, software developer and consultant” is created from the URL of my personal web page.

I can also see that the conversations “3spots”, “Library clips”, “Digg Tools” and “decor8” are in relation with the current one.

That way, I can easily visualize the relationship between the conversations tracked by Talk Digger.
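Purely as an illustration of the idea, the relations shown by this tool can be thought of as a small directed graph. The sketch below is my own toy model, not Talk Digger’s actual data structure.

```python
# Toy sketch of the "talking about" relations, modeled as a directed graph.
# This is illustrative only, not Talk Digger's real implementation.
from collections import defaultdict

class ConversationGraph:
    def __init__(self):
        self.outgoing = defaultdict(set)  # conversation -> conversations it talks about
        self.incoming = defaultdict(set)  # conversation -> conversations talking about it

    def add_link(self, source, target):
        self.outgoing[source].add(target)
        self.incoming[target].add(source)

    def talks_about(self, conversation):
        return self.outgoing[conversation]

    def talked_about_by(self, conversation):
        return self.incoming[conversation]

graph = ConversationGraph()
graph.add_link("Talk Digger: find, follow and join discussions evolving on the Internet",
               "Frédérick Giasson - Computer scientist, software developer and consultant")
print(graph.talks_about("Talk Digger: find, follow and join discussions evolving on the Internet"))
```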

 

The second tool

The second tool helps users see which other conversations tracked by Talk Digger come from the same source (URL).

 

 

From this panel, I know that Talk Digger is tracking two other conversations closely related to the current one: “Talk Digger Tools: Bookmarklet” and “Talk Digger Tour: Use the bookmarklet”.

In reality, these two other conversations are two different pages from the same domain name: talkdigger.com.
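The grouping itself is straightforward: conversations whose source URLs share a domain name end up in the same list. The short sketch below is illustrative only; the URLs are made up for the example.

```python
# Illustrative sketch: group tracked conversation URLs by domain name,
# which is roughly what the second sidebar tool displays.
from collections import defaultdict
from urllib.parse import urlparse

def group_by_domain(conversation_urls):
    groups = defaultdict(list)
    for url in conversation_urls:
        groups[urlparse(url).netloc].append(url)
    return groups

tracked = [                      # made-up URLs for the example
    "http://talkdigger.com/tools/bookmarklet",
    "http://talkdigger.com/tour/bookmarklet",
    "http://fgiasson.com/blog/",
]
print(group_by_domain(tracked)["talkdigger.com"])
```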

Okay, now it is time to look at the use cases to understand how these two tools can be used.

 

Use case #1: A normal blog or personal webpage.

This is the case of a conversation evolving around a single blog (or personal web page) and its interactions with other conversations:

 

 

In this example, the current conversation is the one of my personal web page.

What is interesting here is that we can see how it relates to itself. We can see that, from my main page, I link to two other pages whose conversations are tracked by Talk Digger.

I also see that “jotsheet – blog o’ tom Sherman” has a relation with me. In fact, Tom Sherman is a long-time user of Talk Digger and has talked about it in many of his blog posts.

 

 

I can also see other pages from the same domain name that have conversations tracked by Talk Digger.

The difference between these results and the ones above is that these pages do not necessarily link to each other (as opposed to the relations above).

 

Use case #2: Discovering the relation between a web page and its blog

 

In this example, I found the relation between a normal website (Library Law) and its blog (LibraryLaw Blog). What is interesting is that if you go to the Library Law web site, its blog is not clearly displayed. In Talk Digger, however, the relation between the two is clearly apparent.

 

Use case #3: Topic-specific blogs and web sites.

Another interesting pattern is the one created by topic-specific blogs and web sites.

 

 

In this example, I used the Micro Persuasion blog written by Steve Rubel. This blog is focused on Web 2.0 news. As you can see, the “Micro Persuasion” conversation is in relation with (talks about) the conversations of other Web 2.0 services like “del.icio.us”, “Rollyo” and “Netvibes”.

So the relations here are topic-centered.

 

Use case #4: Egocentric blogger.

This use case is fascinating because it shows how a blogger’s own posts can relate to one another.

 

 

In this example, Robert Sanzalone, the writer behind the Pacific IT blog, started tracking conversations for many of his blog posts. That way, we can easily visualize how one post relates to the others.

 

Use case #5: Who cares about my photos?

Some people also care about what others say about their photos.

 

 

If we check the conversation evolving around nattu’s Flickr photo album, we will see two things:

  1. That the conversation created by this photo album is in relation with another conversation tracked by Talk Digger.
  2. That many other people care about the conversations evolving around other people’s photo albums.

 

Use case #6: In the news

Other people like to know what conversation is evolving around specific pieces of news.

 

 

This is really interesting. We have a piece of news from ZDNet called “The new meaning of programming”. We instantly know that it relates to another conversation called “SocialNets & The Power of The URL”.

We also know that, later on, other pieces of news talked about it: “Mark Cuban is Wrong”, etc.

It is really interesting to find out how news items relate to one another.

 

Use case #7: Online communities’ users.

Other people like to know about the conversation evolving around their online persona on community web sites like MySpace and LiveJournal.

 

 

In this example, we want to see the conversation about the user “2220s” on MySpace. As we can see, 22-20s’s LiveJournal is talking about him.

We can also see a list of conversations evolving around many other MySpace users’ pages.

 

Conclusion

As we saw, depending on the source (URL), many different relationship patterns can emerge from Talk Digger’s conversations.

These preliminary results are quite exciting considering that I only started crawling URLs yesterday. I think the infrastructure I have developed over the past months is promising; the next steps are to continue crawling URLs and to get users onto the system.

 

Subscribe to the Alpha version of Talk Digger

If you would like to test these features, you can always sign up for a user account. The next round of account creation is planned for mid-August.
