Companies check blogs too – What do customers say about your products?

Blogging is good for anyone. It's an easy-to-use publication platform, and everybody can easily create and maintain their own blog. The blogosphere is an environment where people can talk about what they think, without restrictions. They write about what they really believe.

This special conversation environment is a gold mine for businesses. Why? Because they can learn what their customers really think about their products, for better or for worse. This is possible because bloggers are also customers; not ordinary ones, but customers who say what they really think.

JetBrains, the Russian company that created Omea Reader, seems to be one of these companies. Every time I write "Omea" in a post, I see incoming Russian connections from Feedster or Technorati, with intellij.net as the referring domain and "Omea" as the query string.

What does this tell me? It tells me that JetBrains cares about what people say about their products. It's probably a way to upgrade them with the features their clients wish to use. They probably have a client-centric vision of application development. They won't add features for fun; they will add the features their users wish to have, and they will improve the existing features their users actually use.

I may be right or wrong in writing this, but that's what the situation looks like. True or not, I think every company should do it. It's a gold mine for them: the unbiased opinions of thousands of clients, users and customers.


Two reasons why writing is now so important to me

Why is writing now so important to me?

Because writing is thinking.

While I write things, I think about them. It's a moment I take in the day to think about the things flying around in my mind.

Sometimes I write my short- and long-term goals on a sheet of paper. I check them; I check what I'm doing right now to reach them. While I'm writing them down, I think about them and make them clear in my mind.

Because writing is learning.

While I write things, I learn from them. Sometimes things emerge from my subconscious, and I learn from them too. They give me a new angle of attack for understanding the thoughts I was writing about.


Semantic web is not a myth: it’s a future reality

I just finished reading this old post that talks about the myth of the Semantic Web. The author's two points are:

  1. Pure laziness. It’s extra work to tag everything with metadata.
  2. RDF is nearly impossible to understand. That’s the biggest rub. RDF, like so many other standards to come out of IETF/W3C is almost incomprehensible to anyone who didn’t write the standard.

The thing is that RDF is not intended to be easily understood by humans the way simple XML documents are. RDF is intended to be understood by machines. It's a really basic ontology language. RDF and RDFS are flexible but not as expressive as we would like; that's why other ontology languages have been created.

I said that this is a simple language… for machines, not for us. The thing is that I think we will need such languages in the future to handle the mass of information that the Internet has become.

That's why we will need to build applications that build these files for us. That's what the Semantic Web is missing: an infrastructure of fully integrated, easy-to-use end-user applications.
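
To make the point concrete, here is a minimal Python sketch of what such an application could do, using the rdflib library and made-up namespaces, post names and an invented author name. The user supplies plain values; the software writes the RDF statements.

```python
from rdflib import Graph, Literal, Namespace

# Hypothetical namespaces and post data, used only for illustration.
DC = Namespace("http://purl.org/dc/elements/1.1/")
EX = Namespace("http://example.org/posts/")

def describe_post(slug, title, author):
    """Build RDF statements about a blog post so its author never has to
    touch RDF syntax: the application writes the metadata for us."""
    g = Graph()
    post = EX[slug]
    g.add((post, DC.title, Literal(title)))
    g.add((post, DC.creator, Literal(author)))
    return g

g = describe_post("semantic-web", "Semantic web is not a myth", "A. Blogger")
# Humans never need to read this serialization; it is meant for machines.
print(g.serialize(format="xml"))
```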

The Semantic Web is not a myth; it's a future reality. It's in its infancy and it will grow. RSS is a result of the Semantic Web. Mr. Cauldwell also said:

“The closest that anyone has come to using RDF in any real way is RSS, which has turned out to be so successful because it is accessible. It’s not hard to understand how RSS is supposed to work, which is why it’s not really RDF. “

He is right, but the thing is that we will need to develop applications that take these easy-to-create, easy-to-understand RSS files and migrate them, automatically, into a more expressive ontology language like RDF or OWL. It won't be our job; it will be the job of applications. Why? Because these languages aren't suitable for humans.
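
As an illustration of that kind of application, here is a small Python sketch, assuming an invented RSS snippet and Dublin Core as the target vocabulary, that reads ordinary RSS items and re-expresses them as RDF triples.

```python
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace, URIRef

DC = Namespace("http://purl.org/dc/elements/1.1/")

# Invented RSS snippet standing in for a real feed.
RSS = """<rss version="2.0"><channel><item>
  <title>Companies check blogs too</title>
  <link>http://example.org/posts/companies-check-blogs</link>
</item></channel></rss>"""

def rss_to_rdf(rss_xml):
    """Translate easy-to-write RSS items into RDF triples automatically,
    so software, not people, closes the expressiveness gap."""
    g = Graph()
    for item in ET.fromstring(rss_xml).iter("item"):
        subject = URIRef(item.findtext("link"))
        g.add((subject, DC.title, Literal(item.findtext("title"))))
    return g

print(rss_to_rdf(RSS).serialize(format="turtle"))
```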

Think about the infancy of computer programming. We first started to code in assembler. It was a simple wrapper around machine code, but it was not really suitable for humans to use. Eventually we created higher-level languages, like C or Pascal, to handle the suitability problem. They were much more comprehensible to humans; that was their only task: to be comprehensible to human programmers. This is how it works: a special application transforms the human-readable C code into less readable assembly code, which is finally converted into machine code that is incomprehensible to humans but fully understood by machines. It's the same thing we will need to do with these languages.


The problems with tags depend on two factors: the authors and the words used as tags

Tagging is the action of attaching words to a resource (a document, an image, etc.). It's a way to categorize and organize these resources.

You can perform this tagging on your own, for instance in Gmail by attaching tags to your incoming messages, or in a social network, for example by tagging bookmark entries in Del.icio.us.

Basically, tags are just separate words linked to a resource. The author can put down words that do or don't have any semantic relation to one another, and words that are or aren't semantically related to the tagged resource. The entire tagging job is done at the discretion of the author.

The act of tagging is thus the first factor in the formula that describes the success or failure of a system using these tags.

The second factor is how a system uses these tags. A basic system will bind and show all resources carrying the same tag, written with exactly the same letters. So "blog" and "blogs" are not the same tag: resources tagged with "blog" will not be bound and shown with resources tagged with "blogs".
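
Here is a toy Python sketch of that basic, letter-for-letter behaviour (the resource names and tags are invented for illustration):

```python
from collections import defaultdict

# Toy index of the naive kind described above: tags are matched letter for
# letter, so "blog" and "blogs" end up as two unrelated buckets.
index = defaultdict(set)

def tag(resource, *words):
    for word in words:
        index[word].add(resource)

tag("post-about-rss", "blog", "rss")
tag("post-about-wikis", "blogs", "wiki")

print(index["blog"])   # {'post-about-rss'}  -- the "blogs" post is missed
print(index["blogs"])  # {'post-about-wikis'}
```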

Given this problem, some tag-based systems suggest a list of related tags. Some perform well at this task, like Technorati, which seems to suggest semantically related tags.

Others perform poorly because they only suggest tags that contain the searched word. For example, if I search for the "blog" tag, such a system will suggest categories like "blogs", "anablog", "tierryisblog", etc. This method is clearly ineffective and probably useless.
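
A small Python sketch of that substring-based suggestion method, using invented tag names, shows why the results are mostly noise:

```python
known_tags = ["blog", "blogs", "anablog", "tierryisblog", "weblog", "photography"]

def suggest(query):
    """Substring-based suggestion of the kind criticized above: anything
    that merely contains the query is returned, whatever it means."""
    return [t for t in known_tags if query in t and t != query]

print(suggest("blog"))  # ['blogs', 'anablog', 'tierryisblog', 'weblog']
```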

I think this feature offered by tag-based systems is just a plug-in implemented to try to cope with the problem.

How could we upgrade the tagging idea to get rid of such a feature and remove part of the responsibility from the tagging authors in the whole process? I think the principles of the Semantic Web could help us to upgrade the tagging idea.

How would this work? Intuitively, it would work like this (a rough code sketch follows the list):

  1. Consider the group of tags that describe a resource as a resource in itself.
  2. Systems like Technorati would scan posts to extract these "tag resources".
  3. The system would then link all these "tag resources" according to an ontology that relates them semantically to one another.
  4. Finally, when a user makes a tag search query, the results would include not only the resources carrying that specific tag but also all the other resources that are semantically related to the searched tag(s).
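
Here is the rough sketch promised above: a toy Python version of steps 1 to 4, where the "ontology" is just a hand-written table of related tags, and all tag and resource names are invented for illustration.

```python
# Hand-written stand-in for an ontology relating "tag resources" to one
# another; a real system would derive this from RDF/OWL data.
related = {
    "blog": {"blogs", "weblog"},
    "rdf": {"owl", "ontology"},
}

# Resources indexed by the tags their authors chose.
index = {
    "blog": {"post-1"},
    "weblog": {"post-2"},
    "owl": {"post-3"},
}

def search(tag):
    """Step 4: return the resources carrying the tag itself plus the
    resources carrying semantically related tags."""
    results = set(index.get(tag, set()))
    for other in related.get(tag, set()):
        results |= index.get(other, set())
    return results

print(search("blog"))  # {'post-1', 'post-2'}
```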

In this post I'm talking about a new way to see tags: tags as resources with a semantic meaning, not just as words that, theoretically, describe a resource.


The life span of a blog discussion seems to be ephemeral – Is there a way to change the situation?

It seems that there are two problems with blog discussions that use comments:

  1. People who start a discussion by commenting on a post don't seem to check back for new comments on it.
  2. If the post is more than a few days old, nobody will comment on it.

Some will say that this is normal because blogs are used to publish thoughts of the moment, and old thoughts aren't worth commenting on. If that's how they see blogs, they are probably right.

The thing is that I don't see blogs this way. Blogs seem to be a really interesting knowledge management tool. From this perspective, it would be healthy to comment on old posts: to upgrade the idea behind them with the new knowledge people have acquired since.

The problem is that nobody will see these changes, because the posts will be lost among all the newer posts.

If we take as a premise that comments are an integral part of a post, with the same information value, would it be interesting to change the post's position in the lifeline of the blog with an updated date? A good way to do this would probably be to include an "update" section that lists the latest changes performed on posts. A change would be an update to the post's body or a new comment posted on it by a reader.
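
As a rough Python sketch of this idea (with invented posts and dates), the blog's lifeline would simply be ordered by each post's most recent activity, whether that is the publication itself, a body update, or a new comment:

```python
from datetime import datetime

# Invented posts: each one records its publication date and the dates of
# later activity (body updates or reader comments).
posts = [
    {"title": "Old post about wikis", "published": datetime(2005, 1, 3),
     "activity": [datetime(2005, 6, 1)]},   # a fresh comment just came in
    {"title": "Brand new post", "published": datetime(2005, 5, 30),
     "activity": []},
]

def last_activity(post):
    """Treat comments as an integral part of the post: its place in the
    lifeline depends on its most recent activity, not its publication date."""
    return max([post["published"]] + post["activity"])

for post in sorted(posts, key=last_activity, reverse=True):
    print(last_activity(post).date(), post["title"])
```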

Think about Wikis; it would be a good and elegant way to give life back to old posts (ideas, knowledge).

A few blogs have implemented a comments feed. The idea is good, but is it increasing the life span of blog discussions? Take Scoble's comment blog (are you reading every "scoble" mention in the blogosphere's posts? 😉 ). Is it increasing the life span of his posts? I don't see it. If the post is the sixth of the day, the comments attached to it will fade out and the post will be left for dead.

In this case, would a solution be to include comments in the main feed of the blog? Keep in mind that we are working under the assumption that comments are an integral part of a post, with the same information value. Personally, I think it would be a solution, but it wouldn't be applicable with the current RSS specification; it just isn't specified for this purpose.
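
Setting the RSS limitation aside, a tiny Python sketch of the idea (with invented titles and dates) would just merge posts and comments into one chronological stream:

```python
from datetime import datetime

# Invented items: posts and comments treated as equal feed entries.
posts = [("Post: Semantic web is not a myth", datetime(2005, 5, 28))]
comments = [("Comment on 'Semantic web is not a myth'", datetime(2005, 6, 2))]

# One chronological stream, as if comments were first-class items of the
# blog's main feed.
for title, published in sorted(posts + comments, key=lambda e: e[1], reverse=True):
    print(published.date(), title)
```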

Finally, I don't think the current structure of blogs is built to give posts a respectable life span. It just can't work well with the current structure.
