Why use SWFP rather than HTTP over SSL?

Daniel Lemire asked this legitimate question after reading the SWF protocol. Here is my answer to his question; I added it as section 7 of my SWFP paper.

The question is hard to answer because it depends on many factors. I’ll compare the two approaches and try to show you the differences between the two protocols.

Usually SSL is used to authenticate the server to the client and, optionally, the client to the server. At about £100 each, authentication certificates are more than most clients can afford, which is why SSL is mainly used to authenticate servers.

Our goal is above all to authenticate readers to the server. This is one reason why using SSL as both a secure channel and an authentication protocol is not that useful: the implementation cost is too high, as with the revised version of SWFP in section 5.

This is the big difference between SWFP and SSL: their goals.

A solution could be to use HTTP over SSL (HTTPS) with HTTP Authentication. HTTPS would provide the secure channel and HTTP Authentication would provide the authentication mechanism. The problem with this solution is that some feed readers only implement HTTPS, others only HTTP Authentication, and few implement both. Another problem is that HTTP Authentication implies a login and a password. In SWFP, authentication is inherent to the system: it is done with the public key of the legitimate reader, which is kept in the server’s secure database. The authentication steps between the reader and the server are transparent to the user. I think this transparency is an important feature because it simplifies the process and brings non-expert users to adopt it. Only the things that appear simple are widely used.
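To give an idea of what this public-key authentication looks like, here is a rough sketch in Python. To be clear, this is a generic challenge-response illustration built with the cryptography package, not the actual handshake described in the paper, and the key store is invented for the example.

```python
# Generic challenge-response sketch (NOT the actual SWFP handshake):
# the server keeps each legitimate reader's public key in its secure
# database, and a reader proves its identity by signing a random
# challenge with the matching private key.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Reader side: only the public half is registered on the server.
reader_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_db = {"reader@example.com": reader_key.public_key()}  # hypothetical store

# The server sends a random challenge; the reader signs it.
challenge = os.urandom(32)
signature = reader_key.sign(challenge, PSS, hashes.SHA256())

# The server checks the signature against the stored public key.
# verify() raises InvalidSignature if the reader is not legitimate,
# and no login or password was ever typed by the user.
server_db["reader@example.com"].verify(signature, challenge, PSS, hashes.SHA256())
print("reader authenticated")
```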

Two types of feed readers are available: web applications like Bloglines and standalone software like Omea Reader. Both approaches, HTTPS with HTTP Authentication and SWFP, could be implemented in standalone software, and the implementation time, cost and difficulty are probably comparable. However, I think that SWFP would be much easier to implement in web applications. Why? To use HTTPS with HTTP Authentication, a web application would need to create the secure channel itself with the feed’s server. For example, Bloglines would need to create a secure channel with each private feed server; I don’t think that is imaginable. With SWFP, nothing like that is necessary, because the encrypted feed is viewable by anyone who requests it, even web applications. If I check the FeedBurner stats of my blog, 30% of my readers use Bloglines. I think that is considerable and that we need to take this fact into account.

Another problem with the HTTP Authentication solution is that it is not optimal for our problem. If a user is subscribed to many private feeds, he will need to enter a login and a password for each of them, every time he checks his feeds. Personally, I don’t think this is viable. Think about the pain such a situation would engender… nobody would subscribe to such feeds.

Finally, one of the beauties of web feeds is that you can archive them for future reading. The problem with the HTTPS solution is that you don’t really have the choice of archiving the encrypted or the unencrypted content. Such a choice is possible with SWFP.


SWFP: Secure Web Feed Protocol – A protocol to ensure a secure channel to web feeds

Last weekend an idea passed through my mind: “It seems that more and more companies are using content syndication technologies to broadcast their news or information to their employees.” So I started to write a protocol to address this fact. It’s called SWFP, the Secure Web Feed Protocol.

“SWFP is a protocol to ensure the secure broadcasting of web feeds’ content over a local network or the Internet. The protocol ensures the encryption of the feeds and the distribution of their symmetric encryption keys.”
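To give a feel for the principle quoted above, here is a toy sketch in Python. It is only an illustration of the idea, not the wire format of the paper: it uses the cryptography package, and the feed content and key sizes are my own choices for the example. The feed is encrypted once with a symmetric key, and that key is distributed wrapped with each legitimate reader’s public key:

```python
# Toy sketch of the SWFP idea: encrypt the feed with one symmetric key,
# then distribute that key wrapped with each reader's public key.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

feed_xml = b"<rss version='2.0'>...private entries...</rss>"

# Publisher side: the encrypted feed can be published openly.
feed_key = Fernet.generate_key()
encrypted_feed = Fernet(feed_key).encrypt(feed_xml)

# Wrap the symmetric key for one subscribed reader with RSA-OAEP.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
reader_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrapped_key = reader_key.public_key().encrypt(feed_key, oaep)

# Reader side: unwrap the symmetric key, then decrypt the feed.
recovered_key = reader_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(encrypted_feed) == feed_xml
```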

It was supposed to be the draft of an idea, something to post here. It finally turned out to be a 12-page article. I worked on it this week and came up with this first draft:


View: SWFP: Secure Web Feed Protocol [PDF file]

If you have any questions about this paper, don’t hesitate to contact me. If you find flaws in the protocol or have modifications to suggest, send them to me; they’ll be warmly welcomed. I also invite you to leave your comments about this paper here, on this post.


Internet technologies in our high schools – My journey in a high school website integration project

8:30 AM. I was en route to one of the worst high schools of the region. The worst? That’s what people say. Is it because there are fights in some classes that it’s the worst school? The worst… it’s just a school, not a prison. How could a school be as bad as people say? There are only children there.

Why was I going to one of the worst high schools of the region at 8:30 AM? To help one of my friends with one of her school projects. I was not there as a bodyguard in case her students started another fight in class; no.

She’s a French teacher and she wondered how to teach the concept of the explanatory text. She didn’t want to simply ask her students to write one. No, she wanted a full multi-discipline integrated project. What was her project? To build a web page for each of her students’ explanatory texts, on the school’s website, for three whole groups.

There were some problems. No one in the school had ever started such a project. She got some help from the technical staff, but it took a long time to get them moving and working. The real problem was that she needed to build the website with Dreamweaver. Yup, Dreamweaver. I was astonished to hear it. One of the worst high schools of the region had bought 32 Dreamweaver licenses for its computers. I never understood why, but it was a fact. No one in the whole school knew how to use the software. That’s why the administration asked her to use it for her website project.

That’s where I make my appearance in the story. I showed her how to use Dreamweaver, I helped her build the website architecture, and I helped her students build their web pages.

She was really courageous to start such a project and I’m really impressed by what she has done. She spent between 30 and 40 hours of unpaid work in the last weeks to build, correct and integrate this website. I have great respect for people who don’t fear what they don’t know and who work to learn how things work.

9:00 AM. I was in the class with her students. I handed out a sheet that described what they had to do in Dreamweaver to build their web pages. The class started; the students were wonderful. They worked in teams: one would type the text into the web page while the other played a Flash game somewhere on the Internet. You think that procedure was not really productive? In one hour, every team had its text typed, and all the web pages had graphics and animated GIFs included everywhere. There were no major problems. Yes, it was a productive hour.

10:20 AM. I couldn’t believe it, but everything was done, the bell had rung and the class was finished. Was it as bad as people said? Certainly not. The students were wonderful and worked really effectively. I would never have imagined it before.

Sure, using Dreamweaver for this type of project was not the best idea in the world. That was an administrative decision, and my friend overcame the problems that came with it. In my high school days I didn’t have the possibility to work on such a project: computers were not what they are today and the Internet wasn’t really well known. But now that we have the technologies to develop such projects, why not use them? I would have loved to build a web page when I was studying the explanatory text in my French classes.

Are teachers ready to enter the information age with their students? I don’t think so. Only the daring ones, like my friend, will enter it. Why? Because technologies are not well understood, and much of the time they are hard to use. Building such a project takes time, hard work and frustration. People fear doing it because they don’t know how it works or because they don’t want to put in the time it takes.

I encourage teachers to try to integrate technologies such as the Internet into their traditional classes. It is always appreciated by students, and it shows them the possibilities that such technologies can offer them in the future. It’s the first step they need to climb to eventually tame these technologies. It’s not because you are a French teacher that you can’t do it: she is one, and she did it.

4:00 PM. Was the school as bad as people say? Definitely not. Sure, some students have problems, like anybody on this earth, but they were polite and kind. It was a wonderful journey in the worst high school of the region.


Too much information is as useless as not enough

Lifehacker notified me that a new Del.icio.us interface was being tested. I followed the instructions to see it: I typed http://del.icio.us/new/fredonsomething into my browser and pressed Enter.

When I saw the thing for the first time, I stopped breathing. My brain was not able to process what I was seeing. I was like a protagonist in a story by H.P. Lovecraft. What was that?

Slowly, I started to understand what was going on with Del.icio.us. Thousands of words were spread across my screen like an endless vortex of words and colors.

I wondered where my tags were. I checked my screen closely, trying to decipher something from this gibberish. I finally found out that my tags were there, in different colors depending on the number of links tagged with them. I figured out that it was my old one-column tag list squeezed into a table.

I started to use Del.icio.us some months ago. Lately I had only been adding bookmarks to my account, without looking at my home page. I never, ever thought that I had generated so many tags with only 112 entries.

Once I realized that, I started to like the new interface; it’s an improvement on the old one, no doubt. But it raises a question about tags.

Sure, a single word can’t capture the meaning of a resource by itself. That’s why you need many tags to describe the meaning of a resource. In that case, it’s normal to have more tags than resources (links, in our case).

But is an interface that manipulates all these tags really what a user needs? Do I need to see every tag that describes my resources? Could we introduce a concept of meta-tags to help the user handle this mass of (not always useful) information?
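To illustrate what I have in mind, here is a rough sketch in Python; the grouping and the counts are invented for the example. A meta-tag would simply be a named group of ordinary tags, so the interface could show a handful of groups instead of hundreds of words:

```python
# Hypothetical illustration of meta-tags: a meta-tag is a named group
# of ordinary tags, so an interface could display a few groups instead
# of every individual tag.
tag_counts = {"python": 12, "rdf": 5, "owl": 3, "recipes": 2, "coffee": 1}

meta_tags = {  # invented grouping, chosen by the user
    "programming": ["python", "rdf", "owl"],
    "food": ["recipes", "coffee"],
}

def meta_tag_counts(meta_tags, tag_counts):
    """Collapse individual tag counts into one count per meta-tag."""
    return {
        meta: sum(tag_counts.get(tag, 0) for tag in tags)
        for meta, tags in meta_tags.items()
    }

print(meta_tag_counts(meta_tags, tag_counts))
# {'programming': 20, 'food': 3}
```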

Tags can be useful, but too many tags are like not enough: useless.

The Semantic Web is not a myth: it’s a future reality

I just finished reading this old post that talks about the myth of the Semantic Web. The author’s two points are:

  1. Pure laziness. It’s extra work to tag everything with metadata.
  2. RDF is nearly impossible to understand. That’s the biggest rub. RDF, like so many other standards to come out of IETF/W3C is almost incomprehensible to anyone who didn’t write the standard.

The thing is that RDF is not intended to be easily understood by humans the way simple XML documents are; RDF is intended to be understood by machines. It’s a really basic ontology language. RDF and RDFS are flexible but not as expressive as we would like, which is why other ontology languages have been created.
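To make this concrete, here is a small sketch using the rdflib Python library; the resource URI and the values are invented for the example. An RDF statement is a machine-friendly (subject, predicate, object) triple, and the RDF/XML serialization at the end is the part humans find incomprehensible:

```python
# Everything in RDF is a (subject, predicate, object) triple,
# which is trivial for a machine to store and query.
from rdflib import Graph, Literal, Namespace, URIRef

DC = Namespace("http://purl.org/dc/elements/1.1/")

g = Graph()
post = URIRef("http://example.org/blog/swfp")  # invented resource
g.add((post, DC.title, Literal("SWFP: Secure Web Feed Protocol")))
g.add((post, DC.date, Literal("2005-10-30")))  # invented value

# A machine can walk the triples directly...
for subject, predicate, obj in g:
    print(subject, predicate, obj)

# ...while the RDF/XML serialization is what scares humans away.
print(g.serialize(format="xml"))
```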

I said that it’s a simple language… for machines, not for us. The thing is, I think we will need such languages in the future to be able to handle the mass of information that the Internet has become.

That’s why we will need to build applications that create these files for us. This is what the Semantic Web is missing: an infrastructure of fully integrated, easy-to-use applications.

The Semantic Web is not a myth; it’s a future reality. It’s in its infancy and it will grow. RSS is a result of the Semantic Web. Mr. Cauldwell also said:

“The closest that anyone has come to using RDF in any real way is RSS, which has turned out to be so successful because it is accessible. It’s not hard to understand how RSS is supposed to work, which is why it’s not really RDF.”

He is right, but the thing is that we will need to develop applications that take files as easy to create and understand as RSS and migrate them, automatically, into a more expressive ontology language like RDF or OWL. It will not be our job; it will be the job of applications. Why? Because these languages aren’t suitable for humans.
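Here is a toy sketch of the kind of application I mean, again in Python with rdflib; the feed snippet and the Dublin Core mapping are my own invention for the example. It reads a plain RSS 2.0 item and re-expresses it as RDF triples, so no human ever writes the RDF by hand:

```python
# Toy sketch of such an application: read ordinary RSS 2.0 and
# re-express each item as RDF triples, with no human writing RDF.
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace, URIRef

rss = """<rss version="2.0"><channel>
  <item>
    <title>SWFP: Secure Web Feed Protocol</title>
    <link>http://example.org/blog/swfp</link>
  </item>
</channel></rss>"""

DC = Namespace("http://purl.org/dc/elements/1.1/")
g = Graph()

# Each RSS item becomes a subject; its title becomes a dc:title triple.
for item in ET.fromstring(rss).iter("item"):
    subject = URIRef(item.findtext("link"))
    g.add((subject, DC.title, Literal(item.findtext("title"))))

print(g.serialize(format="xml"))  # the RDF nobody had to write by hand
```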

Think about the infancy of computer programming. We first started to code in assembly. It was a simple wrapper over machine code, but it was not really suitable for humans. Eventually we created higher-level languages, like C or Pascal, to handle this suitability problem. They were much more comprehensible to humans; that was their only task: to be comprehensible to human programmers. This is how it works: a special application transforms the human-readable C code into less readable assembly code, which is finally converted into machine code, incomprehensible to humans but fully understood by machines. It’s the same thing we will need to do with these ontology languages.
