Winnipeg City’s NOW [Data] Portal

The City of Winnipeg’s NOW (Neighbourhoods Of Winnipeg) Portal is an initiative to create a complete neighbourhood web portal for its citizens. At the core of the project is a set of about 47 fully linked, integrated and structured datasets of things of interest to Winnipeggers. The focal point of the portal is Winnipeg’s 236 neighbourhoods, which define its main structure. The portal has six main sections: topics of interest, maps, history, census, images and economic development. It is meant to be used by citizens to find things of interest in their neighbourhood, to learn about its history, to see images of those things of interest, to find tools that support economic development, and so on.

The NOW portal is not new; Structured Dynamics was also its main technical contractor for its first release in 2013. However, we have just finished helping the City of Winnipeg’s NOW team migrate their older portal from OSF 1.x to OSF 3.x and from Drupal 6 to Drupal 7, and we trained them on the new system. Major improvements accompany this upgrade, but the user interface design is essentially the same.

First I will introduce each major section of the portal and explain its main features. Then I will discuss the improvements that come with this upgrade.

Datasets

A NOW portal user won’t notice any of this, but the main feature of the portal is the data it uses. The portal manages 47 (and growing) fully structured, integrated and linked datasets of things of interest to Winnipeggers. What the portal does is manage entities. Each kind of entity (swimming pools, parks, places, images, addresses, streets, etc.) is defined with multiple properties and values. Several entities reference other entities in other datasets (for example, an assessment parcel from the Assessment Parcels dataset references neighbourhood entities and property address entities from their respective datasets).

The fact that these datasets are fully structured and integrated means we can leverage these characteristics to create a powerful search experience: filtering the information on any of the properties, biasing searches depending on where a keyword match occurs, and so on.

Here is the list of all 47 datasets that currently exist in the portal:

  1. Aboriginal Service Providers
  2. Arenas
  3. Neighbourhoods of Winnipeg City
  4. Streets
  5. Economic Development Images
  6. Recreation & Leisure Images
  7. Neighbourhoods Images
  8. Volunteer Images
  9. Library Images
  10. Parks Images
  11. Census 2006
  12. Census 2001
  13. Winnipeg Internal Websites
  14. Winnipeg External Websites
  15. Heritage Buildings and Resources
  16. NOW Local Content Dataset
  17. Outdoor Swimming Pools
  18. Zoning Parcels
  19. School Divisions
  20. Property Addresses
  21. Wading Pools
  22. Electoral wards of Winnipeg City
  23. Assessment Parcels
  24. Libraries
  25. Community Centres
  26. Police Service Centers
  27. Community Gardens
  28. Leisure Centres
  29. Parks and Open Spaces
  30. Community Committee
  31. Commercial real estates
  32. Sports and Recreation Facilities
  33. Community Characterization Areas
  34. Indoor Swimming Pools
  35. Neighbourhood Clusters
  36. Fire and Paramedic Stations
  37. Bus Stops
  38. Fire and Paramedic Service Images
  39. Animal Services Images
  40. Skateboard Parks
  41. Daycare Nurseries
  42. Indoor Soccer Fields
  43. Schools
  44. Truck Routes
  45. Fire Stations
  46. Paramedic Stations
  47. Spray Parks Pads

Structured Search

The most useful feature of the portal, to me, is its full-text search engine. It is simple, clean and quite effective. The search engine is configured to return the most relevant results a NOW portal user may be searching for. For example, it will positively bias results that come from specific datasets, or matches that occur in specific property values. The goal of this biasing is to improve the quality of the returned results. This is relatively easy to do since the context of the portal is well known and everything is fully structured, which lets us confidently boost the scoring of search results.

Another major gain is that all the search results are fully templated. The engine does not simply return a title and a description for each result; it templates all the information the system has about the matched entity and displays the most relevant pieces to the user directly in the search results.

For example, if I search for an indoor swimming pool, in most cases it is probably because I want to call the front desk to get some information about the pool. This is why key information is displayed directly in the search results. That way, most users won’t even have to click on a result to get the information they were looking for.

Here is an example of a search for the keywords main street. As you can see, you get different kinds of results. Each result is templated to show the core information about the entity. You have the possibility to focus on particular kinds of entities, or to filter them by their location in specific neighbourhoods.

[Image: search results for “main street” (now--search-1)]

Templated Search Results

Now let’s look at some of the kinds of entities that can be searched on the portal and how they are presented to the user.

Here is an example of an assessment parcel located in the St. John’s neighbourhood. The address, the value, the type and the location of the parcel on a map are displayed directly in the search results.

[Image: templated search result for an assessment parcel (now--template-search-assessment-pacels)]

Another kind of entity that can be searched is property addresses. These are located on a map, and the value of the parcel, the building and the zoning of the address are displayed. The property is also linked to its assessment parcel entity, which can be clicked to get additional information about the parcel.

[Image: templated search result for a property address (now--template-search-property-address)]

Another interesting type of entity that can be searched is streets. What is interesting in this case is that you get the complete outline of the street directly on a map. That way you know where it starts, where it ends and where it is located in the city.

[Image: templated search result for a street (now--template-search-street)]

There are more than a thousand geo-localized images of all kinds of things in the city that can be searched. A thumbnail of the image and the location of its subject appear in the search results.

[Image: templated search result for a heritage building image (now--template-search-heritage-building-image)]

If you were searching for a nursery for your newborn child, you could quickly see the name, the location on a map and the phone number of the nursery directly in the search results.

[Image: templated search result for a daycare nursery (now--template-search-nurseries)]

These are just a few examples of the roughly fifty different kinds of entities that can appear like this in the search results.

Mapping

The mapping tool is another powerful feature of the portal. You can search just as if you were using the full-text search engine (the top search box on the portal), but you will only get results that can be geo-localized on a map. You can also simply browse the entities of a dataset, or filter entities by their properties and values. You can persist entities you find on the map and save the map for future reference.

In the example below, someone searched for a street (main street) and persisted it on the map. Then he searched for other things, like nurseries, and selected the ones near the persisted street. That way he can visualize the different entities known to the portal on a map, to better understand where things are located in the city, what exists near a certain location or within a neighbourhood, etc.

[Image: the mapping tool (now--map)]

Census Analysis

Census information is vital to the healthy development of a city. It is necessary for understanding the trends of a sector, who populates it, etc., so that the city and other organizations can plan their projects to have as much impact as possible.

These are some of the reasons why one of the main sections of the site is dedicated to census data. Key census indicators have been configured in the portal. Users can select different kinds of regions (neighbourhood clusters, community areas and electoral wards) to get the numbers for each of these indicators, and they can select multiple regions to compare them with one another. A chart view and a table view are available for presenting the census data.

[Image: the census analysis section (now--census)]

History, Images & Points of Interest

The City took the time to write the history of each of its neighbourhoods. In addition to that, they hired professional photographers to photograph the points of interest of the city, geo-localize the photos and write a description for each of them. Because of this dedication, users of the portal can learn much about the city in general and about the neighbourhood they live in. This is what the History and Images sections of the website are about.

[Image: the history section (now--history)]

Historic buildings are displayed on a map and they can be browsed from there.

[Image: heritage buildings displayed on a map (now--history-heritage-buildings)]

Images of points of interest in the neighbourhood are also located on a map.

[Image: images of points of interest on a map (now--history-heritage-resources)]

Find Your Neighbourhood

Ever wondered which neighbourhood you live in? No problem: go to the home page, enter your address in the Find your Neighbourhood section and you will know right away. From there you can learn more about your neighbourhood, such as its history, its points of interest, etc.

[Image: the Find your Neighbourhood section (now--find-your-neighbourhood)]

Your address will be located on a map, and your neighbourhood will be outlined around it. Not only will you know which neighbourhood you live in, you will also know where you live within it. From there you can click on the name of the neighbourhood to get to the neighbourhood’s page and start learning more about it: its history, photos of the points of interest that exist in it, etc.

[Image: an address located on a map with its neighbourhood outlined (now--find-your-neighbourhood-result)]

Browsing Content by Topic

Because all the content of the portal is fully structured, it is easy to browse it using a well-defined topic structure. The City developed its own ontology, which is used to help users browse the content of the portal by topics of interest. In the example below, I clicked the Economic Development node and then the Land use topic. Finally I clicked the Map button to display things related to land use: in this case, zoning and assessment parcels are displayed to the user.

This is another way to find meaningful and interesting content from the portal.

[Image: browsing content by topic (now--topics)]

Depending on the topic you choose, and the kind of information related to that topic, you may end up with different options like a map, a list of links to documents related to that topic, etc.

Export Content

Now that I have given an overview of each of the main features of the portal, let’s go back to the geeky things. The first thing I said about this portal is that, at its core, all the information it manages is fully structured, integrated and linked data. If you get to the page of an entity, you have the possibility to see the underlying data that exists about it in the system: simply click the Export tab at the top of the entity’s page, and you will have access to the description of that entity in multiple formats.

[Image: the Export tab of an entity page (now--export-entity)]

In the future, the City should (or at least I hope will) make the whole set of datasets fully downloadable; right now you only have access to this information through the per-entity export feature. I say “hope” because the NOW portal is completely disconnected from another initiative of the City: data.winnipeg.ca, which uses Socrata. The problem is that barely any of the NOW datasets are available on data.winnipeg.ca, and the ones that do appear are the raw versions (semi-structured, undocumented, unintegrated and unlinked): all the normalization, integration and linkage work done by the NOW team hasn’t been leveraged to improve the data.winnipeg.ca dataset catalog.

New with the upgrades

Those who are familiar with the NOW portal will notice a few changes. The user interface did not change that much, but multiple little things were improved in the process. I will cover the most notable of these changes.

The major changes happened in the backend of the portal. The data management in OSF for Drupal 7 is incompatible with what was available in Drupal 6. The management of entities became easier, and the configuration of OSF networks became a breeze. A revisioning system has been added, the user interface is more intuitive, etc. There is no comparison possible. However, portal users won’t notice any of this, since these are all site administrator functions.

The first thing users will notice is the completely new full-text search engine. The underlying search engine is almost the same, but the presentation is far better. Every entity type got its own template and is displayed in its own way in the search results. Results should usually be much more relevant, and filtering is easier and cleaner. The search experience is much better in my view.

The overall site performance is much better since different caching strategies have been put in place in OSF 3.x and OSF for Drupal. This means that most of the features of the portal should react more swiftly.

Every type of entity managed by the portal is now templated: its web page is laid out in a specific way to optimize the information it conveys to users, and so is its search result “mini page” when it is returned by a search query.

Multilingualism is now fully supported by the portal, although not all of the content has been translated yet. Expect a fully translated French version of the NOW portal in the future.

Creating a Network of Portals

One of the most interesting features that comes with this upgrade is that the NOW portal is now in a position to participate in a network of OSF instances. What does that mean? It means that the NOW portal could create partnerships with other local (regional, national or international) organizations to share datasets (and their maintenance costs).

Are there other organizations that use this kind of system? There is at least one other right in Winnipeg: MyPeg.ca, also developed by Structured Dynamics. MyPeg uses RDF to model its information and OSF to manage it. MyPeg is a non-profit organization that uses census (and other indicator) data to study the well-being of Winnipeggers. The team behind MyPeg.ca are research experts in indicator data, and their indicator datasets (which include census data) are top notch.

Let’s hypothesize that there were interest between the two groups in collaborating. Say the NOW portal would like to use MyPeg’s census datasets instead of its own, since they are more complete, more accurate and include a larger number of important indicators. What the NOW team would basically be doing is outsourcing the creation and maintenance of the census/indicator data to a local, dedicated and highly professional organization. The only things they would need to do are to:

  1. Formalize the relationship by signing a usage agreement
  2. Configure the MyPeg.ca OSF network in their OSF for Drupal instance
  3. Register the datasets they want to use from MyPeg.ca

Once these 3 steps are done (taking no more than a couple of minutes), the system administrators of the NOW portal could start using the MyPeg.ca indicator datasets as if they existed on their own network. (The reverse could also be true for MyPeg.) Everything would be transparent to them. From then on, all the fixes and updates performed by MyPeg.ca on their indicator datasets would immediately appear on the NOW portal and be accessible to its users.

This is one way to collaborate. Another possibility would be to share the serialized datasets on a routine basis (every month, every six months, every year) so that the NOW portal can re-import the datasets from the files shared by MyPeg.ca. This is also possible because both organizations use the same ontology to describe the indicator data, which means that no modification is required by the City to take the new information into account; they only have to import it and update their local datasets. This is the beauty of ontologies.

Conclusion

The new NOW portal is a great service for the citizens of Winnipeg. It is also a really good example of a web portal that leverages fully structured, integrated and linked data, and of the features that should accompany a municipal data portal.

Literate [Clojure] Programming: Anatomy of an Org-mode file

This blog post is the second in a series about Literate [Clojure] Programming, where I explain how I develop my [Clojure] applications using literate programming concepts and principles. In the previous blog post I outlined a project’s structure. In this one I will demonstrate how I normally structure an Org-mode file to discuss the problem I am trying to solve, to code the solution and to test it.

One of the benefits of Literate Programming is that the tools that implement its concepts (in this case Org-mode) give developers the possibility to write their code in whatever order they want, normally a more human-friendly one. This is one of the aspects I will cover in this article.

If you want to look at a really simple [Clojure] literate application, take a look at org-mode-clj-tests-utils (or its rendered version), which I created for my Creating And Running Unit Tests Directly In Source Files With Org-mode blog post. It should give you a good example of what a literate file that follows the structure discussed here looks like.

This series of blog posts about literate programming is composed of the following articles:

  1. Project folder structure
  2. Anatomy of an Org-mode file (this post)
  3. Tangling all project files
  4. Publishing documentation in multiple formats
  5. Unit Testing

Structure

A literate programming file can have any kind of structure. Depending on the task at hand, it can take the form of a laboratory notebook or of a software documentation file. The structure explained here is the one I use to develop normal applications in the Clojure programming language. In other blog posts I will explain other styles, but I will stick to this one for now.

The usual structure of a literate programming file is composed of the following sections:

  1. introduction
  2. main section
    1. sub-section
      1. introduction
      2. code/explanation/…/code/explanation
      3. unit tests
    2. sub-section
  3. complete namespace definition
    1. unit tests

Each sub-section has the same outline, but multiple levels of sub-sections can be created depending on the needs. Every code block is uniquely named (identified) and belongs to a section or a sub-section. The portion of the outline that lets you write the rest of your application in whatever order you want is the complete namespace definition section: it is where we “reconstruct” the code, in the order the compiler expects, before it is tangled (written to a standard source code file).

Introduction

Every file starts with a title and a description of the problem you are trying to solve and an overview of how you are trying to solve it. If required, subsections can always be added to the introduction to properly describe the problem and the solution to that problem. In any case, no code blocks are defined in the introduction; only text, images, tables of data or anything else that helps define a problem and its solution are included.

Note that the title of the file is defined using the #+TITLE: Org-mode markup.
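As a minimal sketch (the title and the text are hypothetical), the top of such a file might look like this:

  #+TITLE: Date Utilities

  This file describes a small set of date parsing utilities, the problem
  they solve and the approach taken to solve it.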

Main & Sub Sections

The main and sub-sections have the same outline; they only differ in their level of detail. You could have a series of main sections without any sub-sections, or a single main section with multiple levels of sub-sections. The split really depends on how you want to formulate the solution to the problem exposed in the introduction.

A section should define a portion of the solution you are developing. Its scope is defined by the developer and depends on how things are being solved: a more complex problem may require breaking the solution into more refined pieces, which calls for sub-sections (or multiple levels of them).

In any case, for each of these sections, I almost always define the following portions:

  1. introduction
  2. code/explanation/…/code/explanation
  3. unit tests

For each section, I try to introduce that portion of the solution with some text, images or data tables. Then I start coding the application, adding code blocks intertwined with text, iteratively, until that portion of the overall solution is completed. Finally, I define a unit tests part where I iteratively test the functions created in the section. The unit tests also document how the API can be used, by acting as usage examples.

It is also possible that you may want to define a section in your file that you don’t want to weave into the resulting documentation. This can easily be done by adding the :noexport: markup at the end of the section title, as shown below.
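For example, this heading (the section name is one I use later in this post) is tagged so that the section still gets tangled but is skipped when the documentation is weaved:

  * Complete Namespace Definition                                :noexport: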

Code Blocks

Each code block should be named, with a name that is unique across all the Org-mode files of your project. A name is defined using the #+NAME: markup placed before the code block. The name is quite important since it helps the reader understand the flow of your application; it should read as a short description of what the code block does.
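Here is a hypothetical example of a named Clojure code block (the function and its name are only illustrative):

  #+NAME: parse-date-string
  #+BEGIN_SRC clojure
  (defn parse-date
    "Parse a yyyy-MM-dd date string into a java.util.Date."
    [s]
    (.parse (java.text.SimpleDateFormat. "yyyy-MM-dd") s))
  #+END_SRC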

Code blocks in Org-mode have numerous options; we will only cover the few key ones here. Note that most of the other options will be defined in the Complete Namespace Definition section of the file, which is where we reference the names of the code blocks and order the code to be tangled from the literate file.

One of the key options of a code block is :results, which can take one of the following values: silent, value or output. Depending on what you want to show in the literate document, you can display the value returned by the code block, the output produced while the code runs, or nothing at all. The value or output of a code block’s execution appears in an example block underneath the code block.

Another important header option is :exports, which tells Org-mode how to weave the code block and its results. It has four values: code (the default), which only exports the code box; results, which only exports the results box; both, which exports both; and none, which exports nothing when weaving a document.
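As a small sketch combining both options, the following block is executed, its returned value is displayed underneath in a results block, and both boxes are kept when weaving:

  #+BEGIN_SRC clojure :results value :exports both
  (+ 1 1)
  #+END_SRC

  #+RESULTS:
  : 2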

As I said, many other header options exist, like the possibility to assign the result of a code block to a variable that can be passed to another block in your literate file, which lets you create workflows within your literate files. I won’t cover these options here since they are mostly used in the laboratory notebook style.

Unit Tests Blocks

At the end of each section, I usually define a Unit tests sub-section where I define the different unit tests for the code of that section or of one of its sub-sections. These tests are defined in a named code block. They unit test the functions created in the document, and they also serve as API usage examples. Each of the unit test blocks is aggregated into a test suite in the Unit Tests sub-section of the Complete Namespace Definition section (see below).

These unit tests are executed directly in the Org-mode file while I am developing them. This means that any issue is caught right away and fixed in the code of that section. Also, if the code is updated in the future, the unit tests can be re-executed right away, and any issue is reported directly in the Org-mode file without having to switch to any other testing facility.
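As a sketch (reusing the hypothetical parse-date function named earlier), a unit tests block for a section might look like this:

  #+NAME: parse-date-unit-tests
  #+BEGIN_SRC clojure :results silent
  (require '[clojure.test :refer [deftest is run-tests]])

  ;; Check that a well-formed date string is parsed into a Date instance.
  (deftest test-parse-date
    (is (instance? java.util.Date (parse-date "2016-11-07"))))

  (run-tests)
  #+END_SRC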

Complete Namespace Definition

At the end of each literate programming file, I create a Complete Namespace Definition section where I outline how the tangled code will be ordered in the generated source file. Generally we don’t want to export this section into the woven document, so I define it with the :noexport: markup at the end of the section name.

This section is where I define the header of my source files (usually the namespace declaration, import statements and such), where I order the code to be tangled, and where I define the code block header parameters related to tangling the code into the source code files.

It is in this section that you will understand why it is important to spend some time properly naming the code blocks in your file: it is these names that appear here, and they are what makes the outline of the code understandable.

There are 4 header parameters that I normally use for that code block:

  1. :tangle ../../../
  2. :mkdirp yes
  3. :noweb yes
  4. :results silent

First, we don’t want to output anything in the Org-mode file after executing the code block, so we set :results to silent. We want to use the noweb markup in the code block, so we set :noweb to yes. If one of the folders in the path specified by :tangle does not exist, we want Org-mode to create it for us instead of failing, hence :mkdirp yes. Finally, :tangle is set to the path where the tangled document will be written on the file system; the location of the source file has to comply with the structure of your application.
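Putting it all together, here is a hypothetical Complete Namespace Definition code block (the file path, the namespace and the block name are illustrative). When the file is tangled, Org-mode creates the target file, creating any missing folders, and replaces the noweb reference in double angle brackets with the content of the named block defined earlier in the file:

  #+BEGIN_SRC clojure :tangle ../src/date_utils/core.clj :mkdirp yes :noweb yes :results silent
  (ns date-utils.core)

  <<parse-date-string>>
  #+END_SRC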

The code block of the Unit tests sub-section is tangled into the unit test folder of your project.

Structure Navigation & Conclusion

One of the benefits of writing applications using Literate Programming principles is that we end up creating a much more human-readable outline of an application. We create sections, sub-sections, etc., just as when writing an article or a book, where each section focuses on one aspect of what you are writing about. To me, this is a much more natural way to work on solving a problem. It is also much easier to share with non-developers who need to understand how your applications behave. To these people, it is like reading a scientific article grounded in a mathematical framework: if you are not a mathematician (or even if you are, but are not familiar with the concepts discussed in the article), you will most likely, at least on a first read, read the article and understand its structure and ideas, but skip the boxes containing the equations. The same mindset applies here; it is just that non-developers will skip the boxes containing the code. In any case, they should be able to understand what you are doing, the problem you are trying to solve, and how you are trying to solve it.

This is why the structure that gets created when developing applications this way is quite interesting and beneficial. This structure is what I really like about Org-mode, which at its core is nothing other than a plain-text outliner. This means that Org-mode has several features to help you manipulate and navigate the outline structure of a text file. Just as in conventional programming with an IDE, where you can expand and collapse blocks of code, with Org-mode you can expand and collapse the outline of the document (created by the sections and sub-sections of your files).

This is quite powerful, since you can focus on a series of functions that solve a particular problem just by expanding its section and collapsing all the others. You can even display only the content of that section in the Emacs buffer by using C-x n s to narrow to an Org-mode region and C-x n w to widen back. This means that even if you have a single file with several thousand lines, it doesn’t really matter, since you can see any section of that file as if it were its own tiny file. This may be appealing to developers who don’t like a proliferation of files in their projects (in fact, they could end up with a single, well-structured master Org-mode file that gets tangled into multiple source code files).

Literate [Clojure] Programming Using Org-mode

Literate Programming is a great way to write computer software, particularly in fields like data science where data processing workflows are complex and often require a lot of background information. I started writing about Literate Programming a few months ago, and now it is time to formalize how I create Literate Programming applications.

This is the first post of a series that will cover the full workflow. I will demonstrate how I do Literate Programming for developing a Clojure application, but exactly the same workflow works for any other programming language supported by Org-mode (Python, R, etc.); the only thing required is to adapt the principles to the project structures of those other languages. The series of blog posts will cover:

  1. Project folder structure (this post)
  2. Anatomy of an Org-mode file
  3. Tangling all project files
  4. Publishing documentation in multiple formats
  5. Unit Testing

Clojure Project Folder Structure

The structure of a programming project can vary a lot. The structure I use when developing in Clojure is the one created by Leiningen, which I use to create and manage my Clojure projects. The structure of a simple project (in this case, the org-mode-clj-tests-utils project that I created for another blog post) looks like this:

- CHANGELOG.md
- LICENCE
- README.md
- resources
- pom.xml
- project.clj
- src
  - org_mode_clj_tests_utils
    - core.clj
- target
- test
  - org_mode_clj_tests_utils
    - core_test.clj

There are 4 main components to this structure:

  1. the project.clj file, which is used by Leiningen to configure the project
  2. the src folder, where the project’s code files [to be compiled] are located
  3. the target folder, where the compiled files will be available, and
  4. the test folder, where the unit tests for the source code are located

This kind of project outline is really simple and typical. Now let’s see what the structure would look like if this project were created using Literate [Clojure] Programming.

Literate Clojure Folder Structure

The best and cleanest way I have found to create and manage the Org-mode files is to create an org directory at the same level as the src one, and to replicate in it the folder structure that exists in the src folder. The names of the source files stay the same, except that they get the .org file extension. For example, the src/core.clj file becomes org/core.org, and the org/core.org file is used to tangle (create) the src/core.clj file.

The new structure looks like this:

- CHANGELOG.md
- LICENCE
- README.md
- resources
- org
  - project.org
  - org_mode_clj_tests_utils
    - core.org
- pom.xml
- project.clj
- src
  - org_mode_clj_tests_utils
    - core.clj
- target
- test
  - org_mode_clj_tests_utils
    - core_test.clj

The idea here is that every project file that needs to be modified becomes an Org-mode file: the source code files, the test files, possibly other documentation files and the project.clj file. When the Org-mode files are tangled, all the files required by the Clojure project are generated.

Anything I write for this project comes from an Org-mode file; all the development occurs in Org-mode. If someone wanted to modify such a Literate Clojure application, they would have to modify the Org-mode files and not the source files, otherwise their changes would be overwritten by the next tangling operation.

Utilities Org-mode Files

Finally, I created a series of Org-mode files that are used to perform special tasks such as:

  1. Tangling all project files at once, and
  2. Publishing documentation in multiple formats

These are Org-mode files that can be executed to perform these tasks. The file that tangles all project files at once is necessary if you haven’t changed the behavior of your Emacs to automatically tangle files on save.

The second file publishes the woven documentation in multiple formats (HTML, LaTeX, etc.), as required, all at once.
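I will detail these files later in the series, but as a hedged sketch of the general shape (the actual file content is not shown in this post), tangle-all.org could boil down to a single Emacs Lisp block that walks the org folder and tangles every file it finds:

  #+BEGIN_SRC emacs-lisp :results silent
  (require 'ob-tangle)

  ;; Tangle every Org-mode file of the project into its source file(s).
  (dolist (file (directory-files-recursively "./" "\\.org$"))
    (org-babel-tangle-file file))
  #+END_SRC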

These two files are located directly in the org folder. I will explain how they work in subsequent posts of this series. The final structure of a Literate Clojure project is:

- CHANGELOG.md
- LICENCE
- README.md
- resources
- org
  - project.org
  - publish.org
  - tangle-all.org
  - setup.org
  - org_mode_clj_tests_utils
    - core.org
- pom.xml
- project.clj
- src
  - org_mode_clj_tests_utils
    - core.clj
- target
- test
  - org_mode_clj_tests_utils
    - core_test.clj

Conclusion

As you can see, a Literate Clojure application is not that different on disk. The change in the way such an application is programmed is far more profound than the small changes that occur at the level of the folder structure.

There is still an open question related to publishing this kind of literate work in repositories such as Git: should only the org folder be added to the repository, or should the tangled files be added as well? In an ideal world, only the org files would need to go into the repository. However, depending on the nature of the work (work only accessible to you, work accessible to a group of people who know Org-mode, a project made public on GitHub, etc.), we may have to commit the tangled files too. In the case of an open source project, I think it is required, since many people unfamiliar with Org-mode won’t be able to use the codebase because they won’t be able to tangle it from the Org files. For this reason, I tend to publish the org files along with all the files that get tangled from them. That way I am sure that even users who don’t know anything about Org-mode or Literate Programming can still use the code. The only thing I take care of is to commit the Org file and the tangled file related to a specific change in the same commit, rather than creating two commits, one for each file.

The next blog post of this series will explain how the Org-mode source files are actually created, what their internal structure is, and how they are organized and used.



