I am giving a talk (in French) at the 85th edition of the ACFAS congress on May 9. I will discuss the engineering aspects of doing machine learning. More importantly, I will discuss how Semantic Web techniques, technologies and specifications can help solve these engineering problems, and how they can be leveraged and integrated in a machine learning workflow.
My talk draws on my work in the Semantic Web field over the last 15 years, on my more recent work creating the KBpedia Knowledge Graph at Cognonto, and on how both influenced our work developing different machine learning solutions to integrate data, to extend knowledge structures, to tag and disambiguate concepts and entities in corpuses of texts, etc.
One thing we experienced is that most of the work involved in such projects is not directly related to machine learning problems (or at least not to the usage of machine learning algorithms). I recently read a survey conducted by CrowdFlower in 2016 that supports what we experienced. They surveyed about 80 data scientists to find out “where they feel their profession is going, [and] what their day-to-day job is like.” To the question “What data scientists spend the most time doing,” they answered:
Continue reading “A Machine Learning Workflow”
I am proud to announce the immediate release of the KBpedia Knowledge Graph version 1.40. This new version of the knowledge graph includes 53,739 concepts, which is 14,687 more than the previous version. It also includes 251,848 new alternative labels for 20,538 previously existing concepts, and 542 new definitions.
This new version of KBpedia will have an impact on multiple knowledge graph related tasks, such as concept and entity tagging, and on most of the existing Cognonto use cases. I will be discussing these updates and their effects on the use cases in a forthcoming series of blog posts.
But the key topic of this blog post is this: how have we been able to increase the coverage of the KBpedia Knowledge Graph by 37.6% while keeping it consistent (that is, it contains no contradictory facts) and satisfiable (that is, no candidate addition violates any existing class disjointness assertion), all within roughly a single month of FTE effort?
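To make these two checks concrete, here is a minimal sketch of how consistency and satisfiability can be verified over an OWL version of a knowledge graph, using the owlready2 Python library and its bundled HermiT reasoner. This is an illustration only, not the Cognonto tooling itself, and the file name is a placeholder.

```python
# Minimal sketch: consistency and satisfiability checks with owlready2.
# The file name below stands in for a hypothetical local copy of KBpedia.
from owlready2 import get_ontology, sync_reasoner, default_world

onto = get_ontology("file://kbpedia_reference_concepts.owl").load()

# sync_reasoner() runs HermiT; it raises OwlReadyInconsistentOntologyError
# if the ontology is inconsistent (i.e., contains contradictory facts).
with onto:
    sync_reasoner()

# Unsatisfiable classes are inferred equivalent to owl:Nothing, typically
# because an addition violates an existing class disjointness assertion.
unsatisfiable = list(default_world.inconsistent_classes())
print("Unsatisfiable classes:", unsatisfiable or "none")
```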
Continue reading “KBpedia Knowledge Graph 1.40: Extended Using Machine Learning”
In previous articles I have covered multiple ways to create training corpuses for unsupervised learning, and positive and negative training sets for supervised learning, using Cognonto and KBpedia. Different structures inherent to a knowledge graph like KBpedia can lead to quite different corpuses and sets, and each of them may yield different predictive powers depending on the task at hand.
So far we have covered two ways to leverage the KBpedia Knowledge Graph to automatically create positive and negative training corpuses:
- Using the links that exist between each KBpedia reference concept and its related Wikipedia page
- Using the linkages between KBpedia reference concepts and external vocabularies to create training corpuses
Now we will introduce a third way to create a different kind of training corpus:
- Using the KBpedia aspects linkages.
Aspects are aggregations of entities grouped according to shared characteristics other than their direct types. Aspects help group related entities by situation, rather than by identity or definition. They provide another way to organize the knowledge graph and to leverage it. KBpedia has about 80 aspects that provide this secondary means for placing entities into related real-world contexts; not every aspect relates to a given entity.
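As an illustration of how aspect linkages could be turned into a training corpus, here is a hedged sketch using the rdflib Python library. The aspect predicate IRI and the data file name are hypothetical placeholders, not the actual KBpedia vocabulary:

```python
# Sketch: collect all entities sharing a given aspect into one corpus.
# The predicate kbpedia:aspect-location is a made-up placeholder.
from rdflib import Graph

g = Graph()
g.parse("kbpedia_entities.n3", format="n3")  # hypothetical data file

query = """
PREFIX kbpedia: <http://kbpedia.org/kko/rc/>
SELECT ?entity WHERE {
  ?entity kbpedia:aspect-location ?aspect .
}
"""

# Entities grouped under the same aspect form one candidate training set.
for row in g.query(query):
    print(row.entity)
```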
Continue reading “Leveraging KBpedia Aspects To Generate Training Sets Automatically”
In the first part of this series we found good hyperparameters for a single linear SVM classifier. In part 2, we will try another technique to improve the performance of the system: ensemble learning.
So far, we have reached 95% accuracy by tweaking the hyperparameters and the training corpuses, but the F1 score is still around ~70% against the full gold standard, which leaves room for improvement. There are also situations where precision should be nearly perfect (because false positives are really not acceptable) or where recall should be optimized.
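As a quick illustration of how these measures can diverge, the following sketch computes accuracy, precision, recall and F1 on a made-up gold standard with scikit-learn; the labels are invented for the example:

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

# Toy gold standard and predictions; 1 = in-domain, 0 = out-of-domain.
y_true = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 0, 1, 0]

# Accuracy can look healthy while F1 lags when the errors concentrate
# on the positive class, as in the situation described above.
print("accuracy: ", accuracy_score(y_true, y_pred))   # 0.70
print("precision:", precision_score(y_true, y_pred))  # 0.75
print("recall:   ", recall_score(y_true, y_pred))     # 0.60
print("f1:       ", f1_score(y_true, y_pred))         # ~0.67
```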
Here we will try to improve this situation by using ensemble learning, which combines multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. In our examples, each model will have a vote, and the weight of the vote will be equal for each model (a minimal sketch of such a voting ensemble follows the list below). We will use five different strategies to create the models that will belong to the ensemble:
- Bootstrap aggregating (bagging)
- Asymmetric bagging
- Random subspace method (feature bagging)
- Asymmetric bagging + random subspace method (ABRS)
- Bootstrap aggregating + random subspace method (BRS)
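Before detailing each strategy, here is the minimal sketch of the equal-weight voting described above, built with scikit-learn's VotingClassifier over linear SVMs. The synthetic data is a stand-in for the KBpedia-derived corpuses used in this series:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.svm import LinearSVC

# Placeholder data standing in for a KBpedia-derived training corpus.
X, y = make_classification(n_samples=500, n_features=50, random_state=42)

# Three linear SVMs with different regularization; each gets one vote.
ensemble = VotingClassifier(
    estimators=[
        ("svm_c01", LinearSVC(C=0.1)),
        ("svm_c1", LinearSVC(C=1.0)),
        ("svm_c10", LinearSVC(C=10.0)),
    ],
    voting="hard",  # equal-weight majority vote across models
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```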
Which strategy to use depends on factors such as whether the positive and negative training documents are unbalanced, how many features the model has, and so on. Let’s introduce each of these different strategies.
Note that in this article I am only creating ensembles with linear SVM learners. An ensemble can be composed of multiple different kinds of learners, like SVMs with non-linear kernels, decision trees, etc. However, to simplify this article, we will stick to a single kind of learner, the linear SVM, with multiple different training corpuses and features.
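To make at least one of these strategies concrete, here is a rough, self-contained sketch of asymmetric bagging with linear SVMs: each model keeps all positive documents but draws its own bootstrap sample of negatives, which helps when the two classes are unbalanced. The synthetic data is purely a placeholder:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(42)
X_pos = rng.normal(1.0, 1.0, size=(50, 20))    # placeholder positives
X_neg = rng.normal(-1.0, 1.0, size=(500, 20))  # placeholder negatives

models = []
for _ in range(11):  # an odd number of models avoids tied votes
    # Asymmetric bagging: resample only the (larger) negative set,
    # with replacement, down to the size of the positive set.
    idx = rng.choice(len(X_neg), size=len(X_pos), replace=True)
    X = np.vstack([X_pos, X_neg[idx]])
    y = np.array([1] * len(X_pos) + [0] * len(X_pos))
    models.append(LinearSVC(C=1.0).fit(X, y))

def predict(X_new):
    # Equal-weight majority vote over the ensemble's predictions.
    votes = np.array([m.predict(X_new) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)

print(predict(rng.normal(0.0, 1.0, size=(5, 20))))
```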
Continue reading “Dynamic Machine Learning Using the KBpedia Knowledge Graph – Part 2”
In my previous blog post, Create a Domain Text Classifier Using Cognonto, I explained how one can use the KBpedia Knowledge Graph to automatically create positive and negative training corpuses for different machine learning tasks. I also showed how SVM classifiers could be trained and used to check whether an input text belongs to the defined domain or not.
This article is the first of two. In this first part I will extend on this idea to explain how the KBpedia Knowledge Graph can be used, along with other machine learning techniques, to cope with different situations and use cases. I will cover the concepts of feature selection and hyperparameter optimization here, and ensemble learning in part 2 of this series. The emphasis is on the testing and refining of machine learners, versus the set-up and configuration times that dominate other approaches.
Depending on the domain of interest and on the required recall, different strategies and techniques can lead to better predictions. More often than not, multiple different training corpuses, learners and hyperparameters need to be tested to end up with the best possible initial prediction model. This is why I strongly emphasize that the KBpedia Knowledge Graph and Cognonto can be used to fully automate the creation of a wide range of different training corpuses, to create models, to optimize their hyperparameters, and to evaluate those models.
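To give a flavor of what that automation can look like, here is a hedged sketch of a TF-IDF plus linear SVM pipeline whose hyperparameters are tuned by cross-validated grid search with scikit-learn. The corpus variables are placeholders for documents generated from KBpedia:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

texts = ["..."]   # placeholder: documents from a KBpedia-derived corpus
labels = [1]      # placeholder: 1 = in-domain, 0 = out-of-domain

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("svm", LinearSVC()),
])

param_grid = {
    "tfidf__ngram_range": [(1, 1), (1, 2)],
    "svm__C": [0.1, 1.0, 10.0],
}

# scoring="f1" optimizes F1; swap in "precision" or "recall" when false
# positives (or misses, respectively) are the costlier error.
search = GridSearchCV(pipeline, param_grid, scoring="f1", cv=5)
# search.fit(texts, labels)   # run against a real corpus
# print(search.best_params_, search.best_score_)
```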
Continue reading “Dynamic Machine Learning Using the KBpedia Knowledge Graph – Part 1”