Cognonto, Artificial Intelligence, Semantic Web

KBpedia Knowledge Graph 1.40: Extended Using Machine Learning

I am proud to announce the immediate release of the KBpedia Knowledge Graph version 1.40. This new version of the knowledge graph includes 53,739 concepts, 14,687 more than the previous version. It also includes 251,848 new alternative labels for 20,538 concepts that already existed in version 1.20, and 542 new definitions.

This new version of KBpedia will have an impact on several knowledge graph related tasks, such as concept and entity tagging, and on most of the existing Cognonto use cases. I will be discussing these updates and their effects on the use cases in a forthcoming series of blog posts.

But the key topic of this blog post is this: How have we been able to increase the coverage of the KBpedia Knowledge Graph by 37.6% while keeping it consistent (that is, there are no contradictory facts) and satisfiable (that is, no candidate addition violates any existing class disjointness assertion), all within roughly a single month of FTE effort?
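To give a flavor of what the satisfiability check involves, here is a minimal sketch in Python of testing whether a candidate addition would end up under two classes that are asserted disjoint. The tiny taxonomy and class names are hypothetical; this only illustrates the principle, not the actual KBpedia tooling.

```python
# Minimal sketch of a satisfiability check against class disjointness assertions.
# The toy taxonomy and class names below are hypothetical, for illustration only.

from itertools import combinations

# class -> direct super-classes (a toy fragment of a knowledge graph)
superclasses = {
    "Mammal": {"Animal"},
    "Dog": {"Mammal"},
    "Artifact": {"Object"},
}

# pairs of classes asserted to be disjoint
disjoint_pairs = {frozenset({"Animal", "Artifact"})}

def ancestors(cls, graph):
    """Return all (transitive) super-classes of cls, including cls itself."""
    seen, stack = set(), [cls]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(graph.get(c, ()))
    return seen

def is_satisfiable(new_cls, parents, graph, disjoint):
    """Check that adding new_cls under the given parents does not place it
    under two classes that are asserted disjoint."""
    graph = dict(graph, **{new_cls: set(parents)})
    ancs = ancestors(new_cls, graph)
    return not any(frozenset(pair) in disjoint for pair in combinations(ancs, 2))

# A candidate placed under both Dog and Artifact would violate the
# Animal/Artifact disjointness assertion:
print(is_satisfiable("RoboDog", {"Dog", "Artifact"}, superclasses, disjoint_pairs))  # False
```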

Continue reading

Cognonto, Artificial Intelligence

Measuring the Influence of Expanded Knowledge Graphs on Machine Learning

Mike Bergman and I will release a new version 1.40 of the KBpedia Knowledge Graph in the coming month. This new version of the knowledge graph will include roughly 15,000 new concepts, 150,000 new alternative labels, and 5,000 new definitions for existing KBpedia reference concepts. This new release will substantially increase the size of the current KBpedia Knowledge Graph.

This extension is based on a new methodology that we began to cover in the Extending KBpedia With Wikipedia Categories Cognonto use case. The extension uses graph embeddings for each KBpedia reference concept and its linkage to the Wikipedia category structure to pre-select the Wikipedia categories that are most likely to be good candidates to fill current gaps in the KBpedia graph structure. The new reference concept candidates scored through this automated process were then manually reviewed for selection. These selections were then validated by re-generating the KBpedia Knowledge Graph with the KBpedia Generator, which includes routines for identifying, reporting and fixing consistency and coherency issues. Problematic assignments are either dropped or fixed. These steps reflect the general process Cognonto follows in mapping and incorporating new schema and ontologies.
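As a rough illustration of the scoring step, the sketch below trains an SVM over pre-computed graph embedding vectors and ranks unlinked Wikipedia categories by their predicted probability of being good candidates. The file names, array shapes and cut-offs are hypothetical; the sketch shows the shape of the process rather than the actual pipeline.

```python
# Minimal sketch: train an SVM over pre-computed graph embedding vectors to
# score Wikipedia categories as candidate KBpedia reference concepts.
# The file names, labels and the top-1,000 cut-off are hypothetical.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# X: one graph embedding vector per Wikipedia category already linked to KBpedia
# y: 1 if the category was manually judged a good reference-concept candidate, else 0
X = np.load("category_embeddings.npy")   # shape (n_categories, embedding_dim)
y = np.load("category_labels.npy")       # shape (n_categories,)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = SVC(kernel="rbf", probability=True)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# Score unseen categories and keep the most promising ones for human review
candidates = np.load("unlinked_category_embeddings.npy")
scores = clf.predict_proba(candidates)[:, 1]
keep = np.argsort(scores)[::-1][:1000]    # top 1,000 candidates by predicted score
```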

In the coming month or two, I will write a series of blog posts analyzing the impact of different versions of KBpedia on the machine learning models we have previously created for the Cognonto use cases. All of the current use cases have been created using version 1.20 of KBpedia. We are about to finalize an intermediate version 1.30 (for internal analysis only). We are separately identifying thousands of reference concepts that will be temporarily removed, since they are more properly characterized as 'aspects' and not true sub-classes. This removal will then allow us to define a third variant for machine learning comparisons. Some of these 'aspects' will be re-introduced into the graph where proper parent-child relationships can be established. The next public release of KBpedia, tentatively identified as version 1.40, will include all of these updates.

Each of these three variants (versions 1.20, 1.30 and 1.40) will enable us to analyze and report on the influence that different versions of the KBpedia Knowledge Graph can have on different machine learning tasks. The following tasks will be covered:

  1. Creating graph embeddings to disambiguate tagged concepts
  2. Creating domain specific training corpuses to train word embeddings
  3. Creating domain specific training sets to classify text, and
  4. Checking relatedness between Knowledge Graph concepts and Wikipedia categories based on their graph embeddings (see the sketch below).
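As a quick illustration of the fourth task, relatedness between a concept and a category can be measured as the cosine similarity of their graph embedding vectors. The embedding values below are made up; in practice they would come from the trained model.

```python
# Sketch: relatedness between a KBpedia concept and a Wikipedia category as the
# cosine similarity of their graph embeddings. The vectors are hypothetical
# stand-ins for the output of the trained embedding model.

import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

embeddings = {
    "kbpedia:Musician":         np.array([0.12, 0.80, 0.35]),
    "wiki:Category:Guitarists": np.array([0.10, 0.75, 0.40]),
    "wiki:Category:Volcanoes":  np.array([0.90, 0.05, 0.02]),
}

print(cosine(embeddings["kbpedia:Musician"], embeddings["wiki:Category:Guitarists"]))  # high
print(cosine(embeddings["kbpedia:Musician"], embeddings["wiki:Category:Volcanoes"]))   # low
```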

Our goal at Cognonto is to make the power of knowledge-based artificial intelligence (KBAI) available to any organization. Whether it is to help populate search or tagging indexes, to perform semantic query expansion, or to support a broad range of machine learning tasks, knowledge graphs plus KBAI provide a nearly automated way of doing so. Our research and expertise are geared toward creating, linking, extending, and leveraging knowledge graphs and knowledge bases to empower new and existing systems. We will continue to report in specific detail how and with what impact knowledge graphs and knowledge bases lead to better machine learning results.

Cognonto, Artificial Intelligence, Clojure

Extending KBpedia With Wikipedia Categories

A knowledge graph is an ever-evolving structure. It needs to be extended to cope with new kinds of knowledge; it needs to be fixed and improved in all kinds of different ways. It also needs to be linked to other sources of data and to other knowledge representations such as schemas, ontologies and vocabularies. One of the core tasks related to a knowledge graph is to extend its scope. This idea seems simple enough, but how can we extend a general knowledge graph that already has nearly 40,000 concepts with potentially thousands more? How can we do this while keeping it consistent, coherent and meaningful? How can we do this without spending undue effort on the task? These are the questions we try to answer with the methods we cover in this article.

In this article we present methods for extending Cognonto's KBpedia Knowledge Graph using an external source of knowledge, one that has a completely different structure than KBpedia and that was built in a completely different way, with a different purpose in mind. In this use case, the external resource is the Wikipedia category structure. We show how we can automatically select the Wikipedia categories most likely to lead to new KBpedia concepts. These selections are made using an SVM classifier trained over graph embedding vectors generated by a DeepWalk model based on the KBpedia Knowledge Graph structure linked to the Wikipedia categories. Once appropriate candidate categories are selected using this model, the results are inspected by a human who makes the final selection decisions. This semi-automated process takes about 5% of the time it would take to conduct the same task by comparable manual means.
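For readers who want a concrete picture of the embedding step, here is a minimal DeepWalk-style sketch using networkx and gensim: truncated random walks over the graph are fed to a skip-gram word2vec model, which yields one vector per node. The toy edges and all parameters are illustrative only and do not reflect the actual KBpedia configuration.

```python
# Minimal DeepWalk-style sketch: random walks over the knowledge graph structure
# (KBpedia concepts plus linked Wikipedia categories), fed to word2vec to learn
# one embedding vector per node. Toy edges and parameters, for illustration only.

import random
import networkx as nx
from gensim.models import Word2Vec

g = nx.Graph()
g.add_edges_from([
    ("kbpedia:Musician", "kbpedia:Person"),
    ("kbpedia:Musician", "wiki:Category:Guitarists"),
    ("kbpedia:Person",   "kbpedia:Animal"),
])

def random_walk(graph, start, length=10):
    """One truncated random walk, returned as a list of node names."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return walk

# A corpus of walks: every node is the start of several walks
walks = [random_walk(g, node) for node in g.nodes() for _ in range(20)]

# Skip-gram word2vec over the walks gives the graph embeddings
model = Word2Vec(sentences=walks, vector_size=64, window=5, min_count=0, sg=1, workers=2)
vector = model.wv["kbpedia:Musician"]   # embedding later used as classifier input
```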

Continue reading

Cognonto, Artificial Intelligence, Semantic Web

Create a Domain Text Classifier Using Cognonto

A common task required by systems that automatically analyze text is to classify an input text into one or more classes. A model needs to be created to scope each class (what belongs to it and what does not), and a classification algorithm then uses this model to classify input texts.

Multiple classification algorithms exist to perform such a task: Support Vector Machines (SVM), K-Nearest Neighbours (KNN), C4.5 and others. What is hard with any such text classification task is not so much how to use these algorithms: they are generally easy to configure and use once implemented in a programming language. The hard – and time-consuming – part is to create a sound training corpus that properly defines the class you want to predict. Further, the steps required to create such a training corpus must be duplicated for each class you want to predict.

Since creating the training corpus is what is time consuming, this is where Cognonto provides its advantages.

In this article, we will show you how Cognonto’s KBpedia Knowledge Graph can be used to automatically generate the training corpuses used to build classification models. First, we define (scope) a domain with one or multiple KBpedia reference concepts. Second, we aggregate the training corpus for that domain using the KBpedia Knowledge Graph and its linkages to external public datasets. Third, we use the Explicit Semantic Analysis (ESA) algorithm to create a vectorial representation of the training corpus. Fourth, we create a model using (in this use case) an SVM classifier. Finally, we predict whether an input text belongs to the class (scoped domain) or not.
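The sketch below shows the overall shape of such a pipeline. To keep it self-contained, it substitutes plain TF-IDF vectors for the ESA semantic interpreter and uses two tiny hard-coded document lists in place of a corpus aggregated from KBpedia linkages; only the structure of the workflow is meant to carry over.

```python
# Simplified sketch of the pipeline: a domain training corpus (here two tiny
# hard-coded lists) is vectorized and used to train an SVM that predicts whether
# an input text belongs to the domain. The real use case builds the corpus from
# KBpedia linkages and uses ESA vectors; plain TF-IDF stands in for ESA here.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

domain_docs = [
    "electric guitars amplifiers and rock bands",
    "drummers touring with a concert orchestra",
]
non_domain_docs = [
    "volcanic eruptions and tectonic plates",
    "quarterly earnings reported by the bank",
]

texts = domain_docs + non_domain_docs
labels = [1] * len(domain_docs) + [0] * len(non_domain_docs)

classifier = make_pipeline(TfidfVectorizer(), LinearSVC())
classifier.fit(texts, labels)

print(classifier.predict(["touring rock bands and electric guitars"]))  # expected: [1]
```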

This use case can be used in any workflow that needs to pre-process any set of input texts where the objective is to classify relevant ones into a defined domain.

Unlike more traditional topic taggers, which tag topics that appear in an input text and assign a weight to each of them, we will see how ESA’s semantic interpreter can tag the main concepts related to an input text even when the surface form of the topic is not mentioned in the text.

Continue reading

Cognonto, Artificial Intelligence, Semantic Web

Mapping Datasets, Schema and Ontologies Using the Cognonto Mapper

There are many situations where we want to link named entities from two different datasets or to find duplicate entities to remove within a single dataset. The same is true for vocabulary terms or ontology classes that we want to integrate and map together. Sometimes we want to use such a linkage system to help save time when creating gold standards for named entity recognition tasks.

There exist multiple data linkage & deduplication frameworks developed in several different programming languages. At Cognonto, we have our own system called the Cognonto Mapper.

Most mapping frameworks work more or less the same way. They use one or two datasets as sources of entities (or classes or vocabulary terms) to compare. The datasets can be managed by a conventional relational database management system, a triple store, a spreadsheet, etc. Then they have complex configuration options that let the user define all kinds of comparators that will try to match the values of different properties that describe the entities in each dataset. (Comparator types may be simple string comparisons, the added use of alternative labels or definitions, attribute values, or various structural relationships and linkages within the dataset.) Then the comparison is made for all the entities (or classes or vocabulary terms) existing in each dataset. Finally, an entity similarity score is calculated, with some threshold conditions used to signal whether the two entities (or classes or vocabulary terms) are the same or not.
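The sketch below illustrates this general pattern with a simple label and definition comparator and a score threshold. The property names, weights and threshold value are illustrative and are not Cognonto Mapper configuration.

```python
# Sketch of the general pattern: compare two entity records with a few property
# comparators, combine the scores, and apply a threshold. Property names, weights
# and the 0.85 threshold are illustrative only.

from difflib import SequenceMatcher

def string_sim(a, b):
    """Normalized similarity between two strings (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def label_sim(e1, e2):
    """Best similarity over preferred and alternative labels."""
    labels1 = [e1["prefLabel"]] + e1.get("altLabels", [])
    labels2 = [e2["prefLabel"]] + e2.get("altLabels", [])
    return max(string_sim(a, b) for a in labels1 for b in labels2)

def match_score(e1, e2, weights={"label": 0.7, "definition": 0.3}):
    """Weighted combination of the individual comparator scores."""
    return (weights["label"] * label_sim(e1, e2)
            + weights["definition"] * string_sim(e1.get("definition", ""),
                                                 e2.get("definition", "")))

e1 = {"prefLabel": "Paris", "altLabels": ["City of Light"], "definition": "Capital of France"}
e2 = {"prefLabel": "Paris", "definition": "County seat of Lamar County, Texas"}

score = match_score(e1, e2)
print(score, "same entity" if score >= 0.85 else "different or needs review")
```

The two "Paris" records above show why label similarity alone is rarely enough: the names match perfectly even though the entities are different things. This is exactly the kind of ambiguity the SuperType Comparator described next is designed to resolve.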

The Cognonto Mapper works in this same general way. However, as you may suspect, it has a special trick in its toolbox: the SuperType Comparator. The SuperType Comparator leverages the KBpedia Knowledge Ontology to help disambiguate two given entities (or classes or vocabulary terms) based on the analysis of their types in that ontology. When we perform a deduplication or linkage task between two large datasets of entities, it is often the case that two entities will be considered a nearly perfect match based on properties like names and alternative names, even if they are two completely different things. This happens because entities are often ambiguous when only these basic properties are considered. The SuperType Comparator’s role is to disambiguate the entities based on their type(s) by leveraging the disjointness of the SuperType structure that governs the overall KBpedia structure. The SuperType Comparator greatly reduces the time needed to curate deduplication or linkage tasks in order to determine the final mappings.
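To illustrate the idea behind the SuperType Comparator, the sketch below zeroes out a match score whenever the two entities carry SuperTypes that are asserted disjoint. The SuperType names and the disjointness table are illustrative, not the actual KBpedia Knowledge Ontology assertions.

```python
# Sketch of the SuperType idea: even when labels match perfectly, a pair whose
# SuperTypes are asserted disjoint is rejected (or heavily penalized).
# The SuperType names and the disjointness table below are illustrative only.

disjoint_supertypes = {frozenset({"Places", "LivingThings"}),
                       frozenset({"Places", "Products"})}

def supertypes_compatible(types1, types2, disjoint=disjoint_supertypes):
    """False if any SuperType of one entity is disjoint with one of the other."""
    return not any(frozenset({t1, t2}) in disjoint for t1 in types1 for t2 in types2)

def adjusted_score(base_score, types1, types2):
    """Drop the similarity score to zero when the SuperTypes are incompatible."""
    return base_score if supertypes_compatible(types1, types2) else 0.0

# "Paris" the city vs. "Paris" the fragrance product: identical labels,
# but disjoint SuperTypes, so the pair is not proposed as a match.
print(adjusted_score(0.95, {"Places"}, {"Products"}))   # 0.0
print(adjusted_score(0.95, {"Places"}, {"Places"}))     # 0.95
```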

We first present a series of use cases for the Mapper below, followed by an explanation of how the Cognonto Mapper works, and then some conclusions.

Continue reading