Exploiting Information Extraction, Reasoning and Machine Learning for Relation Prediction

Jul 4, 2012

The three most common approaches for deriving or predicting instantiated relations, i.e. triple statements (s, p, o), are information extraction, logical reasoning, and relational machine learning. Information extraction works on sensory information, typically in the form of text, and extracts statements using methods ranging from simple classifiers to sophisticated NLP pipelines. Logical reasoning starts from a set of statements known to be true and derives new statements via inference with higher-order logical axioms. Finally, machine learning exploits regularities in the data to predict the likelihood of new statements. In this paper we combine all three methods to exploit every available source of information in a modular way, meaning that each component, i.e. information extraction, reasoning, and machine learning, can be optimized independently before being combined into an overall system. For relational machine learning, we present a novel approach based on hierarchical Bayesian multi-label learning, which also sheds new light on common factorization approaches. We rank statements by their probability of being true in the sense of: given that we are forced to make a decision, which option is best. We take into account that an entity can belong to more than one ontological class, and we discuss aggregation. We extend the approach to model nonlinear dependencies between relationships and to support personalization. We validate our model using data from the Yago and DBpedia ontologies.
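To make the relational machine learning idea concrete, here is a minimal, hypothetical sketch (not the authors' hierarchical Bayesian model) of a common factorization approach of the kind the abstract alludes to: each candidate triple (s, p, o) is scored by a bilinear form e_s^T R_p e_o over learned entity embeddings and per-relation matrices, trained with a logistic loss. The toy knowledge graph, entity names, dimensions, and learning rate are all invented for illustration.

```python
import numpy as np

# Toy knowledge graph: entity and relation vocabularies (hypothetical data).
entities = ["Berlin", "Germany", "Paris", "France"]
relations = ["locatedIn"]
e_idx = {e: i for i, e in enumerate(entities)}
r_idx = {r: i for i, r in enumerate(relations)}

# Observed true triples (s, p, o) and sampled negative examples.
positives = [("Berlin", "locatedIn", "Germany"), ("Paris", "locatedIn", "France")]
negatives = [("Berlin", "locatedIn", "France"), ("Paris", "locatedIn", "Germany")]

rng = np.random.default_rng(0)
d = 4                                                    # embedding dimension
E = rng.normal(scale=0.1, size=(len(entities), d))       # entity embeddings
R = rng.normal(scale=0.1, size=(len(relations), d, d))   # one matrix per relation

def score(s, p, o):
    """Bilinear score e_s^T R_p e_o, squashed to a probability via the sigmoid."""
    raw = E[e_idx[s]] @ R[r_idx[p]] @ E[e_idx[o]]
    return 1.0 / (1.0 + np.exp(-raw))

def train(steps=500, lr=0.5):
    """Plain gradient descent on the logistic loss over all labeled triples."""
    for _ in range(steps):
        for triples, y in ((positives, 1.0), (negatives, 0.0)):
            for s, p, o in triples:
                si, pi, oi = e_idx[s], r_idx[p], e_idx[o]
                g = score(s, p, o) - y      # d(loss)/d(raw score) for logistic loss
                es, eo = E[si].copy(), E[oi].copy()
                E[si] -= lr * g * (R[pi] @ eo)        # d(raw)/d(e_s) = R_p e_o
                E[oi] -= lr * g * (R[pi].T @ es)      # d(raw)/d(e_o) = R_p^T e_s
                R[pi] -= lr * g * np.outer(es, eo)    # d(raw)/d(R_p) = e_s e_o^T

train()
```

After training, true triples should receive higher scores than the sampled negatives, which is exactly the ranking-under-forced-decision view taken in the abstract: when forced to choose, pick the candidate statement with the highest predicted probability.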

Except where otherwise noted, content on this site is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International license.