Grounded language models represent the relationship between words and the non-linguistic context in which they are said. This paper describes how they are learned from large corpo...
We revisit 26 meta-features typically used in the context of meta-learning for model selection. Using visual analysis and computational complexity considerations, we find 4 meta-f...
This paper describes the interaction among language resources for adequate concept annotation of domain texts in several languages. The architecture includes domain ontology, d...
Situated models of meaning ground words in the non-linguistic context, or situation, to which they refer. Applying such models to sports video retrieval requires learning appropri...
The use of background knowledge and the adoption of Horn clausal logic as a knowledge representation and reasoning framework are the distinguishing features of Inductive Logic Prog...