Structured Reasoning with Language
The Semantic Web was born from the need to reason over informal content such as Web text, based on a fundamental premise: computers cannot directly process most of the information stored on Web pages. That vision has led to major advances in knowledge representation, knowledge graphs, reasoning at scale, and more. Today, though, language models (LMs) have upended the original premise: they show surprising skill at processing text directly, offering new opportunities for knowledge management and the Semantic Web vision. In this talk, I’ll illustrate three such opportunities we have been exploring: structured reasoning directly over natural language (NL) statements (NL inference); using LMs as tools for building formal and semi-formal world models; and using NL communication between agents to build robust multi-agent services. Finally, I’ll speculate on what the future information world might look like with our new LM companions at our side.

Bio: Peter Clark is a Senior Research Director and founding member of the Allen Institute for AI (AI2). He leads AI2’s Aristo Project, which aims to build the next generation of systems that can systematically reason, explain, and continually improve over time. He received his Ph.D. in 1991, has published over 250 papers, and has received several awards, including four Best Paper awards (AAAI, EMNLP, AKBC), a Boeing Associate Technical Fellowship (2004), and AAAI Senior Member status.