Keynote Speakers

Marieke van Erp – KNAW Humanities Cluster

Unflattening Knowledge Graphs

Investigating complex entities and concepts is at the core of humanities research. The concept ‘coffee’, for example, can refer to the plant that yields coffee seeds, the beverage, and the activity of drinking that beverage. Moreover, it has a long history that is deeply connected to colonialism and status. All of these notions are of interest to humanities scholars, as they are an integral part of national identities, have changed dramatically over time, and connect to many different narratives with differing opinions on them. This complexity is not captured in current knowledge graphs, yet Semantic Web technology has become such an important tool that addressing it is essential. In this talk, Marieke will present recent work on modelling complex concepts, dealing with contentiousness, and the most pressing challenges in unflattening knowledge graphs, for humanities research and other domains.

Bio: Marieke van Erp is a Language Technology and Semantic Web expert engaged in interdisciplinary research. She holds a PhD in computational linguistics from Tilburg University and has worked on many (inter)national interdisciplinary projects. Since 2017, she has been leading the Digital Humanities Research Lab at the Royal Netherlands Academy of Arts and Sciences Humanities Cluster. She is one of the founders and scientific directors of the Cultural AI Lab, a collaboration between 8 research and cultural heritage institutions in the Netherlands aimed at the study, design and development of socio-technological AI systems that are aware of the subtle and subjective complexity of human culture. In January 2023, she was awarded an ERC Consolidator project that will investigate how language and semantic web technologies can improve the creation of knowledge graphs supporting humanities research.


Efthymia Tsamoura – Samsung AI

Reasoning at Scale: Why, How and What’s Next

Can we reason over knowledge graphs consisting of billions of facts using commodity hardware? And what if the underlying facts are true only with some probability? While the above questions originally concerned the data management community, the efforts to incorporate symbolic knowledge into deep models, as well as the rise of Web-scale probabilistic knowledge bases, sent a strong message: we need reasoning techniques that, firstly, can scale and, secondly, can support uncertainty under well-established semantics. 

In this talk, I will present reasoning techniques that allow us to answer both questions positively: in the absence of uncertainty, they can materialize knowledge graphs with 17 billion facts in less than 40 minutes on a single commodity machine; when the underlying facts are uncertain, e.g., when they correspond to a deep classifier’s predictions, they support complex neuro-symbolic scenarios, making the difference between answering queries in seconds and not answering them at all, even under approximations.

Bio: Efi Tsamoura is a Senior Researcher at Samsung AI, Cambridge, UK. In 2016 she was awarded an early career fellowship from the Alan Turing Institute, UK, and before that, she was a Postdoctoral Researcher in the Department of Computer Science at the University of Oxford. Her main research interests lie in the areas of logic, knowledge representation and reasoning, and neuro-symbolic integration. Her research has been published in top-tier AI and database venues (SIGMOD, VLDB, PODS, AAAI, IJCAI, etc.). Efi started the Samsung AI neuro-symbolic workshop series “When deep learning meets logic”.


Alexander Gray – IBM Research

Reasoning with Realistically Imperfect Knowledge

I will describe new approaches to reasoning that are motivated by the practical limitations of real-world knowledge graphs, i.e., graphs with multiple aspects of imperfection. These include: uncertainty (both subjective and probabilistic), ignorance (unknown uncertainty), partiality (missing variables altogether), and contradiction (inconsistency between pieces of knowledge and/or raw data). These aspects are difficult to address simultaneously within a single paradigm. I will also discuss the efficiency of learning and inference, and correctness properties.

Bio: Alexander Gray leads the Foundations of AI team and a global research program in Neuro-Symbolic AI at IBM. His current interests are in neuro-symbolic AI and automated data science. He received AB degrees in Applied Mathematics and Computer Science from UC Berkeley and a PhD in Computer Science from Carnegie Mellon University. Before IBM he worked at NASA, served as a tenured Associate Professor at the Georgia Institute of Technology, and co-founded an AI startup in Silicon Valley. His work on machine learning, statistics, and algorithms for massive datasets, using ideas from computational geometry and computational physics and predating the “big data” movement in industry, has been recognized with multiple best paper awards, the NSF CAREER Award, selection as a National Academy of Sciences Kavli Scholar, and service as a member of the 2010 National Academy of Sciences Committee on the Analysis of Massive Data. His interests have generally revolved around exploring new or underdeveloped connections between diverse fields with the potential of breaking through long-standing bottlenecks of ML/AI.
