ESSAI 2025 Course Catalog

AI for Autonomous Robots: Bridging Theory and Practice

The Royal Melbourne Institute of Technology

Course type: Introductory

About the course

Designing AI algorithms for real-world use on autonomous robots presents unique challenges that extend beyond conventional AI development. These include meeting real-time operational demands within the constraints of limited onboard computational resources, and effectively handling the uncertainty and noise inherent in robotic sensors and actuators.
Additionally, AI techniques must seamlessly integrate into robot software architectures that bridge multiple levels of abstraction, from low-level hardware to high-level reasoning. This foundational course provides a comprehensive introduction to the practical design of AI algorithms for autonomous robots.
Participants will be introduced to fundamental concepts required to interface with robot hardware, such as kinematics, and will then explore a spectrum of AI algorithms adapted for autonomous robots, including localisation, mapping, robot vision, reinforcement learning, and task planning. This course combines theory with practical experiments on robot platforms, offering participants a holistic perspective on the unique challenges of AI-driven robotics.
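To give a flavour of the hardware-facing concepts mentioned above, the sketch below (not part of the course materials) computes the forward kinematics of a hypothetical planar two-link arm, i.e., the end-effector position implied by the joint angles.

    import numpy as np

    def forward_kinematics(theta1, theta2, l1=1.0, l2=0.7):
        """End-effector (x, y) of a planar two-link arm; angles in radians, link lengths in metres."""
        x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
        y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
        return x, y

    print(forward_kinematics(np.pi / 4, np.pi / 6))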

Automated Planning in the Continuous World

Toronto Metropolitan University

Course type: Introductory

About the course

Automated AI Planning is an exciting area of research. It is particularly interesting to develop automated planners that can solve optimization problems for mixed discrete-continuous systems, since these systems have many potential real-world applications. It is also interesting to develop automated planners for domains with unknown objects, since in real-world systems not all objects are given in advance, and some objects can be created or destroyed by the planner. To accomplish this, the planner has to be lifted, in the sense that it works with action schemas and instantiates actions with object names at run time rather than in advance.
Deductive planning, i.e., planning based on deductive reasoning, is a promising approach to lifted planning, since it supports planning when there are several infinite logical models of the planning domain. This is important in applications where actions have parameters that vary over infinite domains. For this deductive approach to be computationally feasible, it must be properly controlled. It is a surprising and mathematically non-trivial fact that this control can be achieved in a domain-independent way for a large class of planning domains with non-linear continuous processes.
I propose to teach a series of five lectures that will outline a recently developed approach to heuristic automated planning for mixed discrete-continuous (hybrid) systems. Following the Constraint Logic Programming framework, it delegates the solution of the optimization problem with respect to numerical and temporal constraints to an external numerical optimization solver.
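As a rough illustration of the idea of delegating numerical and temporal constraints to an external solver (a toy sketch only, not the planner presented in the lectures), the snippet below hands a simple scheduling subproblem, minimising the makespan of two ordered actions with hypothetical durations, to an off-the-shelf linear-programming solver.

    import numpy as np
    from scipy.optimize import linprog

    d1, d2 = 3.0, 2.5                    # hypothetical action durations
    # Variables: start times t1, t2 and makespan m; minimise m subject to
    #   t2 >= t1 + d1,  m >= t1 + d1,  m >= t2 + d2,  all variables >= 0.
    c = np.array([0.0, 0.0, 1.0])        # objective: minimise m
    A_ub = np.array([[1.0, -1.0,  0.0],  # t1 - t2 <= -d1
                     [1.0,  0.0, -1.0],  # t1 - m  <= -d1
                     [0.0,  1.0, -1.0]]) # t2 - m  <= -d2
    b_ub = np.array([-d1, -d1, -d2])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
    print(res.x)                         # e.g. t1 = 0.0, t2 = 3.0, makespan = 5.5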

Back to the Future of Cognition and Control in Robotics: Robustness, Rationality, and Explainable Agency

The University of Edinburgh

Course type: Advanced

About the course

This advanced course seeks to bring participants to the state of the art in integrated systems that sense and interact with the physical world using knowledge-based and data-driven methods for reasoning, control, collaboration, and learning. In particular, we will explore systems that support:

  • Reasoning with prior commonsense domain knowledge and learned models based on non-monotonic logics and probability theory.
  • Incremental and interactive learning from multimodal sensor cues using machine learning methods and foundation models.
  • Robust and efficient decision-making based on decision heuristics that prioritize adaptive satisficing over optimization.
  • Methodology to establish that the system’s behavior satisfies desired properties, and to provide on-demand relational descriptions as explanations in response to different types of questions.

We will use practical examples drawn from robotics, computer vision, and multiagent systems to ground these concepts and discuss how the interplay between representation, reasoning, control, and learning can help address the underlying fundamental challenges.

Bridging Adversarial Learning and Data-Centric AI for Robust AI

University of Bari

University of Bari

Course type: Introductory

About the course

Ubiquitous AI systems have prompted the need for robust AI models that can preserve their accuracy in adversarial environments. The complexity of model development and the opacity of models have increased the vulnerability of AI models to adversarial attacks. Adversarial Learning (AL) focuses on identifying, explaining and mitigating vulnerabilities of AI models by exploring both offensive and defensive approaches.
On the other hand, the emerging Data-Centric AI (DCAI) paradigm lays the basis for enhancing the quality of data in order to develop accurate AI models that are robust to out-of-distribution data, such as samples produced by adversaries.
This course will explore the connection between AL and DCAI. After introducing the key concepts and challenges in both the AL and DCAI literature, the course will focus on DCAI strategies to enhance data quality and learn robust models, and will introduce defensive strategies to improve the resilience of the training pipeline.

Data Driven Approaches in (Multi-objective) Bayesian Optimisation

University of Exeter

Course type: Introductory

About the course

The course ‘Data Driven Approaches in (Multi-objective) Bayesian Optimisation’ focuses on solving complex optimisation problems using the latest data-driven and AI techniques. Participants will learn about probabilistic machine learning, specifically Gaussian processes, and their application in Bayesian optimisation. The course will emphasise methods for solving problems with multiple conflicting objectives, using real-world examples to show how these advanced techniques work in practice.
Students will gain a deep understanding of modern data-driven optimisation methodologies and will learn how to make efficient decisions when solving problems with conflicting objectives.
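A minimal sketch of the Bayesian optimisation loop described above, assuming only NumPy and SciPy: a Gaussian-process surrogate with an RBF kernel and the expected-improvement acquisition function, applied to a hypothetical one-dimensional objective (not an example from the course).

    import numpy as np
    from scipy.stats import norm

    def rbf_kernel(A, B, length_scale=0.2):
        # A: (n, 1), B: (m, 1) -> (n, m) squared-exponential covariances
        return np.exp(-0.5 * (A - B.T) ** 2 / length_scale ** 2)

    def gp_posterior(X, y, X_star, noise=1e-6):
        """Zero-mean GP posterior mean and standard deviation at candidate points X_star."""
        K = rbf_kernel(X, X) + noise * np.eye(len(X))
        K_s = rbf_kernel(X, X_star)
        K_inv = np.linalg.inv(K)
        mu = K_s.T @ K_inv @ y
        var = np.diag(rbf_kernel(X_star, X_star) - K_s.T @ K_inv @ K_s)
        return mu, np.sqrt(np.maximum(var, 1e-12))

    def expected_improvement(mu, sigma, best_y):
        # EI for minimisation: E[max(best_y - f, 0)] under the GP posterior
        z = (best_y - mu) / sigma
        return (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    f = lambda x: np.sin(3 * x) + x ** 2          # hypothetical objective to minimise
    X = np.array([[-0.5], [0.5]])                 # initial design points
    y = f(X[:, 0])
    candidates = np.linspace(-1.0, 1.0, 200)[:, None]
    for _ in range(10):                           # the Bayesian optimisation loop
        mu, sigma = gp_posterior(X, y, candidates)
        x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next[0]))
    print("best x:", X[np.argmin(y), 0], "best f:", y.min())

Each iteration fits the surrogate to the data gathered so far and queries the objective where expected improvement is highest, which is the essence of the data-driven decision making the course builds on.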

Explainable AI via Argumentation: Theory & Practice

University of Cyprus

Hellenic Mediterranean University

Course type: Introductory

About the course

Explanations play a central role in AI, whether in providing some form of transparency for black-box machine learning systems or, more generally, in supporting the results of an AI system so that users can understand, accept and trust its operation.
The course will present how Argumentation can serve as a basis for Explainable AI (XAI) and how this can be applied to Decision Making and Machine Learning for AI applications. It will present the role and basic quality requirements of explanations of AI systems and how these can be met in argumentation-based systems. It will cover the necessary theory of argumentation, a software methodology for argumentation-based explainable systems, and the use of practical tools in argumentation for realizing such systems. Students will gain hands-on experience with these tools and with the development of a realistic XAI decision-making system.
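To make the argumentation machinery concrete, here is a small self-contained sketch (a hypothetical example, independent of the course's tools) that computes the grounded extension of a tiny abstract argumentation framework by iterating its characteristic function.

    arguments = {"a", "b", "c"}
    attacks = {("a", "b"), ("b", "c")}   # a attacks b, b attacks c

    def defended(arg, S):
        """arg is acceptable w.r.t. S if every attacker of arg is itself attacked by S."""
        attackers = {x for (x, y) in attacks if y == arg}
        return all(any((s, x) in attacks for s in S) for x in attackers)

    S = set()
    while True:                          # least fixed point of the characteristic function
        new_S = {a for a in arguments if defended(a, S)}
        if new_S == S:
            break
        S = new_S
    print(S)                             # {'a', 'c'}: a is unattacked and defends c against b

Argumentation-based explanations build on exactly this kind of attack-and-defence structure, which the course develops in much greater depth.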

Deep Reasoning in AI with Answer Set Programming and LLMs

University of Calabria

University of Calabria

Course type: Introductory

About the course

Answer Set Programming (ASP) is a logic-based Knowledge Representation and Reasoning (KRR) paradigm that eases the fast prototyping of algorithms for complex problems. Indeed, ASP finds a natural application in solving Deep Reasoning problems characterized by search spaces of exponential size, which is the typical case for combinatorial search and combinatorial optimization. However, while taking the first steps in ASP is easy, becoming proficient with the most advanced linguistic constructs and scaling to realistically sized instances is not necessarily a walk in the park.
In this course we will show how to use ASP at several levels, from the basic use of ASP systems for computing the answer sets of an ASP program, to more sophisticated use cases in which ASP itself is just one component, albeit a crucial one, in broader and more complex machinery. The course will also introduce some recent results on combining LLMs and ASP: ASP can be used to expand the (weak) reasoning capabilities of LLMs to obtain robust AI systems, and LLMs can ease the task of coding in ASP.
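As a first taste of the basic use of ASP systems mentioned above, the sketch below encodes graph 3-colouring and enumerates its answer sets; it assumes the clingo Python bindings are installed, and the program itself is a hypothetical toy instance rather than course material.

    import clingo

    program = """
    node(1..4).
    edge(1,2). edge(2,3). edge(3,4). edge(4,1). edge(1,3).
    col(red;green;blue).
    % every node gets exactly one colour
    1 { color(N,C) : col(C) } 1 :- node(N).
    % adjacent nodes must not share a colour
    :- edge(X,Y), color(X,C), color(Y,C).
    #show color/2.
    """

    ctl = clingo.Control(["0"])              # "0" asks for all answer sets
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    ctl.solve(on_model=lambda m: print("Answer set:", m))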

Distributional reinforcement learning

Linnaeus University

Linnaeus University

Course type: Introductory

About the course

The goal of this course is to introduce participants to the general framework of distributional reinforcement learning (DRL). The framework represents a recent and successful paradigm shift for reinforcement learning algorithms that use deep learning. By having agents learn the full distribution of returns rather than their expected values, we obtain a much richer view and more flexibility for algorithms. The overall aim of the course is to give a solid foundational understanding of DRL, enabling participants to pursue more advanced topics or apply the gained knowledge in practical settings.
The course will provide a primer on the classical RL framework, which includes Bellman equations and operators. This will lay the conceptual and theoretical groundwork for the novel theory, which includes random-variable Bellman equations, distributional operators, and distributional temporal-difference learning. The course will conclude with a forward-looking exploration and discussion of state-of-the-art methods in the field, such as categorical and quantile deep learning methods in DRL.
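For orientation, the shift from expected values to return distributions can be summarised by contrasting the classical Bellman equation with its distributional counterpart (standard notation; the second equation is an equality in distribution of the random return Z):

    Q^{\pi}(s,a) \;=\; \mathbb{E}\,[R(s,a)] \;+\; \gamma\, \mathbb{E}_{s' \sim P,\; a' \sim \pi}\big[Q^{\pi}(s',a')\big]
    Z^{\pi}(s,a) \;\overset{D}{=}\; R(s,a) \;+\; \gamma\, Z^{\pi}(S',A'), \qquad S' \sim P(\cdot \mid s,a),\; A' \sim \pi(\cdot \mid S')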

Ethics and Law in Trustworthy AI: Foundations and Applications

The Kempelen Institute of Intelligent Technologies

The Kempelen Institute of Intelligent Technologies

Course type: Introductory

About the course

Trustworthy AI, which integrates legal and ethical considerations, is a critical focus in responsible AI development and use. Highlighted by key European Union (EU) initiatives like the Ethics Guidelines for Trustworthy AI (EGTAI), the Assessment List for Trustworthy AI (ALTAI), and the proposed Artificial Intelligence Act (AIA), it underscores the importance of ethical principles in AI governance.
This course introduces the concept of trustworthy AI, its foundational ethical principles, and its alignment with EU legal frameworks, including the AIA and other relevant regulatory initiatives in the digital space. Participants will learn how ethical principles can guide practical implementation through ethics-based assessment processes and their application in AI research and development.
Additionally, the course explores key intersections between existing and proposed EU laws. By the end, participants will gain a comprehensive understanding of some key requirements on trustworthy AI and their integration into legal and ethical frameworks.

Machine Unlearning: Theory, Methods, and Evaluations with Hands-On Insights

Politecnico di Torino University

University of L’Aquila

Course type: Advanced

About the course

The field of Machine Unlearning focuses on the operations needed to “forget” specific data used during the training process. This requirement arises in various scenarios, typically when copyrighted material is incorporated into training datasets or individuals invoke their right to be forgotten under the General Data Protection Regulation (GDPR).
Retraining models from scratch for each deletion request is typically impractical, particularly for large-scale models. Instead, Machine Unlearning leverages more computationally efficient methodologies.
The course will begin with a comprehensive introduction to the theoretical foundations of Machine Unlearning, followed by an in-depth exploration of state-of-the-art techniques. It will also examine the evaluation metrics and protocols used to assess the effectiveness and efficiency of the unlearning process.
Finally, participants will actively engage with a dedicated benchmarking tool to explore unlearning techniques and evaluation metrics through hands-on activities. This practical approach aims to provide meaningful insights into this emerging area of research.

Formal verification of symbolic and connectionist AI: a way toward higher quality software

Paris-Saclay University

Paris-Saclay University

Paris-Saclay University

Course type: Introductory

About the course

The pervasiveness of AI software – both symbolic and connectionist – in our societies is a fact. This very dissemination raises important questions from the perspective of software engineering. As AI software malfunctions can have dire consequences for individuals and societies, AI software is expected to aim for high software quality. The field of formal methods has successfully transferred techniques and tools into some of the most critical industries.
The goal of this course is to provide an accurate perspective on formal methods applied to AI software. Drawing on real-world industrial examples, we will present how the use of formal methods can help AI developers assess the quality of their software, ranging from adversarial robustness to automated neural network fixing and explainable AI with guarantees.
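A typical property targeted by such tools is local adversarial robustness: for a classifier f, an input x, and a perturbation budget ε, one asks a verifier to prove (in standard notation, not specific to this course) that

    \forall x'.\;\; \lVert x' - x \rVert_\infty \le \varepsilon \;\Longrightarrow\; \arg\max_i f_i(x') = \arg\max_i f_i(x)

that is, no input within the ε-ball around x changes the predicted class.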

Query Languages and Graph Neural Networks

University of Oxford

University of Oxford

Course type: Introductory

About the course

Graph Neural Networks (GNNs) are one of the most popular machine learning models for graph data, with applications in molecular biology, social networks, cybersecurity, and Web search, among many other areas. A key question in the study of GNN learning is understanding the expressive power of the models used. For example, we may study the ability of GNNs to distinguish graphs, usually called non-uniform expressive power, or we may try to identify the exact class of functions that a particular GNN architecture can realise, called uniform expressive power. Both kinds of expressive power of GNNs have been characterised with the help of query languages: formal, logic-based languages from database theory with well-understood computational properties.
In this course, we give an overview of key results, most of them very recent, on the uniform and non-uniform expressive power of GNNs formulated using query languages. We also show how these results can be exploited for practical applications such as model explainability and verification.
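As background, the GNNs whose expressive power is studied here follow the standard message-passing scheme, in which each node repeatedly aggregates the multiset of its neighbours' features and combines the result with its own state (standard notation, not specific to the course):

    h_v^{(t)} \;=\; \mathrm{comb}\Big( h_v^{(t-1)},\; \mathrm{agg}\big( \{\!\!\{\, h_u^{(t-1)} : u \in N(v) \,\}\!\!\} \big) \Big)

Characterising which functions of the input graph such layers can express, uniformly or non-uniformly, is exactly where the logic-based query languages enter.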

Foundations and Explainability of Datalog

University of Milan

University of Edinburgh

Course type: Introductory

About the course

Datalog emerged in the 1970s as a prominent logic-based query language rooted in Logic Programming and has been extensively studied since then. It essentially extends the language of unions of conjunctive queries, which corresponds to the select-from-where fragment of SQL, with the important feature of recursion, which is needed to express some natural queries. In recent years, Datalog has been used in symbolic AI either as a powerful knowledge representation language or as a tool for efficiently executing queries over knowledge bases. This introductory course is about the foundations of Datalog, as well as the important task of explaining answers to Datalog queries, which is crucial for explainable and transparent data-intensive applications.
The first part of the course will discuss the foundations of Datalog, namely syntax, semantics, query evaluation, and its computational complexity.
The second part will discuss natural explainability notions for Datalog queries and show that they can be captured via the unifying framework of semiring provenance. We will then concentrate on the notion of why-provenance, a simple yet very useful approach for explaining answers to Datalog queries. We will analyze its computational complexity and discuss how it can be efficiently computed via SAT solvers. We will also discuss variations of why-provenance that lead to less informative and more informative explanations.
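A standard illustration of the recursion mentioned above is the reachability (transitive closure) query, which is expressible in Datalog but not as a union of conjunctive queries:

    \mathrm{Reach}(x,y) \leftarrow \mathrm{Edge}(x,y) \qquad\qquad \mathrm{Reach}(x,z) \leftarrow \mathrm{Reach}(x,y),\; \mathrm{Edge}(y,z)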

Higher-Order Network Analysis: Topology, Machine Learning, and Applications

Sapienza University of Rome

Sapienza University of Rome

Course type: Advanced

About the course

Many real-world systems, such as social networks, brain networks, and protein-protein interaction networks, can be explored by analyzing the interactions among their constituents. Therefore, the development of methodologies that analyse graph-structured data is gaining significant attention from scientists across several fields, such as network science, machine learning, and topology. Graphs model pairwise interactions between entities; however, several systems include higher-order interactions, i.e., simultaneous interactions among more than two entities.
The objective of the course is to provide students with the fundamental theoretical and practical tools for the analysis of higher-order network data. In particular, the course will bring together higher-order network processing techniques that lie at the intersection of network theory and machine learning. It introduces methods for learning from signals defined on non-Euclidean structures (specifically graphs, hypergraphs, and simplicial complexes) and examines advanced neural network models, including higher-order extensions of graph neural networks. Overall, this is a strongly interdisciplinary course that brings together concepts from network theory, topology, and machine learning to give a broader understanding of the promising field of higher-order network analysis.
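For reference, the two higher-order structures at the heart of the course can be stated compactly (standard definitions): a hypergraph generalises a graph by allowing edges of any size, and a simplicial complex additionally requires downward closure.

    H = (V, E), \quad E \subseteq 2^{V} \setminus \{\emptyset\} \qquad\qquad \sigma \in K \ \text{and}\ \emptyset \neq \tau \subseteq \sigma \;\Longrightarrow\; \tau \in K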

Human Rights to AI System Specifications

Imperial College London

Imperial College London

Course type: Introductory

About the course

As artificial intelligence continues to reshape the world, this course examines how AI systems can be designed to uphold and advance human rights. Through an interdisciplinary approach, participants will explore AI’s capabilities and challenges in areas such as copyright, fairness, accountability, and governance.
The course begins with foundational concepts in AI, including techniques like machine learning and natural language processing, and their relevance to human rights. It then delves into the ethical and legal implications of AI, such as its role in regulatory frameworks, emerging rights, and power asymmetries. Using infrastructural studies as a guiding methodology, participants will learn how to map human rights principles to actionable system specifications, ensuring that AI systems align with values like transparency, inclusivity, and security.
Real-world applications and case studies are integrated throughout, including discussions on the attribution of AI-generated content, the legal status of AI systems, and the potential for AI to enhance civic education and co-production of governance policies. By the end of the course, participants will be equipped with the knowledge and tools to critically analyse and design AI systems that are ethical, equitable, and aligned with societal needs.

Neural Recommender Systems: Theory, Methods, and Applications

Sapienza University of Rome

Sapienza University of Rome

Course type: Introductory

About the course

This course provides a comprehensive overview of neural recommender systems, focusing on their foundations, state-of-the-art advancements, and practical challenges. Students will explore fundamental architectures, deep learning techniques, and the role of neural networks in personalization and recommendation tasks.
The lectures will include critical topics such as sequence-based recommendations, graph-based approaches, multi-modal systems, and fairness and explainability in recommendations. Hands-on examples and discussions will emphasize both theoretical insights and practical implementations.
By the end of the course, students will gain a strong understanding of neural recommender systems, their applications, and the challenges of deploying them in real-world settings.
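As a minimal illustration of the neural architectures the course builds on, the sketch below (assuming PyTorch, with hypothetical user/item ids and labels) scores user-item pairs with a dot product of learned embeddings, the simplest neural matrix-factorisation recommender.

    import torch
    import torch.nn as nn

    class DotProductRecommender(nn.Module):
        def __init__(self, n_users, n_items, dim=32):
            super().__init__()
            self.user_emb = nn.Embedding(n_users, dim)
            self.item_emb = nn.Embedding(n_items, dim)

        def forward(self, users, items):
            # score = dot product of the user and item embeddings
            return (self.user_emb(users) * self.item_emb(items)).sum(dim=-1)

    model = DotProductRecommender(n_users=1000, n_items=500)
    users = torch.tensor([0, 1, 2])
    items = torch.tensor([10, 20, 30])
    labels = torch.tensor([1.0, 0.0, 1.0])       # observed / unobserved interactions
    loss = nn.functional.binary_cross_entropy_with_logits(model(users, items), labels)
    loss.backward()                               # one optimisation step would follow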

Introduction to Constraint Satisfaction

Charles University

Course type: Introductory

About the course

Constraint programming is a technology for the declarative description and solving of hard combinatorial problems, such as scheduling. It represents one of the closest approaches to the Holy Grail of automated problem solving: the user states the constraints over the problem variables, and the system finds an instantiation of the variables that satisfies the constraints and represents a solution to the problem.
The course surveys the major constraint satisfaction techniques and shows how they can be used to solve practical problems.
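The declarative style described above can be illustrated with a tiny, self-contained sketch (a hypothetical map-colouring instance, not the course's solver): the user only states which regions must receive different colours, and a generic backtracking search finds a satisfying assignment.

    def consistent(assignment, different_pairs):
        """Check the 'must differ' constraints on the variables assigned so far."""
        return all(assignment[a] != assignment[b]
                   for a, b in different_pairs
                   if a in assignment and b in assignment)

    def backtrack(assignment, variables, domains, different_pairs):
        if len(assignment) == len(variables):
            return assignment
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            candidate = {**assignment, var: value}
            if consistent(candidate, different_pairs):
                result = backtrack(candidate, variables, domains, different_pairs)
                if result is not None:
                    return result
        return None

    variables = ["A", "B", "C"]                        # three regions
    domains = {v: ["red", "green"] for v in variables}
    different_pairs = [("A", "B"), ("B", "C")]         # adjacent regions must differ
    print(backtrack({}, variables, domains, different_pairs))

Real constraint solvers add constraint propagation, global constraints, and search heuristics on top of this basic scheme.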

Strategic AI: Bridging Game Theory and Multi-Agent Systems via Autoformalization

Royal Holloway, University of London

Royal Holloway, University of London

Royal Holloway, University of London

Course type: Introductory

About the course

This course explores the intersection of game theory, multi-agent systems, and AI-driven formalisation techniques for modelling strategic interactions between agents, both human and artificial. Participants will build a strong foundation in game theory and MAS through hands-on, interactive examples. The course will then introduce Game Description Languages (GDLs) and their temporal and situational extensions, emphasising their role in formalising complex game-theoretic scenarios.
Subsequently, participants will examine advanced concepts, including belief hierarchies, theory of mind, and recursive reasoning within agentic contexts. A novel highlight is the integration of large language models for autoformalizing game-theoretic scenarios, bridging subsymbolic and symbolic AI tools. Practical applications will be demonstrated through a custom MAS simulation framework and real-world case studies, showcasing the versatility and relevance of game-theoretic autoformalization in AI research.
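To fix ideas, here is a toy formalisation of a strategic interaction (a hypothetical Prisoner's Dilemma, written directly in Python rather than in a GDL) together with a brute-force check for pure-strategy Nash equilibria.

    # (row action, column action) -> (row payoff, column payoff)
    payoffs = {
        ("C", "C"): (3, 3), ("C", "D"): (0, 5),
        ("D", "C"): (5, 0), ("D", "D"): (1, 1),
    }
    actions = ["C", "D"]

    def is_pure_nash(a_row, a_col):
        """Neither player can gain by unilaterally deviating."""
        row_ok = all(payoffs[(a_row, a_col)][0] >= payoffs[(alt, a_col)][0] for alt in actions)
        col_ok = all(payoffs[(a_row, a_col)][1] >= payoffs[(a_row, alt)][1] for alt in actions)
        return row_ok and col_ok

    print([(r, c) for r in actions for c in actions if is_pure_nash(r, c)])   # [('D', 'D')]

Game Description Languages and their extensions express this kind of formalisation declaratively and with far greater generality, which is where autoformalization with large language models enters the course.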

Uncertainty in Machine Learning - Towards Trustworthy AI Models

University of Groningen

University of Groningen

Course type: Introductory

About the course

What if we train a model to classify dogs and cats, but it is later tested with an image of a human? Generally, the model will output either dog or cat, and it has no way to signal that the image contains none of the classes it can recognize. This is because classical neural networks have no built-in mechanism for estimating their uncertainty, which has practical consequences in settings that require reliable uncertainty quantification, such as safe cooperation with humans, autonomous systems like robots, and computer vision systems.
In this short course, we will cover the basic concepts of how to train machine learning models with uncertainty, Bayesian neural networks, uncertainty quantification, and related benchmarks and evaluation metrics.
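As a concrete example of the kind of uncertainty estimate discussed in the course, the sketch below uses Monte-Carlo dropout (assuming PyTorch; the network and input are hypothetical placeholders): dropout is kept active at prediction time, several stochastic forward passes are averaged, and a high predictive entropy signals "I am not sure".

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 2))

    def predict_with_uncertainty(model, x, n_samples=50):
        model.train()                            # keep dropout stochastic at inference time
        with torch.no_grad():
            probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
        mean = probs.mean(dim=0)
        entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)   # predictive entropy
        return mean, entropy

    x = torch.randn(1, 10)                       # stand-in for an unfamiliar input
    mean_probs, entropy = predict_with_uncertainty(model, x)
    print(mean_probs, entropy)

Bayesian neural networks, together with the benchmarks and evaluation metrics covered in the course, provide more principled versions of this idea.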

AI for Security: Exploring Adversarial Learning and the Transferability of Adversarial Attacks

Kennesaw State University

Course type: Advanced

About the course

Adversarial attacks on AI models pose a significant threat to many applications, including autonomous vehicles and face recognition systems. But how effective and realistic is the concept of adversarial transferability? Can understanding this concept help us design better defenses?
This course dives into the crucial aspects of Adversarial Learning, focusing on understanding, designing, and defending against adversarial attacks in AI systems. We begin by establishing fundamental concepts of Convolutional Neural Networks (CNNs) and exploring the differences between white-box and black-box adversarial attacks. The course then focuses on Fast Gradient Sign Method (FGSM) attacks as a case study to demonstrate how adversarial attacks are designed and executed, and how effective they are in compromising AI models. This serves as a foundation for introducing the concept of Adversarial Transferability, examining its practicality and effectiveness across different CNN architectures, and exploring how it can be leveraged to build resilient and robust AI defenses.
By the end of this course, participants will gain a comprehensive understanding of the real-world scenarios where adversarial attacks can pose significant threats to AI systems. They will also learn to assess the vulnerabilities of neural networks, grasp the concept of adversarial transferability, and explore how it can be utilized to enhance the security and resilience of AI models.
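To make the FGSM case study concrete, here is a minimal sketch of the attack (assuming PyTorch; the classifier, image, and label are hypothetical stand-ins): the input is nudged in the direction of the sign of the loss gradient, scaled by a small ε.

    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Return x perturbed by epsilon * sign(gradient of the loss w.r.t. x)."""
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()     # keep pixel values in a valid range

    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # stand-in classifier
    x = torch.rand(1, 1, 28, 28)                                   # stand-in image
    y = torch.tensor([3])                                          # stand-in label
    x_adv = fgsm_attack(model, x, y)

Transferability is then the observation that an adversarial example crafted against one model often also fools a different architecture trained on similar data.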