Humans and Autonomous Agents Lab

Ensuring safety in Human-Centric Multiagent Systems

Latest News

January 2026

New students join the lab

Our lab has new members: Willam (PhD), Ajmain (MEng project), Zirui (MEng project).

January 2026

Paper accepted by the Journal of Artificial Intelligence Research (JAIR)

Our paper on a model of (false) polarization on social recommender systems mediated by ideological organizations has been accepted for publication in JAIR.

December 2025

Paper accepted at AAMAS 2026

Our paper on ‘LLM-augmented empirical game theoretic simulation for social-ecological systems’ (link) has been accepted at AAMAS 2026.

October 2025

NCC grant awarded

Our lab has been awarded an NCC CSIN grant on ‘Threat Modelling and Mitigation of human-factors based exploits in advanced driver assistance systems’.

View All News

Research Areas

Safety of Autonomous Vehicles and Agent-Based Systems

As autonomous agents—such as self-driving vehicles and intelligent assistants—become more widespread, ensuring their safety and reliability in human-centered environments is critical. Our work explores novel approaches to verification and validation that explicitly account for human unpredictability, cognitive diversity, and interaction dynamics. We aim to develop methodological frameworks that address both technical correctness and behavioral safety.

Game Theory for Human-AI Interaction

We study human-AI interaction through a behavioral game-theoretic lens: how to formally represent interactions between humans and autonomous agents, especially when human behavior deviates from classical models of full rationality. Drawing on behavioral and empirical game theory, we develop models of bounded rationality to better align automated decision-making with real-world human behavior. This line of work also examines the strategic implications and risks that arise when agents rely on incorrect or idealized models of human decision-making.

Computational Social and Institutional Sciences

We leverage large language models and formal simulation frameworks to model human behavior at both micro and macro levels. This research develops tools to study social, economic, and institutional dynamics through AI-based simulations, thereby supporting the testing of hypotheses and the design of interventions in complex, societal-scale systems.

Norms, Values, and Alignment of AI systems

This line of research addresses reasoning about, learning, and deliberating over norms within multiagent systems. The overarching goal is to look beyond preferences and move toward a more holistic sense of agency.