Human Support Robot

Robots make the world happy

King’s College London, Social AI & Robotics Laboratory.

Last updated: Nov. 21, 2022

Helping robots to understand and adapt to people


  • Oya Celiktutan, Associate Professor, King’s College London
  • Gerard Canal, Assistant Professor, King’s College London
  • Viktor Schmuck, PhD student, King’s College London
  • Ruiqi Zhu, PhD student, King’s College London
  • Edoardo Cetin, PhD student, King’s College London
  • Lennart Wachowiak, PhD student, King’s College London



The Social AI & Robotics (SAIR) Laboratory, led by Dr Oya Celiktutan, focuses on the theory of machine learning and its application to human behaviour understanding and generation, human-robot interaction, and robot learning from humans. The success of robots that interact and work alongside people in human environments depends significantly on their ability to recognise human expressive gestures and activities and to respond to them accordingly. Within this context, the SAIR team investigates how robots can detect humans and recognise their behaviours, including activities, gestures, and nonverbal cues; how they can navigate crowded social spaces; how they can learn new tasks by watching others; and how they can explain their actions to their users. Dr Gerard Canal’s research focuses on applying decision-making techniques such as Task Planning to extend a robot’s autonomy, adaptability, and explainability. Such techniques find important applications in any robotic scenario involving interaction between humans and a robot, particularly in assistive home settings.


  1. V. Schmuck, T. Sheng and O. Celiktutan, “Robocentric Conversational Group Discovery,” 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2020, pp. 1288-1293, DOI: 10.1109/RO-MAN47096.2020.9223570.
  2. V. Schmuck and O. Celiktutan, “GROWL: Group Detection With Link Prediction,” 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), 2021, pp. 1-8, DOI: 10.1109/FG52635.2021.9667061.
  3. E. Cetin, P. J. Ball, S. J. Roberts and O. Celiktutan, “Stabilising Off-Policy Deep Reinforcement Learning from Pixels,” 2022 International Conference on Machine Learning (ICML), 2022, pp. 2784-2810 [Link].
  4. E. Cetin and O. Celiktutan, “Domain Robust Visual Imitation Learning with Mutual Information Constraints,” 2021 International Conference on Learning Representations (ICLR) [Link].
  5. L. Wachowiak, P. Tisnikar, G. Canal, A. Coles, M. Leonetti and O. Celiktutan, “Analysing Eye Gaze Patterns during Confusion and Errors in Human–Agent Collaborations,” 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2022, pp. 224-229, DOI: 10.1109/RO-MAN53752.2022.9900589.