Meet the Team

  • Sotirios (Sotos) Tsaftaris

    Hub Director

    An expert in multimodal AI and its applications in health at The University of Edinburgh, working closely with industry, the NHS, and other stakeholders.

  • Hana Chockler

    London Spoke Lead | EDI Lead

    An expert in causal reasoning and explainability at King’s College London, a Google Faculty awardee, and a former Principal Scientist at causaLens.

  • Matthew Sperrin

    Manchester Spoke Lead | ECR Champion

    An expert in causal inference for decision support at the University of Manchester, leading several projects with industry and NHS partners.

  • Ricardo Silva

    Research Theme Lead

    An expert in causal inference and AI at University College London. He holds an EPSRC Open Fellowship, is a Turing Fellow, and collaborates with industry partners such as DeepMind, Spotify, and ByteDance.

  • Ben Glocker

    Knowledge Transfer Lead

    An expert in causality for image analysis, focusing on the safe and ethical deployment of medical imaging AI, based at Imperial College London and Kheiron Medical Technologies.

  • Niccolo Tempini

    Responsible Innovation & Ethics Lead

    An expert in the governance, management, and sharing of research data and in data infrastructure development at the University of Exeter, and also a Turing Fellow.

  • Catherine Gauld

    Hub Manager | Environmental Sustainability Champion

    An expert in managing large-scale, multi-stakeholder projects, with a keen interest in AI for health and AI ethics.

  • Emily Lekkas

    Partnerships Manager

    An expert in driving forward high-value innovation projects in technology sectors such as data science, AI, digital health, medical devices, and life sciences.

  • Belgin Davidson

    CHAI Hub Administrator

    An expert in delivering portfolio activities and projects with 19 years’ experience in higher education.

  • Connie Aitkin

    CHAI Hub Events Administrator

    An expert in events management, having coordinated workshops, recruitment days, and networking events, ensuring seamless execution and positive outcomes.

  • Erin Johnstone

    CHAI Business Development Executive

    An expert in developing and managing multi-stakeholder partnerships for cutting-edge projects across healthcare, medical devices and the pharmaceutical industry, with a keen interest in AI.

Co-Investigators

Professor Daniel Alexander, University College London

Professor Kenneth Baillie, University of Edinburgh

Dr Sjoerd Beentjes, University of Edinburgh

Dr Elliot Crowley, University of Edinburgh

Dr Karla Diaz-Ordaz, University College London

Dr Javier Escudero Rodriguez, University of Edinburgh

Dr Henry Gouk, University of Edinburgh

Dr Anita Grigoriadis, King’s College London

Dr Hui Guo, University of Manchester

Professor Bruce Guthrie, University of Edinburgh

Dr Stephan Guttinger, University of Exeter

Professor Ewen Harrison, University of Edinburgh

Professor Yulan He, King’s College London

Dr Ava Khamseh, University of Edinburgh

Dr Yingzhen Li, Imperial College London

Professor Kia Nazarpour, University of Edinburgh

Professor Ram Ramamoorthy, University of Edinburgh

Professor Ginny Russell, University of Exeter

Dr Sohan Seth, University of Edinburgh

Professor Ian Simpson, University of Edinburgh

Dr Eliana Vasquez Osorio, University of Manchester

Professor William Whiteley, University of Edinburgh

Fabio De Sousa Ribeiro  

Fabio’s research activity over the past few months has focused primarily on new theoretical identifiability results for deep latent variable models. Identifiability is essential for both reliable representation learning and causal inference: without identifiability guarantees, it is difficult to determine whether model estimates are truly causal or merely a product of confounding, selection bias, or spurious correlations. A formal sketch of this notion follows.
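
As general background (a standard textbook definition, not a statement of Fabio’s results), a model family is identifiable when matching distributions force matching parameters, possibly only up to an equivalence relation such as a permutation or rescaling of the latent variables:

    % Standard notion of identifiability for a parametric family {p_theta}.
    % The equivalence ~ denotes an assumption-dependent relabelling of
    % parameters (e.g. permutation or rescaling of latent dimensions).
    \[
      p_{\theta_1}(x) = p_{\theta_2}(x) \ \text{ for all } x
      \quad \Longrightarrow \quad
      \theta_1 \sim \theta_2
    \]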

Postdoctoral Research Associates (PDRAs)

David Kelly 

David researches and develops new algorithms for black-box explainability. Since he started, we have submitted a paper on explaining absence (specifically, the "no abnormalities" diagnosis in medical imaging AI) and a paper on multiple explanations (important for healthcare images containing multiple abnormalities), and we are currently working on a journal paper describing the underlying algorithm and its implementation. David also participates in developing explainability methods for spectra (applicable to Raman spectroscopy) and for tabular data (common in healthcare records), and supervises an undergraduate student working on explainability for 3D images (useful for explaining MRIs). A generic sketch of the black-box setting appears below.
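
To illustrate what "black-box" means here, the minimal occlusion-style sketch below queries the model only through its predictions and never inspects its internals. This is a generic illustration, not the algorithm David works on; the names occlusion_map and predict, and the toy data, are hypothetical.

    # Generic occlusion-style sketch of black-box explainability, NOT the
    # Hub's algorithm. `predict` is any model exposed only via its outputs.
    import numpy as np

    def occlusion_map(image, predict, patch=8, baseline=0.0):
        """Score each patch by the drop in model confidence when masked."""
        h, w = image.shape[:2]
        base_score = predict(image)
        heat = np.zeros((h, w))
        for y in range(0, h, patch):
            for x in range(0, w, patch):
                occluded = image.copy()
                occluded[y:y + patch, x:x + patch] = baseline
                # A large confidence drop means the patch was important.
                heat[y:y + patch, x:x + patch] = base_score - predict(occluded)
        return heat

    # Toy usage: a "classifier" that scores the brightness of the centre.
    rng = np.random.default_rng(0)
    img = rng.random((32, 32))
    img[12:20, 12:20] += 2.0  # bright centre region drives the prediction
    heat = occlusion_map(img, lambda im: float(im[12:20, 12:20].mean()))
    print(heat[12:20, 12:20].mean() > heat[:8, :8].mean())  # True: centre matters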

Nathan Blake 

Nathan works on applications of explainability in healthcare. Since he started, he has submitted a paper on explanations for MRI classifiers to Communications Medicine (partly including research done in the TAS Node). He is currently working on a line of papers on explainability for Raman spectroscopy: the first demonstrates explainability on synthetic data (presented at the SPEC'24 conference in June), and subsequent papers extend the approach to in vitro and in vivo data. Nathan also provides guidance to a PhD student working on personalised prediction of the risk of developing breast cancer for BRCA1 and BRCA2 mutation carriers, using AI and causality.