About Us

WHO WE ARE

We’re a team of six based in the UK and California. In the summer of 2023, after running the MATS Program extension, we found that we had organically grown an AI safety hub of researchers and organisations in London. At the same time, momentum had been building around AI safety in the UK, led by the UK Government’s decision to host the first Global AI Safety Summit. We decided to formalise this hub, and LISA is the result. We’re now proud to support some of the leading research organisations and individual researchers in AI safety. Please see below for full details on our team and Advisory Board.

WHAT WE DO

Our mission is to be a leading research centre that improves the safety of advanced AI systems by supporting and empowering individual researchers and small organisations. To achieve this, LISA:

  • Provides a supportive, productive, and collaborative research environment where diverse ideas are refined, challenged, and advanced;
  • Offers financial stability and recognition to individual researchers and small organisations;
  • Cultivates a home for leading AI safety research by leveraging London’s strategic advantages and building upon our existing ecosystem and partnerships;
  • Fosters epistemic quality and diversity amongst new AI safety researchers and organisations by facilitating established domain-specific mentorship and upskilling programmes, and by encouraging debate and investigation across numerous AI safety research agendas.

LISA is uniquely positioned to enact this vision. We now support several organisations, including Apollo Research, Leap Labs, BlueDot Impact, the MATS extension, and ARENA, as well as many individual and externally affiliated researchers. We are poised to capitalise on the abundance of motivated and competent talent in London and the supportive environment provided by the UK Government and other local organisations. Our approach is not just about creating a space for research; it is about building a community and a movement that can significantly improve the safety of advanced AI systems.

Our team

Get to know our leadership team

Mike Brozowski Operations Director
LinkedIn Email

Mike is an operations professional with over seven years’ experience in senior leadership roles, including at high-growth, early-stage companies. Mike managed the business operations of the MATS program in London and co-founded LISA. Before his involvement with AI safety, Mike led operations for a number of FinTech firms, where he was responsible for managing and integrating internal operations and oversaw the creation of operational strategies and policies across a wide range of support activities.

James Fox Research Director
LinkedIn Email

As Research Director, James co-leads LISA and oversees research prioritisation and strategy. He is currently writing up his PhD in Computer Science at the University of Oxford on technical AI safety, supervised by Tom Everitt (Google DeepMind) and by Michael Wooldridge and Alessandro Abate (Oxford), focusing on game theory, causality, reinforcement learning, and agent foundations. James also has an MSci in Natural Sciences (Physics) from the University of Cambridge and has research experience at CERN and CSER.

Ryan Kidd Non-Executive Director
LinkedIn Email

Ryan is Co-Director of MATS, a Board Member and Co-Founder of the London Initiative for Safe AI (LISA), and a Manifund Regrantor. Previously, he completed a BSc (Hons) and PhD in Physics at the University of Queensland (UQ), where he ran UQ’s Effective Altruism student group for three years, tutored physics courses, volunteered in academic, mental health, and ally roles, and helped organize the UQ Junior Physics Odyssey.

Christian Smith Non-Executive Director
LinkedIn Email

Christian is Co-Director of MATS and a Board Member and Co-Founder of the London Initiative for Safe AI (LISA). Previously, he studied particle physics and pedagogy at Stanford University, worked in operations at multiple organizations including Lightcone Infrastructure, performed research at CERN, and organized educational programs like the Uncommon Sense Seminar.

Chris Knight Head of Compliance

Nina Wolff-Ingham Office Manager
LinkedIn Email

Nina is LISA’s Office Manager and works to keep the research centre running smoothly. She has a background in hospitality management and event coordination, which she uses alongside her BA (Hons) in Marketing to help create a cohesive working environment.

Jodie Woodall Front of House Coordinator

Meet our advisory board

Callum McDougall Researcher at Anthropic and Founder of ARENA
Henry Sleight MATS Extension Program Coordinator
Jamie Bernardi Co-Founder of BlueDot Impact
Jessica Rumbelow Co-Founder of Leap Labs
Marius Hobbhahn Director and Co-Founder of Apollo Research