London Initiative for Safe AI

A home for leading AI Safety research

Our mission is to improve the safety of advanced AI systems by empowering individual researchers and small organisations. We do this by creating a supportive, collaborative, and dynamic research environment that hosts members pioneering a diverse range of AI safety research.

I visited LISA for several days and was impressed by the community they've built in a short time, with a large number of talented junior researchers. The UK AI safety community seems to be growing rapidly and I think LISA is helping catalyze this.
Adam Gleave (CEO, FAR AI)
Two of my MATS scholars (Arthur Conmy and Callum McDougall) used LISA as their office during MATS and did fantastic work; their paper (Copy Suppression) is a solid and genuinely valuable contribution to the mechanistic interpretability literature. I'm glad LISA existed to host them!
Neel Nanda (Research Engineer, Google DeepMind)
Having the space gave me a big energy boost: a community, a sudden influx of ideas, and a huge jump in how much I understood what junior researchers like MATS scholars were doing.
Hoagy Cunningham (Anthropic Researcher & Former Scholar, MATS)
We have substantially benefited from LISA in a variety of ways. LISA hosts a large number of talented people, which directly benefits us: mutually beneficial discussions over lunch and dinner, high-quality talks, multiple new research collaborations. We’re also very thankful for the operational support from LISA.
Marius Hobbhahn (CEO, Apollo Research)
MATS 4.1 would not have been possible without LISA. LISA provided crucial and highly counterfactual infrastructure to MATS scholars and the only MATS team member in London. MATS 4.1 scholars commented on their increased productivity in the LISA office, and many found collaborators among LISA's members. This included one MATS scholar accepting a role at one of LISA's member organisations, and multiple others deciding to work on research projects with LISA members.
Henry Sleight (4.1 Coordinator, MATS)
The second iteration of ARENA benefited greatly from taking place in the same offices as MATS 3.1 participants and other alignment orgs. The discussions and talks organised during that time helped participants develop their inside views on alignment and form connections that would help develop their careers.
Callum McDougall (Anthropic Researcher and Founder, ARENA)
Having a managed workspace with other AI safety researchers not only decreases our operational burden, but also fosters serendipity through lunchtime conversations and regular talks and events. We value this highly for idea exchange, and as a hiring pipeline – our first full-time research hire is an ARENA alum, who I first met in the office.
Jessica Rumbelow (CEO, Leap Labs)

CONTACT US