Artificial intelligence will change the world. Our mission is to ensure this happens safely and to the benefit of everyone.
We aim to produce foundational research enabling the safe and beneficial development of advanced AI. See our publications below.
Luke Marks, Amir Abdullah, Luna Mendez, Rauno Arike, Philip Torr, Fazl Barez
Alex Foote*, Neel Nanda, Esben Kran, Ioannis Konstas, Shay Cohen, Fazl Barez*
Philip Quirke, Fazl Barez
Michael Lan, Fazl Barez
Jason Hoelscher-Obermaier*, Julia Persson*, Esben Kran, Ioannis Konstas, Fazl Barez*
Antonio Valerio Miceli-Barone*, Fazl Barez*, Ioannis Konstas, Shay B. Cohen
Our initiatives allow people from diverse backgrounds to have an impact on AI safety. Across all 7 continents, over 1,000 researchers have developed more than 200 projects. Some teams have gone on to publish their research at major academic venues, such as NeurIPS and ACL.
We engage in both original research and contract research projects that aim to translate academic insights into actionable strategies for mitigating catastrophic risks from AI. We have co-authored with researchers from the University of Oxford, DeepMind, the University of Edinburgh, and more.
Our aim is to foster a positive vision and an action-focused approach to AI safety, a commitment underscored by our signing of both the Statement on AI Risk and the Letter for an AI Moratorium. We are privileged to not only be tightly connected with but also actively develop a large community in AI safety.
Check out the list below for ways you can interact or research with Apart!
If you have AI safety or AI governance ideas lying around, submit them to aisafetyideas.com and we'll add them to the list as we make each one more shovel-ready!
Send your feature ideas our way in the #features-bugs channel on Discord. We appreciate any and all feedback!
You can book a meeting here and we can talk about anything between the clouds and the dirt. We're looking forward to meeting you.
Ideas on the website are validated by experts. If you would like to be one of these experts, write to us here. It would be a huge help for the community!
The blog contains the public outreach for A*PART. Sign up for the mailing list below to get future updates.
February 7, 2024
AI development attracts more than $67 billion in yearly investment, contrasting sharply with the $250 million allocated to AI safety. This gap suggests a large opportunity for AI safety to tap into the commercial market. The big question, then, is: how do you close that gap?
February 7, 2024
Written by Neel Nanda, who previously worked on mechanistic interpretability under Chris Olah at Anthropic and is currently a researcher on the DeepMind mechanistic interpretability team.
Head of Research Department
Commanding Center Management Executive
Partner Associate Juhasz
Head of Global Research
Commanding Cross-Cultural Research Executive
Commanding Research Executive
Manager of Experimental Design
Partner Associate Lækra
Head of Climate Research Associations
Research Equality- and Diversity Manager
Partner Associate Hvithammar
Honorary Fellow of Data Science and AI
P0rM Deep Fake Expert
Partner Associate Waade
Head of Free Energy Principle Modelling
London Subsidiary Manager
Partner Associate Dankvid
Partner Snus Executive
Bodily Contamination Manager
Partner Associate Nips
Head of Graphics Department
Cake Coding Expert
Associate Professor Formula T.
Honorary Associate Fellow of Research Ethics and Linguistics
Optimal Science Prediction Analyst
Partner Associate A.L.T.
Commander of the Internally Restricted CINeMa Research
Keeper of Secrets and Manager of the Internal REC