Reduce catastrophic risks from advanced AI

The AI Risk Mitigation Fund (ARM Fund) is a non-profit aiming to reduce catastrophic risks from advanced AI through grants that support technical research, policy work, and training programs for new researchers.

Grants from our team

The ARM Fund launched in December 2023 and has not yet made any grants. Below are selected grants that members of the ARM Fund team made as part of their grantmaking for the Long-Term Future Fund. We are proud to have funded many of these individuals early in their careers, after which they went on to make significant contributions in technical AI safety and AI governance.

David Krueger
University of Cambridge
Building research capacity
Start-up funds for computing resources for a deep learning and AI alignment research group at the University of Cambridge
$200,000
Noemi Dreksler
Centre for the Governance of AI
Policy
Two-year funding to conduct public and expert surveys on AI governance and forecasting
$231,608
Alan Chan
Mila
Building research capacity
Four-month stipend for a research visit to collaborate with academics in Cambridge on evaluating non-myopia in language models and RLHF systems
$12,321
Alexander Turner
Oregon State University
Technical research
Year-long stipend for research into shard theory and mechanistic interpretability in reinforcement learning
$220,000
Jessica Rumbelow
Leap Laboratories
Technical research
Seed funding for a new AI interpretability research organization
$195,000
Akbir Khan
University College London
Technical research
Compute for empirical work on AI Safety Via Debate
$55,000

Grantmaking focus areas

Technical AI alignment research

Technical research can uncover dangerous capabilities before it's too late, or enable us to design future AI systems that are easier to understand, monitor, and control.

AI Policy

Good policy can ensure that governments and corporations appropriately guard against catastrophic risks.

Building AI safety research capacity

Investment has poured into AI capabilities development, yet strikingly few researchers work on key problems in AI safety, particularly outside major industry labs. Grants in this area aim to bring new talent into the AI safety field.

About the Fund

This fund was spun out of the Long-Term Future Fund (LTFF), which makes grants aiming to reduce existential risk. Over the last five years, the LTFF has made hundreds of grants specifically in AI risk mitigation, totalling over $20 million. Our team includes AI safety researchers, expert forecasters, policy researchers, and experienced grantmakers. We are advised by staff from frontier labs, AI safety nonprofits, leading think tanks, and other organizations.

Help reduce catastrophic risks from advanced AI