The Center on Long-Term Risk Fund supports promising projects and individuals working on long-term risk mitigation related to advanced AI.
The Center on Long-Term Risk (CLR) works to address worst-case risks from the development and deployment of advanced artificial intelligence systems, with a current focus on conflict scenarios and the technical and philosophical aspects of cooperation.
The CLR Fund primarily supports individuals who want to make research contributions to CLR's current priority areas. However, it will also support other high-quality projects if the fund managers believe they will contribute to reducing risks of suffering, now or in the future.
For more information about how donations are allocated, see the list of past recipients and the grantmaking process on the CLR website.
We don't currently have further information about the cost-effectiveness of the Center on Long-Term Risk Fund beyond the fact that it works in a high-impact cause area and takes a reasonably promising approach.
We have varying degrees of information about the cost-effectiveness of our supported programs. We have more information about programs that impact-focused evaluators have looked into (some of which our research team expects to investigate soon as part of their evaluator investigations), as well as programs that we've previously included on our list of recommended charities. We think it's important to share the information we have with donors, as we expect it will be useful in their donation decisions. At the same time, we don't want donors to read too much into the fact that we share more information about some charities than others. Therefore, we want to clarify two things: