The Center on Long-Term Risk (CLR) AI Safety Research and Field Building program addresses worst-case AI risks through research, identifying grant opportunities, and field building.
The risks from the development and deployment of advanced AI systems pose a complex challenge. Because our resources are limited, CLR believes we need to prioritise, asking which actions we should take now to have the greatest possible positive impact.
Some of the crucial considerations that inform CLR’s current priorities are:
To address these risks, CLR:
You can read more about CLR’s research agenda and its focus on transparency on its website.
We don't currently have further information about the cost-effectiveness of the Center on Long-Term Risk beyond the fact that it works in a high-impact cause area and takes a reasonably promising approach.
We have varying degrees of information about the cost-effectiveness of our supported programs. We have more information about programs that have been looked into by impact-focused evaluators (some of which our research team expects to investigate soon as part of their evaluator investigations), as well as programs that we've previously included on our list of recommended charities. We think it's important to share the information we have with donors, as we expect it will be useful in their donation decisions, but we don't want donors to read too much into the fact that we share more information about some charities than others. Therefore, we want to clarify two things: