In this grants report, the Longtermism Fund team is pleased to announce that the following grants have been recommended by Longview and are in the process of being disbursed:
This report will provide information on what the grants will fund, and why they were made. It was written by Giving What We Can, which is responsible for the Fund's communications. Longview Philanthropy is responsible for the Fund's research and grantmaking.
We would also like to acknowledge and apologise for the report being released two months later than we would have liked, in part due to delays in the process of disbursing these grants. In the future, we will aim to take potential delays into account so that we can better keep to our target of releasing a report once every six months.
These grants were decided by the general grantmaking process outlined in our previous grants report and the Fund’s launch announcement.
As a quick summary, the Fund supports work that:
In addition, the Fund focuses on organisations with a compelling and transparent case in favour of their cost-effectiveness, and/or that will benefit from being funded by a large number of donors. Longview Philanthropy decides the grants and allocations based on its past and ongoing work to evaluate organisations in this space.
This grant supports Martin Wattenberg and Fernanda Viégas in developing their AI interpretability work at Harvard University. The grant aims to fund research that enhances our understanding of how modern AI systems function; better understanding how these systems work is among the more straightforward ways to help ensure they are safe. Profs. Wattenberg and Viégas have a strong track record (both have excellent references from other experts), and their future plans are likely to advance the interpretability field.
Longview: “We recommended a grant of $110,000 to support Martin Wattenberg and Fernanda Viégas’ interpretability work on the basis of excellent reviews of their prior work. These funds will go primarily towards setting up a compute cluster and hiring graduate students or possibly postdoctoral fellows.”
The evaluations project at the Alignment Research Center ("ARC Evals") works on "assessing whether cutting-edge AI systems could pose catastrophic risks to civilization." ARC Evals is contributing to the following AI governance approach:
ARC Evals works primarily on the first step of this approach.
The organisation is relatively new, and is now scaling up after early successes. For example, ARC Evals built partnerships with the frontier labs OpenAI and Anthropic to evaluate GPT-4 and Claude for certain dangerous capabilities prior to their release. As of the time of publication, the organisation has substantial room for more funding — on the order of millions of dollars needed to support its plans over the coming 18 months.
Longview: “We recommended a grant of $220,000 to ARC Evals on the basis of ARC Evals’ strong plan for contributing to AI governance and promising early progress. These funds will go primarily towards staff costs, and possibly computation, depending on ARC Evals’ overall fundraising.”
This grant will support NTI | Bio for a specific project aiming to strengthen international capabilities to uphold the norm against bioweapons development and use. Concretely, this involves organising a workshop with leading experts on the topic to develop a list of key recommendations. To learn about the kind of work involved in this project, we recommend the NTI | Bio paper “Guarding Against Catastrophic Biological Risks: Preventing State Biological Weapon Development and Use by Shaping Intentions”. The grant is restricted to this project.
Longview: “We recommended a grant of $100,000 to support this work on the basis that it was likely the most promising work which NTI | Bio would not otherwise have funding available for, and NTI’s track record of running similar projects. These funds will go primarily towards the workshop, with a smaller portion towards staff costs.”
Learn more about NTI | Bio's broader work, and see here for more information about this grant specifically.
This grant provides funding for CCDD to employ a Director of Research and Administration to support CCDD’s work. This role acts as a force multiplier on all of CCDD’s work, which Longview believes is impactful, having reviewed it several times over the last few years.
CCDD’s research contributes to planning for and reducing the chance of global catastrophic biological risks. This includes influencing policy (such as by estimating disease spread), researching vaccine trials (such as publishing the original research on the potential value of human challenge trials to address COVID-19), and training future epidemiologists (such as several Epidemic Intelligence Service officers). Its director, Professor Marc Lipsitch, spends around a quarter of his time as the Senior Advisor for the CDC’s Center for Forecasting and Outbreak Analytics, where he was founding co-director; former CCDD Postdoctoral Research Fellow Rebecca Kahn was also on the founding team. Prof. Lipsitch is also a nuanced contributor to important debates, such as those around research with the potential to be used for both good and harm. Donors can learn more about these topics via his appearances on various podcasts and media.
This grant helps fill a particular funding gap that CCDD reported could otherwise be difficult to fill: CCDD is mostly funded via the US government and large foundations, but this funding is generally restricted to direct research (rather than supporting research via funding operational or administrative roles).
Longview: “We recommended a grant of $80,000 to support Laurie Coe’s position as CCDD’s Director of Research and Administration on the basis that this will be a force multiplier on work increasing the world’s readiness for and reducing the chance of catastrophic pandemics, and because CCDD has a pressing need for funding to support this role.”
Learn more about the Center for Communicable Disease Dynamics.
This grant supports CEIP to run a project aiming to develop a common understanding of escalation pathways to nuclear war and of which policy interventions are most likely to mitigate that risk. More specifically, this grant will help fund research workshops in which a diverse range of experts from fields relevant to nuclear security and risk analysis convene to analyse potential escalation pathways, attempt to estimate their likelihood, identify potential levers to reduce or mitigate this risk, and compare these various pathways and levers more holistically.
The project will be run by James Acton and Jamie Kwong, and will result in a report with policy recommendations as well as outreach to decision makers to promote these policy changes.
Longview: “We recommended a grant of $52,000 to support the project on escalation pathways on the basis of its direct relevance to reducing the most extreme risks from nuclear weapons, and the CEIP team’s strong track record of high-quality analysis which is taken seriously by policymakers. These funds will go primarily towards workshops and project staff time.”
The Fund is approaching the end of its first year, and the team is extremely grateful to the 598 donors who have cumulatively contributed over $750,000 so far. We can all help address funding constraints — your donations, and your advocacy, can make an enormous difference in protecting the lives of future generations.