Note: We reevaluated some of the evaluators below in 2024. Please see our 2024 results for the most up-to-date information.
This page presents the results from Giving What We Can’s 2023 iteration of our ongoing project to evaluate impact-focused evaluators and grantmakers (which we collectively refer to as ‘evaluators’). We use this project to decide which evaluators we rely on for our giving recommendations and to advise our cause area funds.
In the 2023 round, we conducted the following six evaluations:
GiveWell (GW)
Happier Lives Institute (HLI)
EA Funds' Animal Welfare Fund (AWF)
Animal Charity Evaluators (ACE)
EA Funds' Long-Term Future Fund (LTFF)
Longview's Longtermism Fund (LLF)
Based on our evaluation, we’ve decided to continue to rely on GW’s charity recommendations and to ask GW to advise our new GWWC Global Health and Wellbeing Fund.
Some takeaways that inform this decision include:
GW’s overall processes for charity recommendations and grantmaking are generally very strong, reflecting a lot of best practices in finding and funding the most cost-effective opportunities.
GW’s cost-effectiveness analyses stood up to our quality checks. We thought its work was remarkably evenhanded (we never got the impression that the evaluations were exaggerated), and we generally found only minor issues in the substance of its reasoning, though we did find issues with how well this reasoning was presented and explained.
We found it noteworthy how large a role subjective judgement plays in its work, especially in how GW compares different outcomes (like saving and improving lives) and in some key parameters of the cost-effectiveness analyses supporting its deworming recommendations. We think reasonable people could come to different conclusions than GW does in some cases, but we think GW's approach is sufficiently well justified overall for our purposes.
Based on our evaluation, we’ve decided to recommend the AWF as a top-rated fund and to allocate half of our new GWWC Effective Animal Advocacy Fund’s budget to the AWF.
The key findings informing our recommendation of the AWF are:
Its recent marginal grants and overall grant decision-making look to be of sufficiently high quality.
We expect the AWF will have significant room to fund grants at or above the quality of its recent marginal grants.
We don’t know of any clearly better alternative donation options in animal welfare.
We did find what we think to be significant room for improvement in some of the AWF’s grantmaking reasoning and value-add to its grantees beyond funding — much of which the AWF acknowledges and is planning to address. However, we don’t think this room for improvement affects the AWF’s position as being — to our knowledge — among the best places to recommend to donors.
Based on our evaluation, we’ve decided to not currently rely on ACE’s charity recommendations nor to recommend ACE’s Movement Grants programme (MG) as a top-rated fund. However, we still think ACE’s funds and recommendations are worth considering for impact-focused donors and we will continue to host them on the GWWC donation platform. We’ve also decided to recommend the work of one of ACE’s recommendations — The Humane League (THL) — on corporate campaigns for chicken welfare as a top-rated program, and plan to allocate half of the GWWC Effective Animal Advocacy Fund’s budget to it until our next review in animal welfare.
Our key findings informing these decisions are:
When compared with the AWF, we think MG performs slightly less strongly on several of the proxies we looked into for the marginal cost-effectiveness of its grants.
We therefore think the AWF is currently a slightly better donation option for the impact-focused donor. However, we are open to part of this difference being explained by reasonable disagreements on optimal grantmaking strategy. Moreover, if the AWF had not been available as a better alternative by our criteria, we might have recommended MG upon further consideration. We think MG will become more competitive with the AWF, according to our criteria, if it succeeds in implementing the improvements that ACE has planned and in moving closer to the vision for MG that ACE has shared with us.
ACE’s charity evaluations process does not currently measure marginal cost-effectiveness to a sufficient extent for us to directly rely on the resulting charity recommendations.
We see some reasons to be hopeful that this will change in future evaluations, and we still think ACE's recommendations are worth considering for impact-focused donors. We also expect that the gain in impact from giving to any ACE-recommended charity rather than a random animal welfare charity is much larger than any potential further gain from giving to the AWF or THL's corporate campaigns rather than any (other) ACE-recommended charity. Note that we haven't evaluated ACE's recommended charities individually, only ACE's evaluation process.
THL’s corporate campaign work for chicken welfare is plausibly a highly cost-effective donation opportunity.
This assessment is not based on a direct investigation by the GWWC research team, but it is supported by four separate pieces of evidence, one of which is ACE's recommendation.
We decided not to make an explicit comparison between THL’s corporate campaign work and the AWF in terms of their marginal cost-effectiveness, as we thought we would be unlikely to find a justifiable difference between the two in the limited time we had available. We decided to recommend both as top-rated options and plan to have our GWWC Effective Animal Advocacy Fund allocate half of its disbursements to THL’s program and to ask the AWF to advise the other half.
Based on our evaluation, we’ve decided to recommend the LTFF as a top-rated fund and to allocate half of our new GWWC Risks and Resilience Fund’s budget to the LTFF.
The key findings informing our recommendation of the LTFF are:
The LTFF has high-quality applicants to make grants to, and it has a good basic process for selecting among them.
The LTFF’s significant room for funding makes it more likely that donations to it will be cost-effective.
We don’t know of any clearly better alternative donation option in reducing GCRs.[3]
We also found some areas where we think the LTFF could significantly improve:
Improving the quantity, diversity, and quality of its recorded grant reasoning.
Improving its response time for grant applications.
The issues we identified seem mainly to result from the LTFF's difficulty in maintaining and scaling its grantmaking capacity to match a significant increase in funding applications. This is something the LTFF is aware of and is working to address.
We found no clear, justifiable reasons for a donor's extra dollar to be better spent at the LTFF than at Longview's Longtermism Fund (LLF), or vice versa. As a result, we recommend both as top-rated funds and plan to allocate half of our Risks and Resilience Fund's budget to each until our next evaluation. We did outline several differences between the LTFF and the LLF so that motivated donors can decide for themselves which best fits their values and starting assumptions.
Based on our evaluation, we’ve decided to recommend the LLF as a top-rated fund and to allocate half of our new GWWC Risks and Resilience Fund’s budget to the LLF.
The key findings informing our recommendation of the LLF are:
Longview has solid grantmaking processes in place to find highly cost-effective funding opportunities.
In the grants we evaluated, we generally saw these processes working as intended, which makes us optimistic about the cost-effectiveness of the grants.
The scope and structure of the LLF is — by design — consistent with what we are looking for with our Risks and Resilience Fund: a fund that makes grants that are relevant and understandable to a wide variety of donors looking to reduce global catastrophic risks.
We don’t know of any clearly better alternative donation option in reducing GCRs.[3]
We found no clear, justifiable reasons for a donor's extra dollar to be better spent at the LLF than at the EA Long-Term Future Fund, or vice versa. As a result, we recommend both as top-rated funds and plan to allocate half of our Risks and Resilience Fund's budget to each until our next evaluation. We did outline several differences between the LLF and the LTFF so that motivated donors can decide for themselves which best fits their values and starting assumptions.
Based on the results of this research, we made some changes to our recommended programs. We:
Removed several charities we previously recommended based on the recommendation of a "trusted evaluator", in favor of including only charities and funds based on the research of evaluators we had looked into and decided to rely on as part of our 2023 evaluations research.
Highlighted a few "additional opportunities" from the full range of our supported programs in each cause area of our best charities page, given the significantly shorter list of recommendations, so that interested donors could check out other promising opportunities. (More info about how and why these were selected.)
Added a Recommendations FAQ section to provide detailed context for our recommendations.
How we chose which evaluators to look into in 2023
We explain the reasons for choosing each evaluator within each 2023 report. Among other factors, these choices were informed by:
A survey we conducted among 16 effective giving organisations (made up primarily of the larger national fundraising organisations listed here), asking which evaluations would be most useful to them.
Where our donors give — we wanted to prioritise evaluators whose funds and charity recommendations our donors were currently supporting the most, so that the results would be most useful to them.
Our previous selection of “trusted evaluators” — we wanted to prioritise evaluators whose research had informed our previous recommendations.
Our decision to evaluate exactly two evaluators per cause area — we wanted to be able to compare where feasible, but also wanted to prioritise the quality of our initial investigations over their quantity, given the limited time we had available (which was also the preference of the 16 effective giving organisations we surveyed).
The choice of which evaluators to prioritise affects our overall recommendations for 2023. For example, because we have not yet evaluated Founders Pledge, we have not used its research to inform our recommendations so far. This lack of comprehensiveness is one of our project’s initial key limitations. We try to partially account for this by highlighting promising alternatives to our recommendations on our donation platform, and providing resources for donors to investigate these further.
The process for our 2023 evaluations
As discussed above, a key goal for our evaluations project was to decide which evaluators to rely on for our recommendations and grantmaking. We were additionally interested in providing guidance to other effective giving organisations, providing feedback to evaluators, and improving incentives in the effective giving ecosystem.
For each evaluator, our evaluation aimed to transparently and justifiably come to tailored decisions on whether and how to use its research to inform our recommendations and grantmaking. Though each evaluation is different — because we tried to focus on the most decision-relevant questions per evaluator — the general process was fairly consistent in structure:
We began with a general list of questions we were interested in, and we used this list to generate some initial requests for information.
After receiving this information, we tried to define the decision-relevant cruxes of the evaluation: the questions that, when answered, would guide our decisions on whether and how to rely on the evaluator for our recommendations and grantmaking. These differed for each evaluator. In some cases, they were quite broad (e.g., "Do the Animal Welfare Fund's review and grading processes look like they are tracking marginal cost-effectiveness?"), and in others, they were a set of more concretely defined research questions (as was the case for Happier Lives Institute).
We shared these cruxes along with some additional (more specific) questions with evaluators, asking for feedback — in some cases, changing our cruxes as a result.
We then investigated these cruxes — asking further questions and iterating on our overall view — until we felt confident that we could make a justifiable decision. We intermittently tried to share our thinking with evaluators so that we could receive feedback and new information that would help with our investigation.
Note: we were highly time-constrained in completing this first iteration of our project. On average, we spent about 10 researcher-days per organisation we evaluated (i.e., two researchers each spending one full workweek), only a limited part of which could go into direct evaluation work; a lot of time went into planning and scoping, discussion of the project as a whole, communications with evaluators, and so on.