Our Weaknesses

As advocates for measurement, evaluation, and transparency in the nonprofit sector, it is essential that we ourselves are transparent: acknowledging weaknesses in our work and mistakes we have made along the way.

The organizations that participated in the research program evaluated themselves

The Maximum Impact program provides associations conducting cost-benefit studies with financial support, guidance from academic researchers, and oversight from our team throughout the program, in order to maintain the highest research standards. However, because the associations are ultimately responsible for carrying out the studies, and their capabilities vary, it is difficult to ensure that all studies achieve the same level of quality.

How do we solve this?

We engaged a panel of expert judges from Israel and abroad to evaluate the work of participating associations. The judges assessed both the evidence and the research quality, and selected the top charities based on their effectiveness and the judges' confidence in their findings. For organizations that did not receive the award, we added relevant feedback from the judges to their organization pages.

Moving forward, we will continue exploring additional ways to ensure consistently high standards across all studies. Maintaining the integrity of research processes while empowering diverse organizations remains an ongoing challenge we are committed to addressing.

We evaluated only a small number of local organizations

In the first round of our research program, we reviewed approximately 160 applications from associations and provided in-depth cost-benefit study support to 21 organizations. This is still a small sample, which prevents us from saying with certainty that we have located the most effective associations in each field.

How do we solve this?

We are indeed at the beginning of the road, and we aim to increase the number of associations that perform and publish cost-effectiveness studies across successive cohorts. In this way, we can build an overall picture of each association's effectiveness relative to other associations operating in the same field. In addition, in fields where there is an established indication of what counts as a good result, for example the cost of preventing a unit of carbon dioxide emissions, our panel of judges cited the accepted benchmark.
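The kind of benchmark comparison described above can be sketched as a simple calculation. The numbers below are entirely hypothetical and are used only to illustrate the logic, not to represent any real study or accepted benchmark:

```python
# Illustration of comparing a program's cost-effectiveness to a field benchmark.
# All figures here are made up for the example; none come from real studies.

def cost_per_unit(total_cost: float, units_of_impact: float) -> float:
    """Cost-effectiveness: money spent per unit of impact achieved."""
    return total_cost / units_of_impact

# A hypothetical program that spent $500,000 and prevented 10,000 units
# (e.g. tonnes) of carbon dioxide emissions.
program = cost_per_unit(500_000, 10_000)  # 50.0 dollars per unit

# A hypothetical accepted benchmark of $100 per unit prevented.
benchmark = 100.0

# The program meets the benchmark if its cost per unit is at or below it.
print(program <= benchmark)
```

A lower cost per unit means more impact per dollar, so a program at or below the benchmark compares favorably with the accepted standard in its field.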

Quantitative measurement does not capture the whole picture

There may be very impressive organizations whose quantitative research does not capture all the social impact of their actions, or for which it is very difficult to produce quantitative research on the results of their interventions. Our methodology, which is mainly quantitative, may miss these associations.

How do we solve this?

First, this is a real limitation of this kind of research. There are ways within the research design itself to address the problem, which we apply when planning the structure of the studies. We also do not avoid measuring "difficult to test" associations: in the first cohort we examined an alternative-protein accelerator as well as awareness-raising programs, areas that are considered hard to measure.

More broadly, we acknowledge the limits of quantitative assessment, but we believe that organizations with hard-to-measure impacts carry the burden of demonstrating meaningful change. If a social program's theory of change is not measurable, the risk of operating ineffectively increases, and the data indicate that some programs do operate this way. Therefore, as funders, we should demand more measurement in social sectors, while also recognizing that quantification alone does not fully capture impact and that there is room for additional context in decision-making.