I have been a part of Effective Altruism (EA) since 2015. During that time I have attended multiple EA conferences, co-founded the EA chapter in Tübingen, and given many talks on different EA-related topics, and I am generally convinced that EA is a good idea. This post contains a short introduction to EA to give people who have not yet heard of the concept a chance to understand its basic ideas. I then talk about the benefits of the movement and clarify some misconceptions about EA that I have encountered repeatedly when talking to others.

What is Effective Altruism - an Introduction

There are two paths through which EA originated. The first is one of moral philosophy: Peter Singer initiated the movement with the famous ‘drowning child’ thought experiment. Assume you are walking home from work and pass a pond. You see a small child that is about to drown, shouting for help. Nobody else is around who could help, and the pond is so shallow that there is no risk involved for you. Most people would agree that there is some kind of moral duty to save the child from drowning; walking away would be a wrongful omission, and we would agree that a person who simply ignored the child would be doing something wrong. Now assume that you are walking home from work in your fancy new suit, which cost you 3000 dollars. Same situation - a child is about to drown in the pond. However, when you save the child, your fancy suit is ruined and the 3000 dollars are gone for good. Still, most people would say that it is a moral duty to save the child. I mean, who cares about 3000 dollars when a life is at stake, right? It is important to note that this intuition of a moral duty holds independent of the identity of the child. We would save the neighbor’s child but also a complete stranger.

Singer argues that, in reality, we face this exact decision every time we buy something. We know that there are people, for example in Sub-Saharan Africa, who die every day from preventable diseases such as malaria and who could be saved through a donation to the right organization. However, we often decide to ignore their suffering and death and rather buy a new luxury good that is neither an investment in the future nor a necessity for a decent life in the West. We buy far more clothes than we need, the latest technological gadgets, a car that is more expensive than necessary, and so on. The large physical and emotional distance, and the absence of the direct emotional reward of jumping into a pond compared to donating, seem to make it easy for us to ignore this suffering and death most of the time. Peter Singer argued that this is wrong, and many agreed and joined the movement. It was estimated that a donation to the Against Malaria Foundation could save a person’s life at the cost of a 3000 dollar donation, thereby bringing the drowning child thought experiment into the real world. However, I need to point out that this estimate has since been updated to around 6000 dollars due to better disease models and the correction of some original underestimations. I guess this doesn’t change much about the philosophical point in question - I mean, who cares about a 6000 dollar suit when a life is at stake, right?

The second path argues through an economic analysis of giving. In 2006, Holden Karnofsky and Elie Hassenfeld decided to donate a large portion of the money they had earned as hedge fund analysts. Looking at the world through the lens of investment managers, they wanted to get a high altruistic return on their donation. They therefore started to research different organisations, but none of them analyzed their own effectiveness or had sufficient data collection methods. So in 2007 they quit their jobs and founded GiveWell, a website that compares different charities according to their effectiveness. It is often referred to as ‘the spreadsheet method of giving’ since it uses lots of data and statistical models to compare organisations. Built on these two pillars, the EA movement grew very fast and by now has many members and affiliated organisations. They ask questions such as: “What is the most effective way to alleviate suffering?”, “How should we measure happiness?”, “Which causes should be prioritized?”, “Is Artificial Intelligence becoming a risk for sentient life, and what can we do about it?”, “How can we prevent governments from engaging in large-scale conflicts?” and many more.

The difference in effectiveness between interventions is often illustrated by the guide dog example. Training a guide dog and its new owner costs between 40,000 and 50,000 dollars in the US. It was often claimed that preventing a person from going blind from a disease called trachoma costs between 20 and 200 dollars. That would imply a difference in effectiveness of at least a factor of 200 - probably even more, given that preventing blindness is better than making a blind person’s life more convenient with a guide dog. However, later research on the topic shows that preventing trachoma is significantly more expensive than originally assumed. On the other hand, GiveWell estimates that cataract surgery costs around 1000 US dollars and significantly improves vision. This still implies a difference of a factor of at least 40. Given differences of this size, it is very important to consider effectiveness in general, as many more lives could be improved or saved through better choices in giving.
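
To make the arithmetic explicit, here is a minimal back-of-the-envelope sketch in Python. The cost figures are the rough ones quoted above, and the example budget is invented purely for illustration; none of this reflects GiveWell’s actual cost-effectiveness models.

```python
# Back-of-the-envelope comparison using the rough figures quoted above.
# The numbers and the example budget are purely illustrative.

costs_per_person_helped = {
    "train a guide dog": 45_000,   # midpoint of the 40,000-50,000 USD range
    "cataract surgery": 1_000,     # rough estimate cited above
}

budget = 90_000  # hypothetical donation in USD

for intervention, cost in costs_per_person_helped.items():
    people_helped = budget / cost
    print(f"{intervention}: ~{people_helped:.0f} people helped with {budget:,} USD")

ratio = costs_per_person_helped["train a guide dog"] / costs_per_person_helped["cataract surgery"]
print(f"difference in cost per person helped: roughly a factor of {ratio:.0f}")
```

Even with very crude numbers like these, the gap between interventions is large enough that the ordering is unlikely to change.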

As well as providing a moral justification for giving and showing that effectiveness matters, EA has also developed some heuristics for evaluating an intervention: scale, tractability and neglectedness. Scale characterizes the impact of a given intervention according to the desired metric, i.e. the potential amount of suffering reduced, the potential number of people affected, etc. In general we want to help as many individuals as possible, and therefore a large scale is something we should strive for. Organisations’ effectiveness, much like that of private companies, also scales with the size of their interventions, as fixed costs make up a smaller share of the overall budget and the possibility of specialization increases.

Tractability describes our ability to track the results of a given intervention, i.e. to measure the number of lives saved or to quantify the amount of suffering reduced. Note that this is a hard problem in itself. However, it gets even harder when thinking about counterfactual effectiveness. For example, we might find that an intervention decreased the average child mortality in a given region over a ten-year period. To make sure we can attribute this effect to the specific intervention, we need to control for all other influences on child mortality, such as economic growth, general improvements in health institutions or the introduction of other health policies. Note that this is much harder than measuring just the raw effects of an intervention. In practice this is often implemented using randomized controlled trials (RCTs), i.e. the intervention is applied in one village, but the result is additionally measured in another village with similar circumstances to approximate the counterfactual (a toy example of this logic follows below).

Neglectedness describes how many other people are working on a particular problem and how much funding it receives. If both of these are low, the problem is called neglected. Neglectedness is not a desirable property in itself; it is mainly a useful proxy for other advantageous characteristics: a) Low-hanging fruit: when a problem has not yet received a lot of attention, it is often easier to make quick progress. b) If a problem receives a lot of attention, it often receives a lot of funding from other sources. Natural disasters such as earthquakes often get high media coverage, leading to many private and public donations. Many interventions also show diminishing returns, especially in disaster relief, for example because small organisations are not able to handle the sudden influx of donations and therefore allocate them inefficiently. This means that the impact per dollar invested is very small compared to more neglected problems. To actually evaluate a given charity or cause area, there are many other and more complicated factors to think about. Scale, tractability and neglectedness, however, provide a good mental framework to start with.
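
The toy simulation below illustrates the counterfactual logic behind an RCT-style evaluation. The villages, mortality rates and the 30% effect size are all invented for illustration; real evaluations involve far more careful statistics and many more controls.

```python
# Minimal sketch of the counterfactual logic behind an RCT-style evaluation.
# All numbers are invented for illustration.

import random

random.seed(0)

def simulate_village(n_children: int, baseline_mortality: float, effect: float = 0.0) -> float:
    """Return the observed child mortality rate in one village."""
    deaths = sum(random.random() < baseline_mortality * (1 - effect) for _ in range(n_children))
    return deaths / n_children

# Both villages share the same background conditions (e.g. slow economic growth);
# only the treatment village receives the intervention.
control = simulate_village(n_children=5_000, baseline_mortality=0.08)
treatment = simulate_village(n_children=5_000, baseline_mortality=0.08, effect=0.30)

# The control village approximates the counterfactual: what would have
# happened in the treatment village without the intervention.
estimated_effect = control - treatment
print(f"control mortality:   {control:.3f}")
print(f"treatment mortality: {treatment:.3f}")
print(f"estimated lives saved per 1,000 children: {estimated_effect * 1000:.1f}")
```

The point of the comparison village is that simply observing mortality fall in the treatment village tells us little; only the difference to a similar, untreated village isolates the intervention’s own contribution.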

Why do I like the movement/philosophy?

There are many reasons why I like the movement but I want to highlight three in particular.

1. Cause neutrality

Cause-neutrality means selecting causes based on impartial estimates of impact. Whether a cause area such as fighting climate change, preventing animal suffering, distributing bednets to fight malaria or addressing mental health issues is worth investing money in depends entirely on its effectiveness, i.e. the amount of suffering reduced per unit of resources. I think this is a desirable property because it removes the politics from helping. Foreign aid is often allocated for other reasons, e.g. to influence a certain regime or to pressure it with the threat of withdrawal. I prefer to simply ask the question ‘Where can we help the most?’, independent of ideological alignment or other political interests, because the consequences of these political games are paid for with the lives of the poorest of the poor.

2. Scientific approach

EA is an evolving movement. This means that all parts of the movement try to constantly update their perspectives according to the latest research. This can be seen in the yearly reports of GiveWell, but also in the changing recommendations of 80,000 Hours, a website that gives career advice to help solve the world’s most pressing problems. What I find especially refreshing is the community’s openness about errors made in the past. There is no desire to hide past mistakes, but rather a culture of publishing them transparently, i.e. explaining why the mistake was made and how it was corrected by new insights. I think this can be seen especially clearly in the case of 80,000 Hours. Originally, they recommended ‘earning to give’ to many people, i.e. finding a high-paying job and donating most of the earnings. Over time they realized that the key bottleneck for EA is not financial but human resources, for example people working in governance or within EA-affiliated organizations. They therefore acted on the new information and changed their recommendations. They even explicitly publish and update a section on errors and mistakes they have made in the past (see here). I want a movement that embraces a culture in which errors are made transparent so others can learn from them, instead of hiding them to preserve an image of perfection that is unlikely to be true anyway.

Additionally, the community embraces a scientific mindset and is very willing to rate contributions on their content and less on their style or on the authority of the speaker. I witnessed this at my first EA conference in 2015. Objectively, I probably had very little to contribute to the conversations, but my ideas were still taken seriously. Discussions were mostly focused on making progress and not on the social credit of the speakers, i.e. I mostly felt like everyone could contribute an idea, and when it turned out to be a bad one it was seen as ‘a reduction of the hypothesis space’ rather than ‘just a dumb person’. Overall, I really enjoy the scientific mindset within the movement and am astonished every time by how easy it is to dive into deep conversations with people you did not even know ten minutes ago.

3. No time and space constraints

A lot of moral stances are not time and space neutral. Helping the local community, raising money for the children’s hospital in town or giving money to beggars is seen as something very moral. In the time domain, most people would say there is some moral responsibility towards future generations, but that it is significantly smaller than the responsibility towards people who are alive now. I personally just do not believe this to be the case. Why should the right to live of a person who happens to be born in my city weigh more than that of a person in a country I have never been to? Why is the suffering of a person in 100 or 1000 years less important than the suffering of a person right now? I can obviously understand the emotional component behind these beliefs, i.e. I know the feeling of anger and sadness and the urgent desire to fix a problem that I can see with my own eyes. But rationally, I do not understand it. If I visited Sub-Saharan Africa and saw the suffering there, I would have exactly the same feelings as at home. They would just be stronger, because the number of people affected and the degree to which they suffer is overwhelming compared to problems in the West. The fight against malaria, hunger, civil wars, corruption, etc. is just a hard struggle. I can also understand the intuition behind discounting future suffering. There is a lot of uncertainty about the future, it might be hard to approximate future people’s preferences, etc. I still think, however, that on a rational level these are invalid concerns. High uncertainty is not a reason to stop doing something, it just informs our strategy, i.e. we might need to diversify and use multiple different approaches. Approximating future people’s preferences might be hard on a very detailed level, like whether they prefer tea or coffee, but we are talking about basic human desires that are unlikely to change over time, such as not being hungry, not having diseases, having some rights and living in a house that has a roof and is not underwater due to rising sea levels. Therefore, I think a good moral framework should not include time and space constraints.

I would go even further - I think a good moral framework must not include time and space constraints. Both the time and the place at which you are born are arbitrary; therefore there is no reason to prioritize people who live close to me over those who live far away, or people who are alive now over people who will be alive in the future. I want to pump the intuition a bit further: most people think it is unfair that past and current generations burn fossil fuels in ways that hurt current or future generations. If we were able to go back in time and show people the long-term consequences of their actions, i.e. the burning forests and melting ice caps, then they would be at least to some extent morally responsible for these consequences. By the same logic we, as the current generation, are responsible for the suffering we inflict on future generations. The fact that we currently do not know exactly what the consequences of our actions will be does not mean we do not need to consider them, but rather that we need to make our predictions more accurate, i.e. build more elaborate climate models and gather more data. The analogy of the drowning child still holds - this time, however, we were the ones creating the pond by melting the ice caps in the first place.

Misconceptions and Clarifications

When talking to others about the idea of Effective Altruism, I often encounter some misconceptions about the movement. Upon clarification, these people either become more open to the idea of EA or turn out to have other reasons or concerns regarding the movement. If people end up not convinced by EA, I at least don’t want them to base that decision on a false understanding.

Strawmanning EA

I have witnessed the following scenario multiple times: when introducing someone to EA, they think about it for a minute and then answer something like: “EA does not consider X, therefore the movement is fundamentally flawed and I don’t have to think about it anymore. Q.E.D.”, where X is one of the questions I try to answer below or some other superficial criticism of EA. Before discarding EA as nonsense after a short time of thinking about it, I would like you to consider the following: Effective Altruism is a global movement with a lot of supporters, many of whom are very intelligent people who have thought about EA for a long time. Some of them even work full-time in EA-affiliated institutions. The particular criticism you came up with in 15 minutes has probably been raised before, and has either found a satisfying answer or is currently being thought about by someone within the movement. This is, in my opinion, one of the great advantages of EA: since it is cause-neutral and ever evolving, good criticism is quickly incorporated into the debate and solutions are sought. With this remark I do not want to stifle criticism of EA. All I want to say is the following: if you have found some criticism of EA, consider for a moment whether it might already have been addressed within the movement before disregarding the entire philosophy. If you do not find a satisfying answer, ask members of the community online or offline. Usually, they are very open to discussion and happy to help.

If everyone followed EA, why would the world not collapse?

Usually the first question I get after introducing EA is along the lines of: “If everyone was an EA and we all donated our money to Africa, would the rest of the world not suffer even more because they had no functioning medical system anymore?”. I think it is useful to consider EA as a global optimization problem: we want to reduce the suffering of living beings in the universe. A redistribution of wealth at this scale is very likely not the correct solution to that problem and therefore not something that EA would support. In any case, the EA community has not yet reached (and will probably not reach for a while) the scale at which questions about redistribution of this magnitude even arise.

If I am a convinced EA, do I have to spend 100% of my time on it?

I think this is a depressingly tempting thought, and I know many EAs who struggle with it on a daily basis. During my undergrad I was on the brink of burnout because of these beliefs, and they led to a spiral of negative thoughts and effects. I thought that whenever I failed to achieve my goals, this would translate into a worse career outcome and hence less reduction of suffering. This made me feel bad and anxious about grades and other achievements in life. Because of these thoughts my grades got worse, and I got even more worried. Breaking the spiral cost a lot of effort but fortunately improved my mental health greatly. From this time and from many discussions with people having similar thoughts, I have learned the following lessons:

  1. Really take care of your mental health. Being on the brink of burnout every day is bad both for you and for your ability to help others.
  2. Your time is valuable, but investing it probably has some diminishing returns. Working 80 hours a week is not a necessity for being a good EA, and if it has a negative effect on you, take more time for other activities. Also, being an EA is not a full-time job; donating some money to effective charities is always a net gain, even if you do not want to give a larger portion of your money or time to EA causes. Importantly, being moral is continuous, not binary. Donating some money, reducing meat consumption to some extent, etc. are all net beneficial.
  3. Don’t do something that is incompatible with your preferences, even if it has higher expected altruistic returns. It is very hard to motivate yourself to do something you really dislike, and the price will be paid by your mental health. Additionally, most people excel at what they like and can therefore have more impact with their work.
  4. 80,000 Hours has much more detailed advice on this, and I recommend visiting their website or listening to their podcast.

Does EA imply utilitarianism?

Not necessarily. Essentially, you could plug in any metric and try to find out which actions would most effectively fulfill it over time and space. For example, we could ask what kind of action maximizes the number of stamps in the universe and act according to that goal. In practice, however, many people within the EA community believe in some form of consequentialism, i.e. the view that the value of an action is measured by its consequences. I think this is a result of a desire for measurability on two levels. First, it is easier (though objectively still very hard) to approximate the outcome of a certain action than to approximate non-consequentialist notions such as the purity of intentions. How do I measure purity of intentions? What if people deceive themselves or lie during self-reporting? Second, it is easier to compare two different actions when considering consequences. We can try to answer questions such as “Where is more suffering alleviated?” or “Where do fewer people die?”. For other ethical systems this becomes harder. Questions such as “Which action stems from purer intentions?” are often harder to answer because intentions differ so much from individual to individual, and it is easier to deceive ourselves about the purity of our intentions when something benefits us. In principle, however, it would be possible to integrate most ethical systems into the EA framework. The EA community itself is also far from settled on which particular form of consequentialism is the correct one. How to measure suffering and happiness correctly, and how to weigh suffering against happiness, are two of the most debated questions within the community. I personally am very utilitarian. Other concepts such as fairness, justice or agency matter to me as well, but I am not yet sure whether I think they have inherent value or whether they are just good proxies or heuristics for overall happiness. For example, I am not sure whether I care about redistribution of wealth because I think it is ‘more fair’ or solely because I expect it to lead to better utilitarian outcomes.

How can you even measure suffering?

That’s the big question. If we could answer it, EA would be much easier. I think the first notion we need to get rid of is the idea of a perfect ‘measurement’. Essentially, all measurements are just approximations of some quantity in the real world; our approximations for temperature or radiation are simply far more accurate than those for happiness and suffering. While this problem itself is an open research question, there are some ways to measure the effectiveness of a change in the world, such as a medical intervention or the implementation of a policy. Usually, disability-adjusted life years (DALYs) or quality-adjusted life years (QALYs) are used. Given the problem of qualia, i.e. that my subjective mental states are not perceivable by anyone else, approximating suffering or happiness will likely remain a hard problem for a long time.
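
To give a feeling for how QALYs are used, here is a minimal sketch. The quality weights, the time horizon and the cost are made-up numbers for illustration, not real estimates for any intervention.

```python
# Toy illustration of how quality-adjusted life years (QALYs) turn changes in
# length and quality of life into one comparable number. All numbers below
# are invented for illustration only.

def qalys_gained(years: float, quality_before: float, quality_after: float) -> float:
    """QALYs gained = years affected * (quality weight after - quality weight before),
    where quality weights range from 0 (as bad as death) to 1 (full health)."""
    return years * (quality_after - quality_before)

# Hypothetical intervention: improves a person's vision for their
# remaining 20 years of life.
gained = qalys_gained(years=20, quality_before=0.6, quality_after=0.9)
cost = 1_000  # assumed cost of the intervention in USD

print(f"QALYs gained: {gained:.1f}")              # 6.0
print(f"cost per QALY: {cost / gained:.0f} USD")  # ~167 USD
```

Expressing very different interventions in cost per QALY (or per DALY averted) is what makes them comparable at all, even though the underlying quality weights remain rough approximations.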

If we help people in poor countries in the present, do we not create more suffering in the future?

The argument goes: “If we donate to AMF and their bednets reduce the number of people dying from malaria, more people who can have children survive. The population of that country grows even faster, and future natural disasters like droughts will cause big famines or increase the likelihood of conflicts over water and food, so more people die.” I think this is a valid point and a variable we have to control for. However, the reasons why people have many children in Sub-Saharan countries tend to weaken as quality of life improves. a) People have many children because many of them die during childhood or early adolescence and can therefore not provide for their families; having many children is a risk-diversifying strategy. Fewer diseases and higher life expectancy due to treatments and interventions therefore reduce the need for many children. b) Fewer diseases on average lead to longer participation in school and in the workforce, lifting people out of poverty and increasing their quality of life. Usually, a higher quality of life correlates strongly with a decrease in the number of children, because of a decreased reliance on them as a safety net (see here for example).

EA embraces Capitalism, but to be truly EA we need to fight for Communism

There are two questions here that need a response. First, does EA embrace Capitalism? Even though EA suggested ‘Earning to Give’ in its early days, the movement has shifted away from that position to some extent and now recommends it only in a small number of cases. This is mostly because human resources are scarcer than financial resources for EA-related problems. ‘Earning to Give’ was, and is, in my opinion the main reason people associate EA with capitalism. Additionally, most EAs that I know are probably in favor of more redistribution from the top to the bottom, although tax exemptions for donations are something most EAs do support. So overall, I do not think EA really embraces capitalism. Second, is Communism a better system? Maybe, I really do not know. Considering both the history of communism and some theoretical problems, such as a good portion of egoistic behavior in human beings, I am far too uncertain to focus my entire effort on changing the system. Additionally, a group of EA’s size is probably not able to bring about system change on a grand scale and should therefore focus on smaller steps.

Why do you want to kill my grandma?

I don’t!! However, in general, I would prefer to save the lives of many individuals in Sub-Saharan Africa over prolonging the life of one individual in the West if these two options were mutually exclusive. In practice, I rarely face that trade-off, since Western countries generally have functioning health care systems (with the possible exception of the USA).

Why do EAs often pretend to save the world while they have not actually done anything yet?

Many EAs, especially those who are still studying, are often accused of not actually doing anything while claiming the moral high ground. To be honest, I think there is a very small number of people within the movement who use EA for social credit and to calm their own conscience. However, the vast majority of EAs either do something already or position themselves for high-impact positions. Many EAs do not eat meat, donate part of their student budget, organize meetups and talks in their local chapters, or proselytize to their family and friends. Some just focus on their studies or career to position themselves for later impact. For some people this is counterintuitive, because they think being part of a social movement has to have a certain vibe to it, like getting a smile from the kid with cancer you helped or standing in the rain to collect money for your local school. I do not think this is necessary. As an EA, in my opinion, one should strive to maximize the expected reduction of suffering. If this is done through direct action like movement building during university, that is a good option. But since the talents of different individuals vary strongly, it may well be that someone is more effective focusing on their research or career in order to reach a high political or social position later in life. Say, for example, that someone studies very hard for ten years to maximize their chances of becoming an influential person in the White House. In the four years of their position, they make American health care 1 percent more effective by replacing some brand-name medicines with generic drugs. What seems like a small change frees up hundreds of millions of dollars that can be used to address the health care issues of a much larger number of people. From the outside, this person was ‘doing nothing’ for ten years even though they ended up having a huge influence on a large number of people.

Further reading

  1. Wikipedia
  2. 80000hours
  3. Effective Altruism Foundation
  4. Future of Humanity Institute
  5. GiveWell
  6. Peter Singer: Practical Ethics
  7. William MacAskill: Doing Good Better

One last note:

If you have any feedback on anything (e.g. the layout or my opinions), please tell me in a constructive manner via your preferred means of communication.