Disclaimer: While I was writing this post, a related Forum post was published. It covers some of the same ground but is sufficiently different that I am still comfortable posting mine.
What is this post about?
It took me one second to become an Effective Altruist. I was at an introductory talk for Effective Altruism, and as soon as the presenter had explained the basic idea, it fully resonated with me. I was basically like “Ah, this makes sense. Where can I sign up? What do you want me to do with my life?”. And I know of multiple people who had similar experiences when they were introduced to EA. However, this is not the norm, a realization I gradually had to accept over the last few years. I always knew it on a rational level, but I had to learn how to adapt my actions to align with that belief.
Since that moment I have been pretty active in the EA community. I have been to conferences, co-founded the Tübingen EA chapter, and basically been “the EA guy” for many people around me. In the beginning, I was one of the very few EAs within my social circles; by now, many people agree with the core principles or consider themselves EAs. This wasn’t an effect of me selecting for EAs as friends, but rather of EA becoming more mainstream and of me having lots of discussions about it within my primary social group: competitive debating.
Over these years I have learned a lot about how to introduce people to EA, and I want to write it down in condensed form so that others can benefit from it. Many of these were painful lessons that came from me underestimating the differences between me and the person I was talking to. I think this is captured well by the anecdote of me talking to my girlfriend about EA for the first time (we weren’t a couple yet back then). I talked about the guide dog example, about the Global South, and so on. She had done a voluntary social year between school and university in which she helped disabled children in Germany, so she naturally asked what I thought of it. I answered that I thought it was a moral good but pretty marginal compared to helping people in sub-Saharan Africa, because of the difference in effectiveness. What seemed like a logical and obvious observation to me didn’t resonate with her at all. From her perspective, there was just this weird twenty-year-old, who hadn’t even helped anyone himself yet, telling her that she had basically wasted a year of her life even though she had done something that is universally seen as a moral good. By now she largely agrees with the core ideas of EA but, as you can imagine, her first associations with EA were pretty negative. Even though I gave an honest and well-intentioned answer, I largely ignored human psychology, which is a bad idea if you want to convince people.
Even though I think that all of my insights are pretty obvious in hindsight, I have seen other Effective Altruists struggle in similar ways when they tried to explain EA to their friends, families, or co-workers, so I have compiled a short list of things that helped me. If you disagree with any of the points or want to add your own, I’m interested in discussing them.
The examples of my failures in this post are partly exaggerated to emphasize certain points.
Before we start, though, it is important to remember that EAs (me included) are usually pretty weird. We usually care much more about rationality than the average person, we like to talk about theories and concepts that most people find uninteresting, and we don’t value small talk or other social rituals as highly as the average person. This is not a bad thing, just a neutral observation. The consequence, however, is that most other people will not find the same reasons convincing that convinced you of EA, or they will have concerns that you never had. You therefore have to set aside large parts of your own perception of EA when talking to others.
0. It depends on the person
While the following pieces of advice are rather general, it is important to stress that there is no single strategy that convinces everyone. Depending on the person you are trying to convince, you should adapt your approach, sometimes drastically. If they are completely new to EA, it makes sense to focus on the very basics, e.g. “helping more is better”, or to pump the intuition that there are large differences in effectiveness between organizations with the same goal. If you realize that they find all the basics completely intuitive and wonder why somebody would ever doubt them, it makes sense to move on to more advanced questions. While this might seem pretty obvious, I have seen people use the same PowerPoint slides and talking points to introduce EA to philosophy students, scientists, and their family members. So before you set out to introduce somebody to Effective Altruism, think about what they know and what they want to know. Then adapt the rest of this post to the person and circumstances.
1. Positive framing
The finding that made the largest difference to my success is simple: carrots work, sticks don’t. People usually hate it if you frame EA in a way that says anything less than devoting all available time and money to EA purposes is immoral. It is a standard that is unachievable for most people - even pretty diehard EAs - and it implies a pretty dire personal outlook on the world. If accepting EA concepts seems to imply voluntarily enslaving yourself for the rest of your life in order to be a moral person, most people will reject those concepts. It is also a rather harmful state of mind for your mental health in general. This is why I would not recommend showing the Peter Singer TED talk to people who are new to EA, and why I would recommend not talking about moral obligations at all in the beginning. Most people don’t care about moral obligations anyway when those obligations don’t confirm their previously held beliefs.
What I would recommend instead is to talk about pretty uncontroversial but powerful EA concepts. The two I would stress the most are: a) There are large differences in the effectiveness of different interventions. The guide dog comparison (training one guide dog costs about as much as curing many people of blindness in the developing world) is a good intuition pump, but many other examples show the concept similarly well. And b) there are many things every single person can do to decrease the suffering in the world in an effective way, for example through donations, an adjustment to their career, or helping out in their local chapter. I presume that the first idea works well because it is very intuitive to most people - helping more people is rarely seen as a bad thing. The second idea is important because it emphasizes their own agency. It feels motivating and actionable rather than exhausting and overwhelming like the heavy backpack of a moral obligation. Your goal is not to transform a person into a max-level EA in 15 minutes but rather to make them curious and generally interested in the movement, such that they are willing to come back or do their own research. Ideally, they should leave their first encounter with EA with the positive feeling of having “an entirely new world to explore” rather than the negative feeling of “If I don’t do X, I’m an immoral human being”.
2. Customer is King
Especially during my early days as an EA, I kept having the same misconception. A slightly exaggerated version goes like this: “EA is correct. If I lay out all the arguments, people should accept EA. If they don’t immediately accept EA, they are irrational and a lost cause.” Phrased like this, it seems very obviously bad, but I caught myself having slightly less extreme versions of this thought pattern multiple times. The reason why it’s a bad pattern is that a) there is a good chance you explained EA poorly or in a way that is not accessible to the other person, and b) it - once again - ignores human psychology to a large extent. Most people don’t rapidly update their moral frameworks, and expecting them to do so is an epistemic failure on your side, not theirs.
A more modest but also more effective framing is this: you show them the evidence and arguments for why it can be rational for a person in their circumstances to be an EA, but ultimately it is their decision. This is similar to how a salesperson treats a customer: they explain why a product fits the customer and is worth the money, and the customer then chooses whether to buy it. If they decide against it, you have to improve your product or your strategy.
Within this customer-based framework, it is also significantly easier to remember that talking to someone about EA is all about them and nothing about you. It’s not the time for you to signal what a good person you are or to claim the moral high ground to make yourself feel better. It’s about them making a decision and you are only there to help them.
Ultimately, I think it is bad to assume you are a priest who already knows the god-given truth and runs around proselytizing others. Rather, you are someone who presents a value proposition, and the other person has the agency to decide whether they agree with it or not. I found this stance very counterintuitive in the beginning. To me, it seemed clear that the goal was to get the desired outcome - more Effective Altruists - and that whether the person had agency over the process was secondary. However, in my experience, people really, really hate the feeling of not having agency, so even if you only care about outcomes, preserving agency is a good proxy for eventual success.
3. Assume their perspective
I would break this down into two parts. Firstly, it is important to engage with their current motivations. For too long, I presented too narrow a picture of EA when talking to people: mostly earning to give in the earlier days, and mostly working at EA institutions later on. While many people abstractly agreed that this would be a moral path, the switching costs are usually pretty high. There is high uncertainty about whether they are skilled enough to work at an EA institution, and they have usually already invested a lot of time in their current career path.
Since there are by now many different ways to improve the world effectively, I would stress the paths that are close to their current or desired future occupation, even if I don’t think these are the most effective options in a vacuum. If, for example, a person wants to go into a high-salary job (e.g. for social status), I would suggest that they could do a lot of good by donating part of their income, rather than suggesting they work for an EA institution. If they work or want to work in governance, I would suggest ways they could have a positive impact there. If they are a biologist, I would try to introduce them to problems in biology with high expected returns, such as working on diseases in the Global South or reducing the risk of pandemics, instead of suggesting they change their field of study. I have found empirically that this strategy is more likely to get someone interested in the ideas of EA and to lead to active changes in their life. I suspect that this is because they don’t have to trade off their current career success against EA goals but only make minor changes to their current path. Over time they might learn more and more about EA and at some point decide for themselves that their current occupation is suboptimal or ineffective at achieving the goals they deem most important.
Secondly, it is reasonable to identify which causes they currently find important. This was especially relevant for me when it comes to donations. I found it significantly harder to motivate people for effective causes that they don’t currently believe are important, even after long discussions of their importance. Even after people rationally agreed that AI safety was important, they would often still be reluctant to donate to charities working on it. It is much easier to introduce people to EA by asking them which causes they currently find important and then arguing that they should donate to charities that achieve these goals more efficiently. If people want to help children, they are often willing to switch a donation from their local children’s hospital to Helen Keller International or the Against Malaria Foundation once they understand how large the difference in effectiveness is.
If you assume their perspective, taking into account their current motivations and the causes they care about, it is more likely that they will warm to EA ideas.
4. Step by step
When I was new to EA, my outreach strategies were much too aggressive. Learning about EA had opened this magical new perspective on the world and I was immediately hooked. Additionally, I was very aware (too aware) of the fact that if I was unsuccessful in convincing someone of EA, there were real opportunity costs, i.e. an innocent person might not be helped if I didn’t convince someone to donate. Combined, these things led me to expect people to immediately adapt their worldviews once they learned about EA, and to become increasingly alarmed and desperate when they didn’t. Often the other person would then turn away because they felt pressured, or felt like I was trying to guilt-trip them.
My attempts were well-intentioned and might work in a world in which people rapidly update their beliefs upon hearing new information. Unfortunately, this is not how human psychology works in our world. So my advice would be: don’t rush it. Let the other person control the pace and never make them lose their feeling of agency. It’s not a race to proselytize them as fast as possible; rather, you are introducing them to new arguments, viewpoints, and evidence so that they can decide whether they want to adapt their current view. If the arguments are good and presented in a way that is accessible to them, they will update their views eventually. And since people usually have a lot of questions about the movement, causes, strategies, measurements, etc., it will likely take a while until they are willing to invest their time or money. Thus, you should take it step by step, answering their concerns, reducing their uncertainty, and engaging with their arguments.
Additionally, I think it is important not to overwhelm people during their first introduction to EA. If you end up at AI safety, cooperation theory, and hedonium, you are more likely to make EA seem crazy than to show that these are just logical conclusions of pretty uncontroversial ideas. So, as discussed in the first point, I would start really simple by saying that a) effectiveness matters when improving the world and b) everyone can use their own time or other resources to join the effort. Then you can answer their concerns, and once they agree with these statements you can introduce them to new ideas, e.g. the different causes or how they are broadly evaluated within the EA framework - moving forward step by step without overwhelming the other person.
5. Emphasize uncertainties within EA and take concerns seriously
I think EAs have been too overconfident in some of their statements and suggestions in the past, and I have found that this has repelled some people from joining the movement. Sometimes when I ask people what they think of Effective Altruism, I get an answer along the lines of “I generally agree with the ideas but EAs often seem so overconfident about their assessments that I don’t feel comfortable with it” or “I like the approach but some EAs have not taken my well-intentioned concerns seriously”.
While I think that the EA community is pretty open to new ideas and very willing to discuss its shortcomings, there is still a kernel of truth to these criticisms. I have witnessed this overconfident behavior in broadly two scenarios.
The first concerns moral assessments. For example, when people who are new to EA express that they hold substantial non-utilitarian moral beliefs, the answer is sometimes something like “lol, virtue ethics” instead of actual engagement. This is especially common in groups of EAs, because a strong rejection of non-utilitarian moral beliefs can signal to the other members of your in-group how strong your own utilitarian beliefs are. I think there are lots of arguments in favor of utilitarianism that should be explained in such a situation. But I also think that moral philosophy is pretty complicated, and it’s not as if EAs have somehow solved all the problems with utilitarianism. So just acknowledging that there is moral uncertainty and that we are merely operating on our current best estimates is already helpful.
The second concerns epistemology. We currently don’t know what suffering is, which long-term strategies are most effective, or which career path is optimal for any given individual. We have formed hypotheses about all of these questions, some much better than others, but in all cases we still have major open questions and large uncertainty. So we should say that the current information is not final, broadly quantify our uncertainty, and express that we welcome discussion. When people ask, for example, how we quantify suffering, we should state that there are no perfect measurements and then explain why some are better than others, e.g. by showing the Happier Lives Institute’s research on the issue. Pretending that we know more than we do not only turns people away because it seems arrogant; the explicit goal of quantifying and expressing uncertainty is a major selling point of the movement in itself.
Lastly, I just want to emphasize how important it is to take people seriously, no matter how dumb you might find their arguments. I have lost count of how many times people have questioned whether you can quantify the value of a human life, whether suffering is quantifiable at all, whether donations only lead to dependence on the West, etc. But I have also found that calmly presenting the arguments against these criticisms persuaded most people who brought them forward. The fact that people were willing to answer my naive questions, suggest resources, and engage in discussion when I first turned up at an EA conference led me to believe that the movement was more interested in substance than in signaling, and it was one of the main reasons I decided to stick around. If people get the opposite impression, e.g. that EAs just like to claim the moral high ground without taking criticism seriously, many won’t bother to become a part of it.
I have been told that parts of this blog post can also be interpreted as arrogant because I’m so confident that EA is correct. While I’m very convinced that the core tenets of EA are true, i.e. that helping more is better and that suffering matters in non-human entities as well, independent of location and time, it is important to me that this confidence is perceived as the result of a reasoning process that I’m happy to explain to others, rather than as arrogance. A climate scientist, for example, can be very convinced that climate change is real because they have thought about the topic for a long time and seen the data and evidence; they can respond to a skeptic with confidence and explain their reasoning without being perceived as arrogant.
6. Miscellaneous
There are two miscellaneous things I have tried in my social circles that worked quite well.
The first is giving games. Over the last couple of Christmases, I have played giving games with my family: I gave them money (e.g. 100€) and they had to split it between four different organizations (two working on global health, one on climate change, and one on animal suffering). The money included a 50€ bill, so they couldn’t just split it evenly and had to actively engage in a discussion. The discussions were very interesting to watch and scratched the surface of cause prioritization, the effectiveness of individual organizations, and the capacity of different beings (e.g. animals vs. humans) to suffer. By now, many of my family members have started to rethink their donations, and my cousin even donates a substantial amount to charities recommended by GiveWell every year.
The second is altruistic bets. You bet with your friends on a specific outcome, and the loser has to donate a certain amount to an effective charity. This combines three nice things: a) It improves your forecasting and better calibrates your beliefs about the world; b) It can be used as a tool to nudge yourself - if you want to do more push-ups, just bet with a friend that you will be able to do X by the end of the month; c) You donate money to an effective charity on a regular basis. Especially if the other person would not have donated without the bet, you have effectively doubled your donations, provided you bet at true odds.
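To illustrate the doubling with made-up numbers: suppose you and a friend each stake 50€ on an even-odds question, and the loser donates their stake. Your expected cost is 0.5 × 50€ = 25€, but every bet produces a 50€ donation from whoever loses. So for each euro you expect to pay, two euros reach the charity, assuming your friend would not have donated otherwise. The same arithmetic holds for any bet at true odds: both sides then have equal expected losses, so the expected total donation is twice your expected cost.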
Conclusions
Ultimately, many of the realizations I had and the lessons I learned are not very surprising or revolutionary. However, the fact that I have watched myself and other EAs commit simple mistakes over and over again when introducing others to EA indicates that this is one of those topics where knowing what you should do doesn’t mean you actually do it. As with changing bad habits, the first step is to reflect on your own behavior, then compare it to the desired behavior, and finally slowly adapt in the right direction.
To summarize my main take-aways:
- Present a positive framing of EA, i.e. doing more is good, not that doing less than everything possible is bad.
- They have to keep their agency, i.e. you present arguments and evidence in favor of EA and they can choose whether they find them plausible and want to know more. They are not a lost cause or immoral if they don’t immediately jump on board.
- Pace yourself. In hindsight it might seem like you were convinced of everything EA entails on day one, and maybe you even were. But most people are not. So go slow. Don’t overwhelm them and go step by step.
One last note
If you want to be informed about new posts, you can subscribe to my mailing list or follow me on Twitter.
If you have any feedback on anything (e.g. layout or opinions), please share it constructively via your preferred means of communication.