What is effective altruism?
Effective altruism (or “EA”) is a specific cluster of beliefs and ideas about how to do the most good possible with your money and career. EA was originally focused exclusively on finding and giving to the most effective charities, those that could save the most lives per dollar spent, but the idea and movement have since grown to focus significantly on finding and promoting careers that can have a disproportionately good impact. Many people interested in EA (myself included) still donate to charities that we believe are disproportionately effective at doing good. The GiveWell Top Charities Fund is one of the most popular places to donate in EA.
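To make the “lives per dollar” comparison concrete, here is a minimal sketch of the kind of cost-effectiveness arithmetic charity evaluators like GiveWell perform. The interventions and figures are hypothetical placeholders I made up for illustration, not anyone's actual estimates:

```python
# Toy cost-effectiveness comparison. The interventions and numbers below
# are hypothetical illustrations, not actual charity-evaluator estimates.
interventions = {
    "Intervention A": {"cost_usd": 1_000_000, "lives_saved": 200},
    "Intervention B": {"cost_usd": 1_000_000, "lives_saved": 40},
}

for name, data in interventions.items():
    cost_per_life = data["cost_usd"] / data["lives_saved"]
    print(f"{name}: ${cost_per_life:,.0f} per life saved")

# With these made-up numbers, a dollar given to Intervention A does about
# five times as much good as a dollar given to Intervention B. Finding and
# funding the "Intervention A"s is the comparison EA started with.
```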
What does it mean to do the most good? EA specifically means good from a broadly consequentialist perspective: an action’s goodness should be measured by its consequences rather than the intentions of the actor, and the more good consequences an action causes, the better it is. EA attempts to serve as a community where what counts as the most good can be rigorously debated under a consequentialist framework and where people can support each other in acting on the conclusions of that debate. On this view, an action can be roughly measured by how much suffering it alleviates, how much wellbeing it creates, and how many conscious beings it affects. The specific form of consequentialism that most EAs endorse was articulated (among other places) by Derek Parfit in his book Reasons and Persons.
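One crude way to formalize this picture (my gloss, not a formula from Parfit): the value of an action is the summed change in wellbeing across every conscious being it affects,

$$V(a) = \sum_{i \in B(a)} \Delta w_i(a)$$

where $B(a)$ is the set of beings the action affects and $\Delta w_i(a)$ is the change in being $i$'s wellbeing (suffering alleviated counts as a positive change). Much of the debate described above is about how to estimate the $\Delta w_i$ terms and who belongs in $B(a)$.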
Obviously there are a lot of serious problems with trying to do the most good. It is extremely hard to predict the full consequences of any action. Any individual action exists in a broader network of actions and systems, and it may do more harm in total in ways that evaluating actions one at a time misses. It could be that the political system we live under is so bad that one-off donations don’t actually do good (a common socialist criticism of EA), or simply that a series of decisions that each individually maximize expected good can collectively lead to ruin (the St. Petersburg Paradox is a classic example). Defining “wellbeing” and “suffering” impartially is extremely difficult and may be impossible, though there are interesting attempts. There are plenty of more abstract problems, from questions about whether different animals are conscious and capable of suffering to whether consequentialism can still work if we live in an infinite universe. Despite the challenges of acting on consequentialist principles, EA has landed on three specific areas as especially promising places to do a disproportionate amount of good from a consequentialist perspective: global health and development, animal welfare, and preventing existential risks to humanity.
To learn more, you can read the official intro to effective altruism, the EA Handbook, or Doing Good Better. I also really like this forum post on EA as a question. The 80,000 Hours website (an EA group focused on helping people find EA-aligned careers) has cause area profiles for different issues EAs focus on.
What’s the purpose of EA DC?
Washington DC has the third-largest population of active EAs in the country (after NYC and the Bay Area). EA DC exists to connect active EAs in DC with each other, help DC EAs access highly impactful careers, and push other organizations in DC to think and act more along EA lines. Read more about us on our website.
How I got into EA
In 2009 I was interested in philosophy and saw this video of Peter Singer talking about ethical problems with spending money and charity. It made me more interested in consequentialism and global inequalities. Both ideas stuck with me.
In 2013 I was in college studying physics and philosophy. I was mainly focused on analytic ethical philosophy and found Derek Parfit’s Reasons and Persons. I’ve forgotten exactly how, but shortly after I started reading more about Parfit, EA came onto my radar, and I excitedly began to read a lot about it. At the time EA was pretty exclusively focused on earning to give, which wasn’t something I saw myself doing, but I was excited to follow the community from afar and donate where I could; I wasn’t sure what additional value I could add by being more involved. Once I started working I began donating to EA charities. As EA grew and developed I was excited to see it focus more on animal welfare and catastrophic risk. When the 80,000 Hours podcast started I began listening regularly. The local community in DC was pretty quiet until around 2020, when an organizer brought a lot of people together and I started to attend events. I had the time and energy to volunteer running events, and was eventually offered the role of part-time, and then full-time, paid director of EA DC.
EA resources I recommend
Answers to common questions I get about EA
Isn’t this all kind of goofy?
Yes. EA involves a lot of goofiness (like worrying about electron suffering or arguing that sleeping in is unethical because it robs your future self of positive experience), but I think the good things happening in EA more than balance out the weirdness. Ultimately, I have never been involved in a community whose people didn’t hold a large number of beliefs that I thought were goofy.
What good has EA actually done?
GiveWell has probably saved around 150,000 lives. 25% of all funding for factory-farmed animal welfare comes from EA charities and funders. I’m agnostic about EA’s contribution to preventing catastrophic risks, but I’m pretty confident that people should be more worried about AI and biosecurity than they currently are.
Is EA secretly entirely focused on AI?
No. It’s true that AI is generally the main issue the most active EAs spend their time and energy on, but in total the community spends a lot more money on global health, and in the 2020 EA community survey only 14% of active EAs listed AI as their top cause area.
Is EA inherently utilitarian?
No, but it is heavily influenced by consequentialist and utilitarian thinking, and a majority of EAs identify as utilitarian. In my opinion it’s very easy to justify working on big problems relating to suffering and wellbeing, and to care about the number of beings affected, without accepting utilitarianism.
Is EA inherently capitalist?
No. Which economic ideas are true is part of the debate about what it means to do good, and EA does not demand that you accept specific economic axioms. There are socialist EAs, and the political makeup of EA is pretty mixed but leans left. I have never met an EA who believes that private charity should completely take the place of basic government services.
That being said, I’m a capitalist and think that all attempts to “socialize the means of production” have been horrifically bad for workers, the global poor, and the environment. I support strong welfare states and redistribution of wealth, but I also support free markets. I don’t think my politics reinforce my support for EA or vice versa. If I were a communist there would be certain parts of EA that I wouldn’t take seriously, like earning to give, but I would still take animal welfare and global catastrophic risk seriously.
Do I identify as a rationalist?
No. I agree with some rationalist ideas like Bayesianism and recognize rationalist contributions to EA, but I find a lot of aspects of the rationalist community either alienating or just straight-up bad, and I don’t agree with many of the ideas associated with rationalism that don’t overlap with EA.
Where do I donate?
I donate 10% of my income. Half goes to the GiveWell Top Charities Fund and half goes to EA animal welfare charities.
Where do I disagree with EA the most?
I probably put a lower probability on AI being an existential risk than a lot of EAs do, and a much higher probability on nuclear war being a major existential risk in the 21st century.
I’m not actually a consequentialist and probably identify more as some form of Kantian contractualist, but I think consequentialism can still yield a lot of useful counter-intuitive results that can guide our actions.
I’m wary of trying to make EA a lifestyle or core guiding philosophy rather than a pile of individually interesting problems and questions. Most EAs I know relate to it the same way, but some try to use rationalism to reorient their whole lives and ways of speaking, and I worry this can become very insular. Those people are a small minority of the full EA community.
How hierarchical is EA? Does EA have leaders?
EA community building is supported by a few very large funders and a bunch of much smaller ones, so the funding situation in community building is at least moderately hierarchical: EA groups are funded by the Centre for Effective Altruism, though it does not dictate many specific decisions to organizers. That being said, I and most city organizers I know feel very empowered to make specific decisions for our groups and about how to communicate EA ideas. City groups are very different from each other, and much of the culture of EA depends on the specific decisions of the specific people in each hub, so in that way it’s very non-hierarchical and shaped more by local communities. EA does not have specific leaders who make decisions for the movement as a whole, but it does have disproportionately influential people who set the tone for the movement.