Learning about Effective Altruism

FAQs

  • Effective altruism (EA) asks: How can we do the most good using reason and evidence? And what concrete actions can we take? EA is partly a research project, investigating which problems are most pressing and which solutions to them are most effective. EA is also a community of people from many backgrounds and professions who seek to use their resources (time, money, and careers) to do the most good.

  • Many people want to do good in the world, but many attempts to do good are largely ineffective and do not achieve their intended goals. Some ways of doing good are much more effective than others. For example, some charities are 100 times more effective than others, helping 100 times as many people with the same amount of resources. Yet these differences in effectiveness are largely neither known nor considered when making decisions. Returning to the charity example, about $450 billion is donated to charity annually in the US, but most donations do not go to effective charities. By practicing effective altruism, we can achieve far more good with the same amount of resources.

  • Psychological research can inform effective altruist thinking. For example, some decisions could have profound implications for the long-term future, such as whether to detonate a nuclear weapon, create a potentially dangerous new technology, or invest in pandemic preparedness. Understanding the psychology of these decisions is one step toward ensuring they are made more wisely. In addition, a better understanding of human psychology might help determine the best action to take from an effective altruist perspective. For example, what makes voters elect potentially dangerous leaders, and how can we prevent it? Which interventions most cost-effectively improve human psychological wellbeing? Finally, psychology research can help us spread effective altruist principles in broader society, for example by understanding and overcoming the psychological barriers to adopting them.

  • Longtermism is the view that positively influencing the long-term future is a key moral priority of our time. An existential risk is any risk that has the potential to eliminate all of humanity or permanently destroy humanity's potential. Many researchers believe that humanity is in a uniquely dangerous period in its development, characterized by unprecedented destructive capability.

    If humanity does not go extinct, there could be an enormous number of future people. According to EA principles, all people matter, including future people, so it is a key priority to improve their lives and to safeguard humanity against catastrophes that could prevent future people from ever existing. In other words, one of the most effective ways to do good in the world may be to mitigate existential risk.

RESOURCES

EFFECTIVE ALTRUISM

  • Here is a short introduction to EA

  • Ajeya Cotra summarizes effective altruism's key concepts and core beliefs and shares her personal experience

RATIONALITY

  • The Scout Mindset: Why Some People See Things Clearly and Others Don't (2021) by Julia Galef

  • Thinking, Fast and Slow (2011) by Daniel Kahneman

  • Superforecasting (2015) by Philip E. Tetlock and Dan Gardner

  • Clearer Thinking Podcast by Spencer Greenberg

We recommend these introductory resources on EA, rationality, and some EA cause-areas/pressing problems (longtermism, AI safety, animal welfare, and global poverty) to develop an understanding of key EA principles and topics, as well as to generate behavioral science research questions.

PRESSING PROBLEMS & CAUSE-AREAS

Longtermism

  • What We Owe the Future (2022) by Will MacAskill

  • The Precipice (2020) by Toby Ord

  • The "most important century" blog post series argues why the 21st century could be the most important century ever for humanity.  

  • 80,000 Hours Podcast: Carl Shulman on the common-sense case for existential risk work and its practical implications

  • 80,000 Hours Podcast: Hilary Greaves on Pascal’s mugging, strong longtermism, and whether existing can be good for us

  • This video by Kurzgesagt demonstrates why positively influencing the long-term future is a key moral priority of our time

AI safety

  • Superintelligence: Paths, Dangers, Strategies (2014) by Nick Bostrom

  • The Alignment Problem: Machine Learning and Human Values (2020) by Brian Christian

  • Human Compatible: Artificial Intelligence and the Problem of Control (2019) by Stuart J. Russell 

  • 80,000 Hours Podcast: Long-term AI policy strategy research and implementation

Why AI safety? Experts expect that artificial intelligence will eventually surpass human abilities to plan, conduct scientific research, and influence the world. However, AI researchers do not yet know how to align AI with human values. If well aligned, AI could be a tremendous force for good far into the future. If not, it could permanently curtail human progress or even lead to human extinction. Ensuring AI is aligned with human values is thus crucial from a longtermist perspective.

Animal welfare

  • 80,000 Hours Podcast: Bruce Friedrich makes the case that inventing outstanding meat replacements is the most effective way to help animals

  • Animal Charity Evaluators is an organization that conducts research to determine how to achieve the greatest good with efforts to help animals. 

  • The End of Animal Farming: How Scientists, Entrepreneurs, and Activists Are Building an Animal-Free Food System (2018) by Jacy Reese

  • 80,000 Hours Problem Profile: Factory farming 

Global poverty

  • 80,000 Hours Problem Profile: Health in poor countries

  • 80,000 Hours Podcast: Karen Levy on fads and misaligned incentives in global development, and scaling deworming to reach hundreds of millions

  • The Life You Can Save: Acting Now to End World Poverty (2009) by Peter Singer

  • "Famine, Affluence, and Morality" (1971) by Peter Singer, one of the most famous essays on this topic

Interested in studying these topics?