Hi folks, this is an overflow newsletter. I started with an article on Effective Altruism, ran into another one, so I went and read the other piece I’d filed on the same subject and present them all here. With luck you’ll still get a full newsletter on Saturday. (Then again it might be Sunday). Should you have any complaints, the Governor General has sworn in Scott Morrison as our complaints officer, though he has been quite busy of late (both Scott and the Governor General) so it may take a while to get through. Scott apologises in advance for any offence caused.
Freddie deBoer on Effective Altruism
I’ve found it difficult to express my own misgivings about EA, since I very much agree with the first point below. But this is one of the best explanations of what I don’t like about it. It’s a bit like so many business people who, having made their pile, take to ‘social impact’ and tell the world that they plan to bring their ‘business experience’ to do-gooding, as if it’s some revelation that organisations doing good should manage themselves effectively. EA, as I’ve experienced it, is a bunch of graduates — very often of philosophy, and very often of a particular kind of philosophy — who are trying to cash it out as usefulness. All very good in a way of course, but in my (humble) opinion they’re altogether too impatient with that process of cashing out, altogether too ready to lecture and too unprepared to listen to others, including those they have much to learn from.
Anyway, I’ll leave the rest to Freddie:
How do we not just help people but help them most efficiently and effectively? I have two visceral responses to this effort.
1. This is a good project and worth doing.
2. It’s an utterly absurd way to define your purpose.
It’s a good project because, you know, doing good is important and we should want to do good better rather than worse. It’s utterly absurd because everyone who has ever wanted to do good has wanted to do good well, and acting as though you and your friends alone are the first to hit upon the idea of trying to do it is the kind of galactic hubris that only subcultures that have metastasized on the internet can really achieve. …
I thought that this (unanswered) tweet response to Matthews laid it out with brutal efficiency:
@dylanmatt I get the appeal, but what I always think about in response to these arguments is that "interesting is not the same as important."

Interesting is not the same as important. So why are effective altruist spaces dominated by people trying to be interesting? Why do so many of its acolytes seem determined to distinguish themselves from each other, rather than to simply contribute a little bit to the overarching project of making the world a tiny bit better, without expectation of notoriety? Matthews himself says, in the above-linked piece,
what’s distinctive about EA is that because its whole purpose is to shine light on important problems and solutions in the world that are being neglected, it’s a very efficient machine for broadening your world. And especially as a journalist, that’s an immensely liberating feeling. The most notable thing about gatherings of EAs is how deeply weird and fascinating they can be, when so much else about this job can be dully predictable.
That’s its whole purpose? That’s strange; none of that is the same thing as doing good well. And in fact I can very easily imagine ways that it’s actively contrary to doing good well.
Eric Hoel on why he’s not an effective altruist
Morality is not a market
I think this is a well-argued and instructive piece, though I must say I tend to think that the more cultish aspects of EA are mostly comic rather than dark.
Despite the seemingly simple definition of just maximizing the amount of good for the most people in the world, the origins of the effective altruist movement in utilitarianism means that as definitions get more specific it becomes clear that within lurks a poison, and the choice of all effective altruists is either to dilute that poison, and therefore dilute their philosophy, or swallow the poison whole. This poison, which originates directly from utilitarianism (which then trickles down to effective altruism), is not a quirk, or a bug, but rather a feature of utilitarian philosophy, and can be found in even the smallest drop. And why I am not an effective altruist is that to deal with it one must dilute or swallow, swallow or dilute, always and forever. …
Who would not soil their clothes, or pay the equivalent of a dry cleaning bill, to save a drowning child? But when taken literally it leads, very quickly, to repugnancy. First, there’s already a lot of charity money flowing, right? The easiest thing to do is redirect it. After all, you can make the same argument in a different form: why give $5 to your local opera when it will go to saving a life in Bengal? In fact, isn’t it a moral crime to give to your local opera house, instead of to saving children? Or whatever, pick your cultural institution. A museum. Even your local homeless shelter. In fact, why waste a single dollar inside the United States when dollars go so much further outside of it? We can view this as a form of utilitarian arbitrage, wherein you are constantly trading around for the highest good to the highest number of people.
But we can see how this arbitrage marches along to the repugnant conclusion—what’s the point of protected land, in this view? Is the joy of rich people hiking really worth the equivalent of all the lives that could be stuffed into that land if it were converted to high-yield automated hydroponic farms and sprawling apartment complexes? What, precisely, is the reason not to arbitrage all the good in the world like this, such that all resources go to saving human life (and making more room for it), rather than anything else?
The end result is like using Aldous Huxley’s Brave New World as a how-to manual rather than a warning. …
I realize this suggestion may sound glib, but I really do think that by continuing down the path of dilution, even by accelerating it, the movement will do a lot of practical good over the next couple decades as it draws in more and more people who find its moral principles easier and easier to swallow. A back of a napkin is all you need, and the utilitarian calculations can be treated as what they are: a fig leaf.
What I’m saying is that, in terms of flavor, a little utilitarianism goes a long ways. And my suggestion is that effective altruists should dilute, dilute, dilute—dilute until everyone everywhere can drink.
Notes on Effective Altruism
Michael Nielsen’s piece is the most thoughtful of them all, and certainly the one that is most glowing about the extraordinary amount of good that EA has done.
Much of the power of EA (and of many ideologies) is to take away much of that choice, saying: no, you have a duty to do the most good you can in the world. Furthermore, EA provides institutions and a community which helps guide how you do that good. It thus provides orientation and meaning and a narrative for why you're doing what you're doing.
I've heard several EAs say they know multiple EAs who get very down or even depressed because they feel they're not having enough impact on the world. As a purely intellectual project it's fascinating to start from a principle like "use reason and evidence to figure out how to do the most good in the world" and try to derive things like "care for children" or "enjoy eating ice cream" or "engage in or support the arts" as special cases of the overarching principle. But while that's intellectually interesting, as a direct guide to living it's a terrible mistake. The reason to care for children (etc) isn't because it helps you do the most good. It's because we're absolutely supposed to care for our children. The reason art and music and ice cream matter isn't because they help you do the most good. It's because we're human beings – not soulless automatons – who respond in ways we don't entirely understand to things whose impact on our selves we do not and cannot fully apprehend. …
Many of the issues are just the standard ones people use to attack moral utilitarianism. Unfortunately, I am far from an expert on these arguments. So I'll just very briefly state my own sense: "good" isn't fungible, and so any quantification is an oversimplification. Indeed, not just an oversimplification: it is sometimes downright wrong and badly misleading. Certainly, such quantification is often a practical convenience when making tradeoffs; it may also be useful for making suggestive (but not dispositive) moral arguments. But it has no fundamental status. As a result, notions like "increasing good" or "most good" are useful conveniences, but it's a bad mistake to treat them as fundamental. Furthermore, the notion of a single "the" good is also suspect. There are many plural goods, which are fundamentally immeasurable and incommensurate and cannot be combined. [Amen to that! Ed (and Scott Morrison).]
I find these attacks compelling. As a practical convenience and as a generative tool, utilitarianism is useful. But I'm not a utilitarian as a fundamental fact about the world. (Tangentially: it is interesting to ponder what truth there is in past UN Secretary-General Dag Hammarskjöld's statement that: "It is more noble to give yourself completely to one individual than to labor diligently for the salvation of the masses." This is, to put it mildly, not an EA point of view. And yet I believe it has some considerable truth to it.)