It’s impossible not to love a book that starts like this:
We are particular people. I have my life to live, you have yours. What do these facts involve? What makes me the same person throughout my life, and a different person from you? And what is the importance of these facts? What is the importance of the unity of each life, and of the distinction between different lives, and different persons?
That’s Derek Parfit’s Reasons and Persons. This book is interesting, but difficult. That’s partially because the ideas are challenging, and partially because the book is written the way academic philosophers write books.
As an experiment, I’ve tried to extract some of the insights, and cast them in a more accessible way. This might be a failed experiment! I probably misunderstand some things, and trying to simplify the ideas might render them meaningless. You can judge this for yourself. Anyway, here’s my attempt at a summary of Chapter 1.
- Kate the Writer
- The Desert Hitchhiker
- Schelling’s Answer To Armed Robbery
- The Firefighting Pact
- The Transparent People
- Too Many Do-Gooders
- Clare’s Child
- The Obscene Film
- Esoteric Theories
- Murder and Accidental Death
Self-interest theory (S) says that it is rational for a person to try to make their own life go “as well” as possible.
Consequentialist theory (C) says that each person should try to make “the outcome” in the world as good as possible.
There are different versions of these, depending on what “go well” and “the outcome” mean. S needn’t tell you to be selfish in the usual sense; a good life might include the welfare of loved ones. Similarly, the “outcome” in C can vary. Utilitarianism tries to maximize happiness minus misery. Other versions of C might consider how equally these are divided among people, or include principles like honesty.
This chapter takes these theories for a test drive. It considers some unusual situations, and asks what these theories say to do in them.
Kate the Writer
Kate is a writer, passionate about her books. She does not believe S. She works very hard, which she believes is a sacrifice of her own happiness. She writes so hard and so long she eventually collapses into exhaustion and depression.
Suppose we change Kate to believe S. Since she thought she was hurting herself by working so hard, she chooses not to do that anymore. She doesn’t find her work and life as meaningful as she used to.
Have we made her better off?
Takeaway: Belief in S may reduce someone’s circle of concern and thereby make them less happy.
The Desert Hitchhiker
Sam believes in S, and always does what’s best for him. He is also unable to lie. He is driving through the desert when his car breaks down. Fortunately a stranger stops, and offers him a ride into town for $20. Sam would be thrilled to pay, but has no money in his wallet. Sam tells the stranger, “I don’t have money now, but I will pay you when we get to my house”.
But suppose the stranger gives him the ride. After getting to Sam’s house, there would be no way to make Sam pay. At that point, Sam would do what’s best for him and refuse to pay. Since Sam is unable to lie, the stranger realizes this and leaves Sam out in the desert.
Would Sam be better off if he had not believed S?
Takeaway: There are situations where the goals of S are best achieved by making yourself behave in ways that S says are wrong.
Another Takeaway: I’ve talked to many people about this problem. Most respond that it’s trivial: the right answer is to be “overall” rational on a long time horizon. Perhaps. But if Sam got the ride, upon arrival at his house it would be irrational for Sam to pay. So if you want to use this defense, you need some definition of “rational” that is not Markovian. Good luck with that.
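The stranger’s reasoning above is just backward induction, and it’s short enough to write out. This sketch is mine, not Parfit’s; the payoff numbers are made up, and only their ordering matters.

```python
# Backward induction for the hitchhiker game (illustrative payoffs).
# Order of play: the stranger offers a ride or not; if Sam rides, Sam pays or not.

# Payoffs as (stranger, sam). Arbitrary numbers; any values with this ordering work.
PAYOFFS = {
    ("no_ride",):       (0, -100),  # Sam is left stranded in the desert
    ("ride", "pay"):    (20, -20),  # ride given, $20 paid as promised
    ("ride", "refuse"): (-5, 0),    # stranger wasted fuel, Sam rides free
}

def sam_choice():
    # A purely forward-looking ("Markovian") Sam: once home, paying only costs him.
    return max(["pay", "refuse"], key=lambda a: PAYOFFS[("ride", a)][1])

def stranger_choice():
    # The stranger predicts Sam's later move and compares his own payoffs.
    predicted = sam_choice()
    return "ride" if PAYOFFS[("ride", predicted)][0] > PAYOFFS[("no_ride",)][0] else "no_ride"

print(sam_choice())       # refuse: paying is not in Sam's interest at that point
print(stranger_choice())  # no_ride: so the stranger drives on
```

The outcome (0, -100) is worse for both players than (20, -20), which is exactly the sense in which S’s goals are best served by binding yourself to act against S.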
Schelling’s Answer To Armed Robbery
A psychopath breaks into your house. The police are 15 minutes away. The man orders you to open your safe full of gold. If you don’t he will kill your children.
If you give him the gold, he may kill everyone so that you can’t tell the police about him. But if you ignore him, he will probably hurt a child to show you he is serious. What should you do?
Fortunately, you are an expert on conflict strategy, and happen to have a special drug on hand, which you take. Suddenly you don’t mind in any way if your children are killed, or if you are tortured. The man threatens to kill your daughter, and you say “whatever.”
The man no longer has any power over you. His threats mean nothing against someone so irrational. In your drugged state, you will probably not remember what he looks like. His best bet is to immediately leave so as to minimize his chance of being caught by the police.
Was it rational for you to make yourself irrational?
Takeaway: If you predictably take the actions in your own self-interest, other agents can exploit this. The defense is to make yourself irrational.
The Firefighting Pact
You are self-interested. It will be very dry and hot tomorrow. Your neighbors are worried about fires and convene a meeting. They propose that everyone swear to help out if anyone’s house catches on fire. It’s annoying to put out a fire, but much more annoying to have your house burn down.
Your neighbors happen to have a highly accurate lie detector. Suppose a neighbor’s house were to catch on fire. At that point, it would be rational for you to flake on the agreement. After putting you into the lie detector, your neighbors realize they would not benefit from allowing you into the pact. They invite you to step outside and lock the door.
According to your own self-interest theory, would it be better for you to transform yourself into an “irrational” trustworthy person?
Takeaway: This is a reasonable explanation for why we evolved to be (somewhat) loyal and trustworthy. You can substitute repeated interactions for the lie detector if you like.
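The self-interested case for becoming genuinely trustworthy comes down to a back-of-the-envelope comparison. These numbers are my own, purely illustrative:

```python
# Illustrative costs for the firefighting pact (all numbers made up).
COST_OF_HELPING = 1           # an afternoon of bucket duty
COST_OF_BURNED_HOUSE = 200    # losing your house
P_MY_FIRE = 0.02              # chance my own house catches fire this season
EXPECTED_CALLS_FOR_HELP = 3   # times I'm expected to show up for neighbors

# Inside the pact: I pay the helping costs, but a fire at my house gets put out.
cost_in_pact = EXPECTED_CALLS_FOR_HELP * COST_OF_HELPING

# Outside the pact: no helping costs, but my fires burn unchecked.
cost_outside = P_MY_FIRE * COST_OF_BURNED_HOUSE

print(cost_in_pact, cost_outside)  # 3 vs 4.0: joining the pact wins
```

So even by your own lights, transforming yourself into someone who reliably shows up is the better deal, provided the neighbors can verify the transformation (by lie detector, or by repeated interactions).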
The Transparent People
You are part of a group of rational, self-interested and transparent people. You live together on an island, eating coconuts.
Tired of working so hard, Alice builds a machine. This machine will make her irrational in a carefully chosen way: she is rational except when it comes to fulfilling any threats. (Apparently, building such a machine is easier than just gathering the damn coconuts.)
After running the machine, Alice announces to the group “I will not be gathering any more coconuts. Either you gather coconuts for me, or I’ll burn down all the coconut trees.” It is now rational for all the other people to do Alice’s work for her.
Were Alice’s actions rational?
Takeaway: Suppose that people refused to give Alice coconuts. At that point, it would not be rational for Alice to burn down the trees. (She would starve!) So Alice’s gambit stems from the ability to do irrational things.
Another takeaway: To avoid Alice’s tyranny, you should have foreseen this situation. You should have constructed a machine that made everyone threat-ignorers — rational except that you ignore threats even when it is against your interest to do so.
Too Many Do-Gooders
Currently, people derive great happiness and meaning from certain “selfish” desires, like loving their kids or enjoying dinner. These desires are not agent-neutral, and so not compatible with C. One day, aliens drop a virus onto the planet that transforms everyone into pure “do-gooders” focused only on the average good in the world. To achieve this, the aliens needed to weaken most of these desires.
Would those aliens have done the best thing for the planet?
There is a tradeoff: all the happiness people get from those desires would be lost, but a world of pure do-gooders could still be better than the current world, on net. However, total happiness might be higher if many people were left with some of these desires, so they were pretty good but not pure do-gooders.
Takeaway: There are situations where the aims of C are best achieved if most people believe something else instead of C.
Clare’s Child
Clare loves her child. She has the chance to spend $20 buying her kid a wonderful dinner, or that same $20 buying a stranger a cure for a horrible disease. She buys her kid dinner.
You point out to Clare that this was wrong: the benefit to the stranger is much greater. Clare agrees this was wrong but says “Given how much I love my kid, it was impossible not to do that. It would be wrong to make myself love my kid less. So while it was wrong, I can’t be blamed for doing it”.
Do you blame Clare?
Takeaway: This assumes it’s impossible for people to act contrary to their strongest desires. If that’s true, then “blameless wrong-doing” exists. If it’s just really hard to act against your strongest desires, then “low-blame wrong-doing” still exists.
The Obscene Film
One day, a man breaks into your house. He says “You must allow me to film you performing an obscene act, or I will kill your children. I will later use that film to blackmail you into committing minor crimes.” If you make the film, your kids are safe forever. You know that, while you could reject the blackmail, given your real personality, you probably wouldn’t.
Is it right for you to allow him to make the film?
Takeaway: ??? Part of the question is whether (1) it’s wrong to allow yourself to be induced to do wrong, or (2) the wrong only occurs when you actually go ahead and do it. Still, I don’t think I fully understand what Parfit is getting at here.
Esoteric Theories
Suppose we believe in C, but we live in a situation where C’s goals are best achieved by us convincing ourselves to instead believe in some other moral theory D. We probably need to forget that we used to believe C. (Otherwise how could we accept the new theory?)
Years pass, and circumstances on earth change due to technology. Now C says that it would be best to adopt some other moral theory E. Because we forgot about C, we can’t do this.
On the other hand, suppose that a small number of people secretly kept the flame of belief in C alive. When circumstances change, they reveal the supreme truth of C. After everyone converts back to C, they realize that they should now convince themselves to believe in E. A small number are nominated to remember C, and to reveal it only when circumstances change again.
Takeaway: This is a defense of C. Even if C says most of us should believe something else, there are good reasons that some people should remember C.
Murder and Accidental Death
Ben is about to die. Right before he dies, he plans to murder Cathy. Meanwhile a forest fire is bearing down on Deb, who will die unless she is rescued. The lives of Cathy and Deb are equally valuable.
You have the power to do one of two things:
- Convince Ben not to murder Cathy.
- Rescue Deb.
You have a 50% chance of convincing Ben not to do the murder, and a 50.01% chance of rescuing Deb. What should you do?
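If only the effects matter, the choice reduces to a one-line expected-value comparison, using the probabilities from the text (the value of a life is a stand-in number; the two lives are stipulated to be equal):

```python
# Expected lives saved by each option, using the probabilities in the text.
VALUE_OF_A_LIFE = 1.0  # Cathy's and Deb's lives are equally valuable

ev_convince_ben = 0.50 * VALUE_OF_A_LIFE    # Cathy saved with probability 50%
ev_rescue_deb = 0.5001 * VALUE_OF_A_LIFE    # Deb saved with probability 50.01%

best = max([("convince Ben", ev_convince_ben), ("rescue Deb", ev_rescue_deb)],
           key=lambda option: option[1])
print(best[0])  # rescue Deb
```

A pure consequentialist rescues Deb. If murder is wrong in itself, beyond its effects, then preventing the murder might win despite the slightly lower probability.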
Takeaway: The question here is whether murder is itself wrong, or whether it is just the effects of murder that are bad.
If you like S or C, these arguments are a bit disturbing. Yet, Parfit’s point does not seem to be that S and C are incorrect. To the contrary, he brings up these issues only to say that they aren’t that bad.
The argument is that a theory being true is different from a theory being practical. S and C never claimed that believing in them would be helpful for achieving their goals. Yes, it’s strange, but it doesn’t mean they aren’t true.
Say that it’s true that S tells you to make yourself believe something other than S. That will then cause you to take actions that are wrong, according to S. Parfit says “OK then, go ahead and change yourself.” S never promised not to tell you to believe something else.
The same story is true for C. Oddly, Parfit gives few arguments that C can tell you to believe in something other than C. Instead he mostly assumes it’s true. Anyway, he again says “Yes. If you believe C, and that’s what C says, then do it.” He further argues (as in Esoteric Theories above) that C probably won’t tell everyone not to believe C.
The main purpose of this chapter is actually to defend S and C. Their difficulties are brought up to show that they aren’t fundamental difficulties, but rather interesting curiosities.
- It can be rational to make yourself become irrational. In fact, you almost have to do this to prevent other people from taking advantage of you. Thus, rationality might tell you to change yourself so that you do things that are wrong, according to rationality.
- If you believe in Consequentialism, this might mean you should try to change yourself to believe in something else instead. That, in turn, will lead you to do things that are wrong, according to Consequentialism.
- THE THEORIES ARE EATING THEMSELVES OMG PARADOX.
- Just because a theory tells you to believe something else, doesn’t mean it is wrong. Something being “true” is just different from something being “practical”.
- There’s a perfectly good answer to the “paradox” that a theory might tell you to believe in something else instead: go ahead and do it. The theories never said that you shouldn’t. It’s OK.
PS: I hope to write summaries like this for later chapters. If that’s relevant, you can subscribe by RSS or email below.