Newcomb's problem
- amplified cactus
- Posts: 26
- Joined: December 29th, 2019, 6:00 pm
Newcomb's problem
Here's one of my favourite little problems from philosophy...
You are sent into a room containing two boxes. Box A is transparent and contains £1,000. Box B is opaque, and contains either £1,000,000 or nothing - you can't see which. You are given the choice of either taking both boxes, or taking only box B.
Here's the catch. The contents of box B have been set by a Predictor who has the power to predict human behaviour with nearly perfect accuracy. If the Predictor predicted that you would take both boxes, she left box B empty. If she predicted that you would take only box B, she put the £1,000,000 inside it. So: do you take both boxes or only box B?
What I've always enjoyed about this one is that, as Robert Nozick once said, "To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly."
Why one-box?
By stipulation, the Predictor is near-perfect. Almost everybody who takes one box walks away with £1,000,000, and almost everybody who takes both boxes walks away with only £1,000. If you one-box, the Predictor will have predicted this and you will very likely get £1,000,000. If you two-box, she will have predicted this and you will very likely get only £1,000. Basically, one-boxing wins! If two-boxing is the best choice, how come the vast majority of two-boxers end up with less money than the one-boxers?
Why two-box?
Once you enter the room, the contents of box B are fixed. The Predictor makes her prediction and sets the box before you arrive. Your decision can't make any difference to what's in the boxes. If the £1,000,000 is in box B, choosing both boxes won't make it disappear; and if box B is empty, choosing only box B won't make £1,000,000 appear. So two-boxing always nets you more money, no matter what the prediction was, because either way you get an additional £1,000. So in fact, one-boxing doesn't win. If the one-boxers had taken both instead, they would have had £1,001,000 overall.
(For what it's worth, my impression is that professional philosophers lean slightly towards two-boxing. Personally, I think they are just being silly.)
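If it helps to see where the two arguments come apart, here's a rough calculation in Python (a sketch only - the 99% accuracy figure is my own assumption, since the problem just says "nearly perfect"). Expected value favours one-boxing; the dominance argument compares outcomes with the contents of box B already fixed:

# Rough illustration only: the 0.99 accuracy is an assumed figure, not part of the problem.
accuracy = 0.99          # probability the Predictor's prediction matches your actual choice
small, big = 1_000, 1_000_000

# Expected value of each choice, treating the prediction as (almost always) matching you.
ev_one_box = accuracy * big + (1 - accuracy) * 0                 # usually predicted right -> 990,000
ev_two_box = accuracy * small + (1 - accuracy) * (big + small)   # usually predicted right -> 11,000
print(f"EV(one-box) = £{ev_one_box:,.0f}")
print(f"EV(two-box) = £{ev_two_box:,.0f}")

# The dominance argument instead fixes the contents of box B first:
for contents in (0, big):
    print(f"If box B holds £{contents:,}: one-box -> £{contents:,}, two-box -> £{contents + small:,}")

On those assumed numbers, two-boxing gains an extra £1,000 in each fixed case, yet one-boxing has the far higher expected payoff - which is exactly the tension.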
- h_k_s
- Posts: 1243
- Joined: November 25th, 2018, 12:09 pm
- Favorite Philosopher: Aristotle
- Location: Rocky Mountains
Re: Newcomb's problem
amplified cactus wrote: ↑December 30th, 2019, 4:36 am Here's one of my favourite little problems from philosophy...
This does not sound like a philosophy experiment. A philosophy experiment would not be so obsessed with greed.
It sounds more like a psychology experiment.
So then, from psychology, it would seem that your best bet is to hedge your bets and take both boxes, and then be happy with the 1,000 quid, since by taking them both you are assured some success with the "bird in the hand" strategy.
- h_k_s
- Posts: 1243
- Joined: November 25th, 2018, 12:09 pm
- Favorite Philosopher: Aristotle
- Location: Rocky Mountains
Re: Newcomb's problem
https://en.wikipedia.org/wiki/Newcomb%27s_paradox
- amplified cactus
- Posts: 26
- Joined: December 29th, 2019, 6:00 pm
Re: Newcomb's problem
h_k_s wrote: ↑December 30th, 2019, 8:20 am
This does not sound like a philosophy experiment. A philosophy experiment would not be so obsessed with greed. It sounds more like a psychology experiment. So then, from psychology, it would seem that your best bet is to hedge your bets and take both boxes, and then be happy with the 1,000 quid, since by taking them both you are assured some success with the "bird in the hand" strategy.
It's a problem that gets a lot of discussion in decision theory.
I'm not really sure why you object to the way the problem is framed. But if it's greed that bothers you, just imagine that instead the money will be sent to your favourite charity. So box A contains £1,000 for the charity, and box B contains either £1,000,000 for the charity or nothing.
It's true that taking both boxes guarantees that you will get at least some money, whereas if you take only box B, you might end up with nothing. But that's extremely unlikely to happen, because by stipulation the Predictor is nearly perfect. You can be almost sure that if you take just box B, you'll walk away with £1,000,000. Why would you want to hedge your bets when the odds are so strongly in your favour?
- Terrapin Station
- Posts: 6227
- Joined: August 23rd, 2016, 3:00 pm
- Favorite Philosopher: Bertrand Russell and WVO Quine
- Location: NYC Man
Re: Newcomb's problem
Does the predictor predict "human behavior in general" or particular human behavior? In other words, does it predict what I'm personally going to do? Or is it supposed to be some average behavior?
If the latter, the fact that according to Nozick the decision here is almost evenly divided would completely undermine the notion that the predictor is almost always right. You can't almost always be right (that almost all humans would choose one thing over another) if the outcome is actually 50/50.
- Terrapin Station
- Posts: 6227
- Joined: August 23rd, 2016, 3:00 pm
- Favorite Philosopher: Bertrand Russell and WVO Quine
- Location: NYC Man
Re: Newcomb's problem
If the predictor is rather predicting individual behavior, particular individual's choices, then your decision does make a difference, obviously. Your individual decision is what the predictor predicted after all, and it's almost always right by stipulation.
- Sculptor1
- Posts: 7091
- Joined: May 16th, 2019, 5:35 am
Re: Newcomb's problem
amplified cactus wrote: ↑December 30th, 2019, 4:36 am Here's one of my favourite little problems from philosophy...
The only question is whether or not you think there could be such a thing as a person who can predict human behaviour.
So it is highly unlikely that the details of the test are true. She can't possibly always be right, since the fact is that some people would take B whilst others would take both. She is liable to be wrong sometimes, so she might be wrong and place the million in box B anyway.
Since you can see the £1,000 you might as well take both boxes.
- amplified cactus
- Posts: 26
- Joined: December 29th, 2019, 6:00 pm
Re: Newcomb's problem
Terrapin Station wrote: ↑December 30th, 2019, 9:21 am
Does the predictor predict "human behavior in general" or particular human behavior? In other words, does it predict what I'm personally going to do? Or is it supposed to be some average behavior? If the latter, the fact that according to Nozick the decision here is almost evenly divided would completely undermine the notion that the predictor is almost always right. You can't almost always be right (that almost all humans would choose one thing over another) if the outcome is actually 50/50.
It's nearly perfect at predicting the behaviour of particular humans. It generates a list of predictions like:
Frank will one-box
Vincent will one-box
Bob will two-box
etc
and almost all of these predictions are correct.
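To make that concrete, here's a quick simulation (just a sketch; the 50/50 split and the 99% per-person accuracy are assumed numbers, not part of the problem): even if the population divides evenly, the Predictor can still be right about almost every individual, because her accuracy is about matching each person's own choice, not about guessing a single majority answer.

import random

# Toy simulation: the figures below are assumptions for illustration only.
random.seed(0)
N = 100_000                     # people who have faced the boxes
per_person_accuracy = 0.99      # chance the Predictor reads any given individual correctly

correct = 0
for _ in range(N):
    choice = random.choice(["one-box", "two-box"])           # population splits roughly 50/50
    if random.random() < per_person_accuracy:
        prediction = choice                                   # she reads this person correctly
    else:
        prediction = "two-box" if choice == "one-box" else "one-box"
    correct += (prediction == choice)

print(f"Predictor accuracy across {N:,} people: {correct / N:.1%}")   # ~99%, despite the even split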
Terrapin Station wrote: ↑December 30th, 2019, 9:25 am
If the predictor is rather predicting individual behavior, particular individual's choices, then your decision does make a difference, obviously. Your individual decision is what the predictor predicted after all, and it's almost always right by stipulation.
While I'm inclined to agree with this way of putting it, the two-boxer will protest that once you are actually in the room and have to make your decision, the boxes have already been set. At that point, your decision can't change what's in the boxes, because you can't causally influence the past. So by one-boxing, you're just leaving £1,000 on the table.
- amplified cactus
- Posts: 26
- Joined: December 29th, 2019, 6:00 pm
Re: Newcomb's problem
Sculptor1 wrote: ↑December 30th, 2019, 9:57 am
The only question is whether or not you think there could be such a thing as a person who can predict human behaviour.
It's a hypothetical scenario. You can play along with it or not. But by stipulation, the Predictor is almost perfect. It doesn't matter whether it would be possible for such a Predictor to exist in reality.
- h_k_s
- Posts: 1243
- Joined: November 25th, 2018, 12:09 pm
- Favorite Philosopher: Aristotle
- Location: Rocky Mountains
Re: Newcomb's problem
amplified cactus wrote: ↑December 30th, 2019, 8:42 am
It's a problem that gets a lot of discussion in decision theory. I'm not really sure why you object to the way the problem is framed. But if it's greed that bothers you, just imagine that instead the money will be sent to your favourite charity. So box A contains £1,000 for the charity, and box B contains either £1,000,000 for the charity or nothing. It's true that taking both boxes guarantees that you will get at least some money, whereas if you take only box B, you might end up with nothing. But that's extremely unlikely to happen, because by stipulation the Predictor is nearly perfect. You can be almost sure that if you take just box B, you'll walk away with £1,000,000. Why would you want to hedge your bets when the odds are so strongly in your favour?
A fundamental principle of dealing with risk is always to minimize all risk of any kind. This is a personal philosophy dealing with self-preservation, however.
Therefore, if taking both boxes minimizes the risk that you will go away empty-handed, that is clearly the best approach, philosophically speaking.
The only truly philosophical principle involved is risk avoidance.
Game theory on the other hand teaches you how to play games, which involves taking risks.
But philosophically, you should never take any unnecessary risk, and you should always eliminate all risks that can be eliminated whenever practical to do so.
There is a big difference between game theory and philosophy.
Philosophy focuses on what is best.
Game theory focuses on winning at the risk of losing. Most people first learn about game theory in high school math class.
Philosophy is normally not taught until college or else in an expensive prep school. In Catholic high schools you are taught about Augustine and Aquinas and the other Romantic philosophers and their philosophies. These all deal with God and God-ness.
- h_k_s
- Posts: 1243
- Joined: November 25th, 2018, 12:09 pm
- Favorite Philosopher: Aristotle
- Location: Rocky Mountains
Re: Newcomb's problem
Terrapin Station wrote: ↑December 30th, 2019, 9:21 am
Does the predictor predict "human behavior in general" or particular human behavior? In other words, does it predict what I'm personally going to do? Or is it supposed to be some average behavior? If the latter, the fact that according to Nozick the decision here is almost evenly divided would completely undermine the notion that the predictor is almost always right. You can't almost always be right (that almost all humans would choose one thing over another) if the outcome is actually 50/50.
It's just a game. It is not a realistic situation. It presents a challenge in game theory only. It's not really philosophy, my friend.
- Sculptor1
- Posts: 7091
- Joined: May 16th, 2019, 5:35 am
Re: Newcomb's problem
amplified cactus wrote: ↑December 30th, 2019, 10:13 am
It's a hypothetical scenario. You can play along with it or not. But by stipulation, the Predictor is almost perfect. It doesn't matter whether it would be possible for such a Predictor to exist in reality.
It does matter, since knowing how she works would inform whether or not I thought she would predict one way or the other.
And since she simply could not know how I would choose, since she cannot know ME, then her capacity to act in this respect is absurd.
Inevitably my decision has to be based on my knowledge of her method, and by extension her assumptions about how "humans", whatever the f*ck that is supposed to be, would decide.
- amplified cactus
- Posts: 26
- Joined: December 29th, 2019, 6:00 pm
Re: Newcomb's problem
"Always minimize all risk of any kind" seems ridiculous to me, and I don't see how anybody could actually live in accordance with such a principle. Do you avoid getting in cars due to the risk that you might crash? Do you avoid using stairs due to the risk that you might trip and fall? This sounds pathological.h_k_s wrote: ↑December 30th, 2019, 4:41 pmA fundamental principle of dealing with risk is always to minimize all risk of any kind. This is a personal philosophy dealing with self preservation however.
In any case, assuming your goal is to gain wealth, two-boxing does involve a risk: the risk that you will lose out on getting the £1,000,000. Minimizing the risk of going away empty-handed significantly increases the risk of going away without the £1,000,000.
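To put rough numbers on that trade-off (again assuming, purely for illustration, a 99% accurate Predictor):

# Assumed 99% accuracy, as above - not a figure given in the problem itself.
accuracy = 0.99
p_nothing_if_one_box = 1 - accuracy     # box B empty despite your one-boxing: ~1%
p_no_million_if_two_box = accuracy      # box B empty because she saw the two-boxing coming: ~99%
print(f"P(walk away with nothing | one-box) = {p_nothing_if_one_box:.0%}")
print(f"P(miss the £1,000,000 | two-box)    = {p_no_million_if_two_box:.0%}")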
I have no idea what you're talking about with comments like "The only truly philosophical principle involved is risk avoidance" and "philosophically, you should never take any unnecessary risk". You seem to be using the term "philosophy" in a very unusual way.
- amplified cactus
- Posts: 26
- Joined: December 29th, 2019, 6:00 pm
Re: Newcomb's problem
Sculptor1 wrote: ↑December 30th, 2019, 6:08 pm
It does matter, since knowing how she works would inform whether or not I thought she would predict one way or the other. And since she simply could not know how I would choose, since she cannot know ME, then her capacity to act in this respect is absurd.
The only information you have when you're faced with the decision is that the Predictor is almost perfect. You know that billions of other people have been in the room before you, and that almost all of the one-boxers got the £1,000,000, while almost all the two-boxers did not. Nobody knows exactly how the Predictor works.
Yes, it's absurd. Presumably it could never happen in reality. Again, it's a hypothetical. If you find it uninteresting or silly or whatever, you don't have to answer.
-
- Posts: 10339
- Joined: June 15th, 2011, 5:53 pm
Re: Newcomb's problem
amplified cactus wrote: You are sent into a room containing two boxes. Box A is transparent and contains £1,000. Box B is opaque, and contains either £1,000,000 or nothing - you can't see which. You are given the choice of either taking both boxes, or taking only box B.
If my aim is to gain as much money as possible, I take only box B. The Predictor knows I will do this, so she puts £1,000,000 in box B.
amplified cactus wrote: Why two-box? Once you enter the room, the contents of box B are fixed. The Predictor makes her prediction and sets the box before you arrive. Your decision can't make any difference to what's in the boxes.
This is a new condition that wasn't stated in the original wording of the problem. With this new condition, the predicting power of the Predictor is irrelevant to the problem, as far as I can see, because the only relevant action that she could take, based on her predictive knowledge, is now unavailable to her.