Newcomb's problem


Newcomb's problem

Post by amplified cactus »

Here's one of my favourite little problems from philosophy...

You are sent into a room containing two boxes. Box A is transparent and contains £1,000. Box B is opaque, and contains either £1,000,000 or nothing - you can't see which. You are given the choice of either taking both boxes, or taking only box B.

Here's the catch. The contents of box B have been set by a Predictor who has the power to predict human behaviour with nearly perfect accuracy. If the Predictor predicted that you would take both boxes, she left box B empty. If she predicted that you would take only box B, she put the £1,000,000 inside it. So: do you take both boxes or only box B?


What I've always enjoyed about this one is that, as Robert Nozick once said, "To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly."

Why one-box?
By stipulation, the Predictor is near-perfect. Almost everybody who takes one box walks away with £1,000,000, and almost everybody who takes both boxes walks away with only £1,000. If you one-box, the Predictor will have predicted this and you will very likely get £1,000,000. If you two-box, she will have predicted this and you will very likely get only £1,000. Basically, one-boxing wins! If two-boxing is the best choice, how come the vast majority of two-boxers end up with less money than the one-boxers?
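To make the one-boxer's arithmetic concrete, here's a minimal sketch in Python. (The 99% accuracy figure is my own assumption for illustration; the problem only stipulates that the Predictor is nearly perfect.)

```python
# Expected value of each choice, treating your choice as evidence
# about what the Predictor foresaw (the one-boxer's reading).
accuracy = 0.99  # assumed for illustration; the problem says "nearly perfect"

# One-box: with probability `accuracy` she predicted it and filled box B.
ev_one_box = accuracy * 1_000_000 + (1 - accuracy) * 0

# Two-box: with probability `accuracy` she predicted it and left box B empty.
ev_two_box = accuracy * 1_000 + (1 - accuracy) * 1_001_000

print(f"EV(one-box): £{ev_one_box:,.0f}")  # £990,000
print(f"EV(two-box): £{ev_two_box:,.0f}")  # £11,000
```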

Why two-box?
Once you enter the room, the contents of box B are fixed. The Predictor makes her prediction and sets the box before you arrive. Your decision can't make any difference to what's in the boxes. If the £1,000,000 is in box B, choosing both boxes won't make it disappear; and if box B is empty, choosing only box B won't make £1,000,000 appear. So two-boxing always nets you more money, no matter what the prediction was, because either way you get an additional £1,000. So in fact, one-boxing doesn't win. If the one-boxers had taken both boxes instead, they would have had £1,001,000 overall.
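And here's the two-boxer's dominance argument in the same style: a sketch that holds the (already fixed) contents of box B constant and compares the two choices row by row.

```python
# Dominance reasoning: whatever box B already holds, two-boxing
# pays exactly £1,000 more, since your choice can't change the row.
for box_b in (1_000_000, 0):
    one_box = box_b
    two_box = box_b + 1_000
    print(f"Box B holds £{box_b:>9,}: one-box £{one_box:>9,}, two-box £{two_box:>9,}")
```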

(For what it's worth, my impression is that professional philosophers lean slightly towards two-boxing. Personally, I think they are just being silly.)

Re: Newcomb's problem

Post by h_k_s »

amplified cactus wrote: December 30th, 2019, 4:36 am Here's one of my favourite little problems from philosophy...
This does not sound like a philosophy experiment. A philosophy experiment would not be so obsessed with greed.

It sounds more like a psychology experiment.

So then, from psychology, it would seem that your best bet is to hedge and take both boxes, and then be happy with the 1,000 quid, since by taking them both you are assured some success with the "bird in the hand" strategy.

Re: Newcomb's problem

Post by h_k_s »

From Googling this topic, it would appear that I have chosen the "dominance principle" -- where the bird in the hand is worth two in the bush.

https://en.wikipedia.org/wiki/Newcomb%27s_paradox

Re: Newcomb's problem

Post by amplified cactus »

h_k_s wrote: December 30th, 2019, 8:20 am This does not sound like a philosophy experiment. A philosophy experiment would not be so obsessed with greed.
It's a problem that gets a lot of discussion in decision theory.

I'm not really sure why you object to the way the problem is framed. But if it's greed that bothers you, just imagine that instead the money will be sent to your favourite charity. So box A contains £1,000 for the charity, and box B contains either £1,000,000 for the charity or nothing.

It's true that taking both boxes guarantees that you will get at least some money, whereas if you take only box B, you might end up with nothing. But that's extremely unlikely to happen, because by stipulation the Predictor is nearly perfect. You can be almost sure that if you take just box B, you'll walk away with £1,000,000. Why would you want to hedge your bets when the odds are so strongly in your favour?
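In fact, the Predictor barely needs to be better than chance for one-boxing to come out ahead in expectation. A quick back-of-the-envelope sketch (my own calculation, not part of the original problem):

```python
# Find the accuracy p at which the two choices break even:
#   p * 1_000_000 == p * 1_000 + (1 - p) * 1_001_000
# Rearranging: 2_000_000 * p == 1_001_000
p_break_even = 1_001_000 / 2_000_000
print(p_break_even)  # 0.5005 -- barely better than a coin flip
```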

Re: Newcomb's problem

Post by Terrapin Station »

Does the predictor predict "human behavior in general" or particular human behavior? In other words, does it predict what I'm personally going to do? Or is it supposed to be some average behavior?

If the latter, the fact that, according to Nozick, the decision here is almost evenly divided would completely undermine the notion that the predictor is almost always right. You can't be almost always right in predicting that almost all humans will choose one thing over another if the actual split is 50/50.
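To put a number on that: if she issued one blanket prediction for everybody, her hit rate could never exceed the size of the majority. A trivial sketch of the point:

```python
# With a 50/50 split, a single one-size-fits-all prediction is
# right exactly half the time, whichever way she bets.
one_box_share = 0.5
blanket_accuracy = max(one_box_share, 1 - one_box_share)
print(blanket_accuracy)  # 0.5 -- nowhere near "almost always right"
```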

Re: Newcomb's problem

Post by Terrapin Station »

If the predictor is instead predicting individual behavior, particular individuals' choices, then your decision obviously does make a difference. Your individual decision is what the predictor predicted, after all, and it's almost always right by stipulation.

Re: Newcomb's problem

Post by Sculptor1 »

amplified cactus wrote: December 30th, 2019, 4:36 am Here's one of my favourite little problems from philosophy...
The only question is whether or not you think there could be such a thing as a person who can predict human behaviour.
It is highly unlikely that the details of the test are true. She can't possibly always be right, since the fact is that some people would take B whilst others would take both. She is likely to be wrong sometimes; she might be wrong about you and place the million in box B.
Since you can see the £1,000, you might as well take both boxes.

Re: Newcomb's problem

Post by amplified cactus »

Terrapin Station wrote: December 30th, 2019, 9:21 am Does the predictor predict "human behavior in general" or particular human behavior? In other words, does it predict what I'm personally going to do? Or is it supposed to be some average behavior?
It's nearly perfect at predicting the behaviour of particular humans. It generates a list of predictions like:

Frank will one-box
Vincent will one-box
Bob will two-box
etc

and almost all of these predictions are correct.
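If it helps, here's a toy simulation of that setup in Python (the 99% figure is again just an assumption standing in for "nearly perfect"):

```python
import random

random.seed(0)
ACCURACY = 0.99  # assumed stand-in for "nearly perfect"

def play(one_boxer: bool) -> int:
    """Winnings for a single visit to the room."""
    correct = random.random() < ACCURACY
    predicted_one_box = one_boxer if correct else not one_boxer
    box_b = 1_000_000 if predicted_one_box else 0
    return box_b if one_boxer else box_b + 1_000

n = 100_000
print(f"average one-boxer: £{sum(play(True) for _ in range(n)) / n:,.0f}")   # ~£990,000
print(f"average two-boxer: £{sum(play(False) for _ in range(n)) / n:,.0f}")  # ~£11,000
```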
Terrapin Station wrote: December 30th, 2019, 9:25 am If the predictor is rather predicting individual behavior, particular individual's choices, then your decision does make a difference, obviously. Your individual decision is what the predictor predicted after all, and it's almost always right by stipulation.
While I'm inclined to agree with this way of putting it, the two-boxer will protest that once you are actually in the room and have to make your decision, the boxes have already been set. At that point, your decision can't change what's in the boxes, because you can't causally influence the past. So by one-boxing, you're just leaving £1,000 on the table.

Re: Newcomb's problem

Post by amplified cactus »

Sculptor1 wrote: December 30th, 2019, 9:57 am The only question is whether or not you think there could be such a thing as a person who can predict human behaviour.
It's a hypothetical scenario. You can play along with it or not. But by stipulation, the Predictor is almost perfect. It doesn't matter whether it would be possible for such a Predictor to exist in reality.

Re: Newcomb's problem

Post by h_k_s »

amplified cactus wrote: December 30th, 2019, 8:42 am …Why would you want to hedge your bets when the odds are so strongly in your favour?
A fundamental principle of dealing with risk is always to minimize all risk of any kind. This is a personal philosophy dealing with self-preservation, however.

Therefore, if taking both boxes minimizes the risk that you will go away empty-handed, that is clearly the best approach, philosophically speaking.

The only truly philosophical principle involved is risk avoidance.

Game theory on the other hand teaches you how to play games, which involves taking risks.

But philosophically, you should never take any unnecessary risk, and you should always eliminate all risks that can be eliminated whenever practical to do so.

There is a big difference between game theory and philosophy.

Philosophy focuses on what is best.

Game theory focuses on winning at the risk of losing. Most people first learn about game theory in high school math class.

Philosophy is normally not taught until college or else in an expensive prep school. In Catholic high schools you are taught about Augustine and Aquinas and the other Romantic philosophers and their philosophies. These all deal with God and God-ness.

Re: Newcomb's problem

Post by h_k_s »

Terrapin Station wrote: December 30th, 2019, 9:21 am Does the predictor predict "human behavior in general" or particular human behavior? In other words, does it predict what I'm personally going to do? Or is it supposed to be some average behavior?
It's just a game. It is not a realistic situation. It presents a challenge in game theory only. It's not really philosophy, my friend.

Re: Newcomb's problem

Post by Sculptor1 »

amplified cactus wrote: December 30th, 2019, 10:13 am
It's a hypothetical scenario. You can play along with it or not. But by stipulation, the Predictor is almost perfect. It doesn't matter whether it would be possible for such a Predictor to exist in reality.
It does matter, since knowing how she works would inform whether or not I thought she would predict one way or the other.
And since she simply could not know how I would choose, because she cannot know ME, her capacity to act in this respect is absurd.
Inevitably my decision has to be based on my knowledge of her method, and by extension on her assumptions about how "humans", whatever the f*ck that is supposed to be, would choose.

Re: Newcomb's problem

Post by amplified cactus »

h_k_s wrote: December 30th, 2019, 4:41 pm A fundamental principle of dealing with risk is always to minimize all risk of any kind. This is a personal philosophy dealing with self-preservation, however.
"Always minimize all risk of any kind" seems ridiculous to me, and I don't see how anybody could actually live in accordance with such a principle. Do you avoid getting in cars due to the risk that you might crash? Do you avoid using stairs due to the risk that you might trip and fall? This sounds pathological.

In any case, assuming your goal is to gain wealth, two-boxing does involve a risk: the risk that you will lose out on the £1,000,000. Minimizing the risk of going away empty-handed significantly increases the risk of going away without the £1,000,000.

I have no idea what you're talking about with comments like "The only truly philosophical principle involved is risk avoidance" and "philosophically, you should never take any unnecessary risk". You seem to be using the term "philosophy" in a very unusual way.

Re: Newcomb's problem

Post by amplified cactus »

Sculptor1 wrote: December 30th, 2019, 6:08 pm
It does matter, since knowing how she works would inform whether or not I thought she would predict one way or the other.
The only information you have when you're faced with the decision is that the Predictor is almost perfect. You know that billions of other people have been in the room before you, and that almost all of the one-boxers got the £1,000,000, while almost all the two-boxers did not. Nobody knows exactly how the Predictor works.

Yes, it's absurd. Presumably it could never happen in reality. Again, it's a hypothetical. If you find it uninteresting or silly or whatever, you don't have to answer.

Re: Newcomb's problem

Post by Steve3007 »

I'm answering this before reading anything other than the OP.
amplified cactus wrote: …Here's the catch. The contents of box B have been set by a Predictor who has the power to predict human behaviour with nearly perfect accuracy. If the Predictor predicted that you would take both boxes, she left box B empty. If she predicted that you would take only box B, she put the £1,000,000 inside it. So: do you take both boxes or only box B?
If my aim is to gain as much money as possible, I take only box B. The predictor knows I will do this so she puts £1,000,000 in box B.
Why two-box?
Once you enter the room, the contents of box B are fixed. The Predictor makes her prediction and sets the box before you arrive. Your decision can't make any difference to what's in the boxes.
This is a new condition that wasn't stated in the original wording of the problem. With this new condition the predicting power of the predictor is irrelevant to the problem, as far as I can see, because the only relevant action that she could take, based on her predictive knowledge, is now unavailable to her.