Newcomb's problem

Steve3007
Posts: 10339
Joined: June 15th, 2011, 5:53 pm

Re: Newcomb's problem

Post by Steve3007 »

If we leave out that new condition (the one that renders the predicting power of the predictor irrelevant) then the single most important word in the original wording of the problem is "nearly". This is not a quantitative word. It is a qualitative word. But if "nearly perfect accuracy" meant "correct significantly more than 999 times out of 1000" then I would one-box. If "nearly perfect accuracy" meant "correct most of the time but significantly less than 999 times out of 1000" then I would two-box.
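For what it's worth, the arithmetic behind that threshold can be made concrete. Below is a minimal expected-value sketch, assuming the standard payoffs (£1,000 always in box A; £1,000,000 in box B if and only if the Predictor forecast one-boxing) and a single accuracy figure p for both kinds of player. On a pure expected-value reading the two strategies actually cross at p = 0.5005, far below 999 out of 1000, so a threshold that high reflects an attitude to risk rather than expected value alone.

```python
# Expected value of each strategy as a function of Predictor accuracy p.
# Standard payoffs: box A always holds £1,000; box B holds £1,000,000
# if and only if the Predictor forecast one-boxing.

A, B = 1_000, 1_000_000

def ev_one_box(p: float) -> float:
    # With probability p she correctly foresaw one-boxing and filled box B.
    return p * B

def ev_two_box(p: float) -> float:
    # With probability 1 - p she wrongly foresaw one-boxing, so box B is
    # full anyway; box A is collected either way.
    return A + (1 - p) * B

# Break-even: p * B = A + (1 - p) * B, i.e. p = (A + B) / (2 * B) = 0.5005.
for p in (0.5, 0.5005, 0.9, 0.999):
    print(f"p = {p}: one-box £{ev_one_box(p):,.0f}, two-box £{ev_two_box(p):,.0f}")
```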
amplified cactus
Posts: 26
Joined: December 29th, 2019, 6:00 pm

Re: Newcomb's problem

Post by amplified cactus »

Steve3007 wrote: December 30th, 2019, 7:16 pm
amplified cactus wrote: Why two-box?
Once you enter the room, the content of B is fixed. The Predictor makes her prediction and sets the box before you arrive. Your decision can't make any difference to what's in the boxes.
This is a new condition that wasn't stated in the original wording of the problem. With this new condition the predicting power of the predictor is irrelevant to the problem, as far as I can see, because the only relevant action that she could take, based on her predictive knowledge, is now unavailable to her.
It's not a new condition. I was simply presenting how a two-boxer would probably describe the situation. Once you're in the room, the boxes are set. That's the sense in which your decision can't make any difference. Once you're asked to make a decision, the money is either there or it isn't; at that point, you have no causal influence over the contents of box B.
Felix
Posts: 3117
Joined: February 9th, 2009, 5:45 am

Re: Newcomb's problem

Post by Felix »

Steve3007: This is a new condition that wasn't stated in the original wording of the problem. With this new condition the predicting power of the predictor is irrelevant to the problem, as far as I can see, because the only relevant action that she could take, based on her predictive knowledge, is now unavailable to her.
Yes, if you know that the Predictor's prediction of your decision is based on her psychological/behavioral analysis of you, and she cannot change her action (depositing the money) after you enter the room, then doing the opposite of what you are inclined to do would practically guarantee you'd win the game, e.g., if you would be inclined to pick both boxes, then pick box B instead.
amplified cactus: It's not a new condition. I was simply presenting how a two-boxer would probably describe the situation. Once you're in the room, the boxes are set. That's the sense in which your decision can't make any difference.
You've contradicted yourself. You said before that if the Predictor believes I will choose both boxes, she won't put £1,000,000 in Box B.
"We do not see things as they are; we see things as we are." - Anaïs Nin
amplified cactus
Posts: 26
Joined: December 29th, 2019, 6:00 pm

Re: Newcomb's problem

Post by amplified cactus »

Felix wrote: December 30th, 2019, 9:15 pm
amplified cactus: It's not a new condition. I was simply presenting how a two-boxer would probably describe the situation. Once you're in the room, the boxes are set. That's the sense in which your decision can't make any difference.
You've contradicted yourself. You said before that if the Predictor believes I will choose both boxes, she won't put £1,000,000 in Box B.
Yes, if Predictor predicts that you pick both, then she will leave box B empty. Where's the contradiction? The point is that once you're actually in the room, the prediction has already been made. Nothing you do at that point will causally influence what's in the boxes.
h_k_s
Posts: 1243
Joined: November 25th, 2018, 12:09 pm
Favorite Philosopher: Aristotle
Location: Rocky Mountains

Re: Newcomb's problem

Post by h_k_s »

amplified cactus wrote: December 30th, 2019, 6:21 pm
h_k_s wrote: December 30th, 2019, 4:41 pm
A fundamental principle of dealing with risk is always to minimize all risk of any kind. This is a personal philosophy dealing with self-preservation, however.

Therefore if taking both boxes minimizes the risk that you will go away empty handed, that is clearly the best approach philosophically speaking.

The only truly philosophical principle involved is risk avoidance.

Game theory on the other hand teaches you how to play games, which involves taking risks.

But philosophically, you should never take any unnecessary risk, and you should always eliminate all risks that can be eliminated whenever practical to do so.

There is a big difference between game theory and philosophy.

Philosophy focuses on what is best.

Game theory focuses on winning at the risk of losing. Most people first learn about game theory in high school math class.

Philosophy is normally not taught until college or else in an expensive prep school. In Catholic high schools you are taught about Augustine and Aquinas and the other Romantic philosophers and their philosophies. These all deal with God and God-ness.
"Always minimize all risk of any kind" seems ridiculous to me, and I don't see how anybody could actually live in accordance with such a principle. Do you avoid getting in cars due to the risk that you might crash? Do you avoid using stairs due to the risk that you might trip and fall? This sounds pathological.

In any case, assuming your goal is to gain wealth, two-boxing does involve a risk: the risk that you will lose out on getting the £1,000,000. Minimizing the risk of going away empty handed will significantly increase the risk of going away without the £1,000,000.

I have no idea what you're talking about with comments like "The only truly philosophical principle involved is risk avoidance" and "philosophically, you should never take any unnecessary risk". You seem to be using the term "philosophy" in a very unusual way.
To avoid unnecessary risks, you must avoid getting into cars when you do not absolutely need to.

Cars kill more people than anything else on Earth except mosquitoes.

Once you fully understand this, then you are getting smarter, philosophically speaking.
Steve3007
Posts: 10339
Joined: June 15th, 2011, 5:53 pm

Re: Newcomb's problem

Post by Steve3007 »

amplified cactus wrote: "Always minimize all risk of any kind" seems ridiculous to me, and I don't see how anybody could actually live in accordance with such a principle.
Yes, it is ridiculous. It is the "I will never get out of bed" principle. Nobody could or does live by it. If there were any one fundamental principle with regard to risk, it would be to minimize the risk/reward ratio. But, of course, there isn't. Different people have different attitudes to risk. For the purposes of this puzzle I'm assuming that my motivation is to maximize the probability of gaining the maximum amount of money.
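That last motivation is easy to put numbers on. Here is a small simulation sketch; the 99% accuracy figure is purely illustrative, since the problem only says "nearly perfect":

```python
import random

# Simulated outcome distributions for one-boxers vs two-boxers, with an
# illustrative (not canonical) Predictor accuracy of 99%.
A, B, ACCURACY, TRIALS = 1_000, 1_000_000, 0.99, 100_000

def play(one_box: bool) -> int:
    correct = random.random() < ACCURACY            # did she get this player right?
    predicted_one_box = one_box if correct else not one_box
    box_b = B if predicted_one_box else 0           # B is filled only on a one-box forecast
    return box_b if one_box else A + box_b

for label, one_box in (("one-box", True), ("two-box", False)):
    results = [play(one_box) for _ in range(TRIALS)]
    print(f"{label}: mean £{sum(results) / TRIALS:,.0f}, "
          f"P(£1,000,000) = {sum(r >= B for r in results) / TRIALS:.2%}")
```

On those assumptions the one-boxer walks away with the million about 99% of the time and the two-boxer about 1% of the time, which is one way of cashing out "maximize the probability of gaining the maximum amount of money".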
Steve3007
Posts: 10339
Joined: June 15th, 2011, 5:53 pm

Re: Newcomb's problem

Post by Steve3007 »

amplified cactus wrote: It's not a new condition. I was simply presenting how a two-boxer would probably describe the situation.
Ok. This part of the wording of the OP: "Your decision can't make any difference to what's in the boxes" seemed to be saying that the one relevant action the predictor can take, as a result of her knowledge about me, was forbidden to her. I presume what you meant to say was that my decision can and does make a difference to what's in the boxes (as stated in the original wording of the problem), but the predictor knows, before I arrive, what that decision will be.
I was simply presenting how a two-boxer would probably describe the situation. Once you're in the room, the boxes are set. That's the sense in which your decision can't make any difference. Once you're asked to make a decision, the money is either there or it isn't; at that point, you have no causal influence over the contents of box B.
If, as stated in the original wording of the problem, the predictor "has the power to predict human behaviour with nearly perfect accuracy" then it doesn't matter if she has already put/not put the £1,000,000 in box B before I arrive. She could have either done or not done that before I was born. She could have done it 1,000,000 years ago. It would make no difference. She knows, even before I am born, that I will take box B. Therefore she puts £1,000,000 in box B. Therefore I take only box B.

I presume the intention of the puzzle is to create what appears to be an absurd loop of causality once we propose the existence of an omniscient being who (we propose) can predict everything that any person is going to do at any point in the future, but we also keep the notion that our decisions are manufactured at the point when we decide them. It's similar to the puzzles that arise in considering travel backwards in time. And obviously it's similar to some puzzles which arise when proposing the existence of an omniscient God.
Steve3007
Posts: 10339
Joined: June 15th, 2011, 5:53 pm

Re: Newcomb's problem

Post by Steve3007 »

amplified cactus wrote: The point is that once you're actually in the room, the prediction has already been made. Nothing you do at that point will causally influence what's in the boxes.
If we're talking in terms of causality and we don't believe that effects can precede causes then we can put it like this: The predictor knows all causal chains in the Universe. She therefore knows all the causal chains that lead me to make the decision that I make, whatever that is. In that sense, she's a bit like the birdlike entity in the last book of the Hitchhiker's Guide to the Galaxy. Therefore my decision, when I'm in the room, does not causally affect what she does, but that decision is itself caused by various things that were happening in the past when she made her decision as to whether to deposit the million. So the act of her depositing the million and my decision have a common cause. In that sense, they are causally linked.
Terrapin Station
Posts: 6227
Joined: August 23rd, 2016, 3:00 pm
Favorite Philosopher: Bertrand Russell and WVO Quine
Location: NYC Man

Re: Newcomb's problem

Post by Terrapin Station »

If it's predicting me individually, then I'm getting £1,000,000. Everyone else can worry about themselves. ;-)
Sculptor1
Posts: 7091
Joined: May 16th, 2019, 5:35 am

Re: Newcomb's problem

Post by Sculptor1 »

amplified cactus wrote: December 30th, 2019, 6:27 pm
Sculptor1 wrote: December 30th, 2019, 6:08 pm
It does matter, since knowing how she works would inform whether or not I thought she would predict one way or the other.
And since she simply could not know how I would choose, since she cannot know ME, then her capacity to act in this respect is absurd.
Inevitably my decision has to be based on my knowledge of her method, and by extension her assumptions about how "humans", whatever the F*ck that is supposed to be, decide.
The only information you have when you're faced with the decision is that the Predictor is almost perfect. You know that billions of other people have been in the room before you, and that almost all of the one-boxers got the £1,000,000, while almost all the two-boxers did not. Nobody knows exactly how the Predictor works.

Yes, it's absurd. Presumably it could never happen in reality. Again, it's a hypothetical. If you find it uninteresting or silly or whatever, you don't have to answer.
The hypothetical cannot work unless she chooses the same way each time.
Since she has no knowledge of who is going to make the next choice, her decision cannot change: she has no grounds for making a different choice.

Now you tell us that "that almost all of the one-boxers got the £1,000,000, while almost all the two-boxers did not." - which makes a complete mockery of the entire scenario.

If you want to offer a hypothetical then please offer one that makes sense. This one is nonsense.
amplified cactus
Posts: 26
Joined: December 29th, 2019, 6:00 pm

Re: Newcomb's problem

Post by amplified cactus »

Steve3007 wrote: December 31st, 2019, 4:48 am
amplified cactus wrote: It's not a new condition. I was simply presenting how a two-boxer would probably describe the situation.
Ok. This part of the wording of the OP: "Your decision can't make any difference to what's in the boxes" seemed to be saying that the one relevant action the predictor can take, as a result of her knowledge about me, was forbidden to her. I presume what you meant to say was that my decision can and does make a difference to what's in the boxes (as stated in the original wording of the problem), but the predictor knows, before I arrive, what that decision will be.
I meant to say exactly what I said. Again, I was simply presenting how most two-boxers think of the situation. Two-boxers tend to say that once you are in the room, nothing you do can make any difference to what's in the boxes. Why? Because, they would say, in order to make a difference to X, you need to be able to causally influence X. Once you are in the room, you have no causal influence over the contents of box B. Think of it this way: if you take the money from box A, what would happen to box B? Nothing, obviously. The money is either in there or it isn't. Nothing you do can change that now. So why not take the extra £1,000? That's how the two-boxer thinks about this problem.
Steve3007 wrote: December 31st, 2019, 4:48 am
I presume the intention of the puzzle is to create what appears to be an absurd loop of causality once we propose the existence of an omniscient being who (we propose) can predict everything that any person is going to do at any point in the future, but we also keep the notion that our decisions are manufactured at the point when we decide them. It's similar to the puzzles that arise in considering travel backwards in time. And obviously it's similar to some puzzles which arise when proposing the existence of an omniscient God.
The puzzle is intended simply as a thought experiment to probe the underlying principles of (rational) decision making. In particular, it gets a lot of discussion in the debate between evidential decision theory and causal decision theory.
amplified cactus
Posts: 26
Joined: December 29th, 2019, 6:00 pm

Re: Newcomb's problem

Post by amplified cactus »

Sculptor1 wrote: December 31st, 2019, 12:34 pm
amplified cactus wrote: December 30th, 2019, 6:27 pm
The only information you have when you're faced with the decision is that the Predictor is almost perfect. You know that billions of other people have been in the room before you, and that almost all of the one-boxers got the £1,000,000, while almost all the two-boxers did not. Nobody knows exactly how the Predictor works.

Yes, it's absurd. Presumably it could never happen in reality. Again, it's a hypothetical. If you find it uninteresting or silly or whatever, you don't have to answer.
...
Since she has no knowledge of who is going to make the next choice, her decision cannot change: she has no grounds for making a different choice.
...
Where did you get that from? She does know who is going to make the next choice. She makes predictions for each particular individual who enters the room. So she has a list of predictions such as: "Frank will one-box", "Vincent will one-box", "Bob will two-box", etc., and she sets the boxes accordingly. Presumably, she is near-omniscient and uses her information about each person's psychological traits and decision-making processes to make her predictions about what their decisions will be. But this really isn't important. All that matters is that she has made a prediction about what you will choose, and she is near-perfect at predicting people's behaviour. It doesn't matter how her predictions are made.
Felix
Posts: 3117
Joined: February 9th, 2009, 5:45 am

Re: Newcomb's problem

Post by Felix »

amplified cactus: Yes, if Predictor predicts that you pick both, then she will leave box B empty. Where's the contradiction? The point is that once you're actually in the room, the prediction has already been made. Nothing you do at that point will causally influence what's in the boxes.
As I said, after entering the room, so as to fool the Predictor, I reversed my decision. At the last second, instead of choosing both boxes as I had intended, I chose box B. But the Predictor didn't believe I would do this and so left box B empty. Will she cheat me out of my £1,000,000?
"We do not see things as they are; we see things as we are." - Anaïs Nin
amplified cactus
Posts: 26
Joined: December 29th, 2019, 6:00 pm

Re: Newcomb's problem

Post by amplified cactus »

Felix wrote: January 1st, 2020, 7:54 pm
amplified cactus: Yes, if Predictor predicts that you pick both, then she will leave box B empty. Where's the contradiction? The point is that once you're actually in the room, the prediction has already been made. Nothing you do at that point will causally influence what's in the boxes.
As I said, after entering the room, so as to fool the Predictor, I reversed my decision. At the last second, instead of choosing both boxes as I had intended, I chose box B. But the Predictor didn't believe I would do this and so left box B empty. Will she cheat me out of my £1,000,000?
The Predictor is very likely to have predicted that you would reverse your decision. Whatever process you use to arrive at the decision to pick one box, the Predictor is very likely to have predicted it, and will therefore have put £1,000,000 in the box.

By stipulation, the Predictor is nearly perfect. You can't "cheat" her by changing your mind about things, because she will almost always correctly predict those changes of mind.
Steve3007
Posts: 10339
Joined: June 15th, 2011, 5:53 pm

Re: Newcomb's problem

Post by Steve3007 »

Felix wrote: Yes, if you know that the Predictor's prediction of your decision is based on her psychological/behavioral analysis of you, and she cannot change her action (depositing the money) after you enter the room, then doing the opposite of what you are inclined to do would practically guarantee you'd win the game, e.g., if you would be inclined to pick both boxes, then pick box B instead.
As I understand it, if I do the opposite of what I was inclined to do, the Predictor already knew (with near certainty) that I was going to do the opposite of what I was inclined to do. Whatever I decide to do, for whatever reason, no matter how spontaneous I think I'm being, the Predictor knew before I entered the room what I was going to do. Even if I toss a coin and base my decision on that, the Predictor knew beforehand that I was going to do that and knew whether it would be heads or tails. Even if I base my decision on a truly random quantum event, She knows. I presume the Predictor is the kind of lady who knows whether a cat is alive or dead before the box is opened. What a Gal!
amplified cactus wrote: I meant to say exactly what I said. Again, I was simply presenting how most two-boxers think of the situation.
Ok.
Two-boxers tend to say that once you are in the room, nothing you do can make any difference to what's in the boxes. Why? Because, they would say, in order to make a difference to X, you need to be able to causally influence X. Once you are in the room, you have no causal influence over the contents of box B.
Ok. What I would say to those two-boxers is what I said earlier. Even if the Predictor made the decision as to whether to put £1,000,000 into box B before they were born, she did so based on near-certain knowledge of what they would do once they'd been born, grown up and gone into that room. The decision taken in the room and the decision taken earlier by the Predictor have a common cause. That is the sense in which they are causally linked. The scenario presented in the OP dictates that the Predictor knows what I'm going to do before I decide to do it and can make decisions based on that knowledge. That may or may not be a realistic scenario (depending on whether one believes that there could be such a creature as an omniscient being capable of making decisions that affect the material world) but that is the scenario we're presented with and which we are asked to consider.
Think of it this way: if you take the money from box A, what would happen to box B? Nothing, obviously. The money is either in there or it isn't. Nothing you do can change that now. So why not take the extra £1,000?
Because if I do that, the Predictor already knew I was going to do it, so she refrained (an arbitrarily long time before I entered the room) from putting £1,000,000 in box B. So I'll be left with only £1,000. Not even enough for a new car! Obviously this has odd consequences for my view of my own free will. It makes me think that my decision, even though I may believe myself to have manufactured it from no previous cause there and then, in that room, was, in some sense, not really mine. It was the pre-determined end-effect of a causal chain. It might make me apt to believe in notions like fate. But that's simply a consequence of contemplating the curious idea of the existence of omniscient beings who can affect the material world.
That's how the two-boxer thinks about this problem.
If the two-boxer fully appreciates (and believes) what has been said about the Predictor then that seems to me a curious attitude to take. Obviously, though, I would understand it if the two-boxer simply doesn't believe what he/she has been told about the Predictor. If this were the real world, rather than a thought experiment, I wouldn't either! Obviously we suspend our disbelief (as we do when engrossed in any good work of fiction) for the sake of analysing the problem as stated.
The puzzle is intended simply as a thought experiment to probe the underlying principles of (rational) decision making. In particular, it gets a lot of discussion in the debate between evidential decision theory and causal decision theory.
Fair enough. I enjoyed the puzzle. But I think the god-like predictive powers of the Predictor more obviously address issues to do with omniscient gods and determinism. And, as I said, it also reminds me of the standard paradoxes which arise if we consider travel backwards in time to be possible. I'm reminded particularly of the film "Bill and Ted's Bogus Journey". There is a sense in which the backward time traveller, having already seen the future, has this Predictor-esque omniscience. A backward time traveller could simply observe what I do then go back in time, taking that knowledge with them, and decide whether to put the money in the box.

As I understand it, the difference between evidential and causal decision theory is essentially the difference between correlation and cause. Evidential decision theory states that we simply look at which decisions have tended in the past to be correlated with which outcomes and base our decisions on that evidence. We only consider those correlations, and not whether they indicate a causal connection. In causal decision theory we look for actual causal connections.

I'm not sure if the puzzle sheds any light on this because there is a causal connection there even if I change my mind while in the room. My decision, taken in that room, is causally connected to the question of whether there is already £1,000,000 in box B. It has a common cause with the decision as to whether to place that money in box B some time previously.
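To make that contrast concrete, here is a rough sketch of how the two theories score the same pair of options, again with the standard payoffs and an illustrative 99% accuracy (neither figure is fixed by the original problem statement). The evidential theorist conditions the contents of box B on the act, treating the act as evidence of the prediction; the causal theorist holds the probability that box B is full fixed, whatever it is.

```python
# Illustrative payoffs and accuracy; neither figure is fixed by the problem.
A, B, p = 1_000, 1_000_000, 0.99

# Evidential decision theory: condition the state of box B on the act,
# since the act is evidence about what the Predictor foresaw.
edt_one_box = p * B                  # P(B is full | one-box) = p
edt_two_box = A + (1 - p) * B        # P(B is full | two-box) = 1 - p

# Causal decision theory: the act cannot causally influence box B, so
# hold P(B is full) fixed at some prior q, whatever that prior is.
def cdt(q: float) -> tuple[float, float]:
    return q * B, A + q * B          # (one-box EV, two-box EV)

print(f"EDT: one-box £{edt_one_box:,.0f} vs two-box £{edt_two_box:,.0f}")
for q in (0.0, 0.5, 1.0):
    one, two = cdt(q)
    print(f"CDT (q = {q}): one-box £{one:,.0f} vs two-box £{two:,.0f}")
# Under CDT two-boxing beats one-boxing by exactly £1,000 for every q
# (dominance); under EDT one-boxing wins whenever p > 0.5005.
```

The £1,000 gap on every CDT line is the two-boxer's dominance argument in numerical form, while the EDT line reproduces the one-boxer's appeal to the observed correlation between choices and outcomes.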