# Newcomb's problem

Moderator

Once the supercomputer has made the prediction there is of course no reason not to open both boxes, but trying to cheat the system is a really bad idea: you stand to lose a million dollars just to gain a thousand dollars.

In theory the ideal general strategy is to open both boxes 49% of the time and only the one box (with the million dollars) 51% of the time, but that would only gain you 0.049% over always going for the million-dollar box.
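The 0.049% figure can be sanity-checked with a short calculation, assuming the standard payouts (1000000\$ hidden box, 1000\$ visible box) and that the predictor fills the hidden box because one-boxing is your majority behaviour:

```python
# Expected value of the 51/49 mixed strategy, assuming the predictor
# fills the hidden box since one-boxing is the majority behaviour.
# With probability 51/100 you take only the hidden box (1000000$),
# with probability 49/100 you take both boxes (1000000$ + 1000$).
e_mixed = (51 * 1_000_000 + 49 * (1_000_000 + 1_000)) / 100
e_pure = 1_000_000  # always one-box: hidden box is always filled

gain = (e_mixed - e_pure) / e_pure
print(e_mixed)        # 1000490.0
print(f"{gain:.3%}")  # 0.049%
```

So the mixed strategy gains an expected 490\$, i.e. 0.049% of the pure one-boxing payout.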

Moderator
The degree to which the computer can predict which box you are going to take depends on the degree to which humans have free will. The Newcomb scenario might not even be possible in theory, since it would require a prediction accuracy of over 99%: your choice (one vs both boxes) would have to be already determined.

Moderator
Probabilistic Newcomb switch
The probability exploit (where you select both boxes 49% of the time) can be eliminated via a probabilistic Newcomb switch: the probability of the hidden box containing a million dollars is set equal to the probability of you selecting only one box. In that case you want to open just one box 100% of the time, since the prediction by the Newcomb switch is determined by the true probability of you actually opening only the one box. In the real world, however, you cannot find out the real probability, so your goal should instead be to make the Newcomb switch put 1 million dollars in the hidden box; that may differ from the probability of you actually abstaining from taking the 1000\$.
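A minimal sketch of why full one-boxing beats any mixed strategy here, assuming the switch fills the hidden box with probability exactly equal to your true one-boxing probability p, independently of the choice you then actually make:

```python
# E(p): expected payout when the hidden box contains 1000000$ with
# probability p (the switch's prediction) and you one-box with probability p.
def expected_value(p: float) -> float:
    hidden = p * 1_000_000            # expected content of the hidden box
    return hidden + (1 - p) * 1_000   # the 1000$ box is only taken when two-boxing

# E is strictly increasing in p, so p = 1 (always one-box) is optimal.
best = max(range(101), key=lambda k: expected_value(k / 100)) / 100
print(best, expected_value(best))  # 1.0 1000000.0
```

Every point of two-boxing probability trades roughly 10000\$ of expected hidden-box content for 10\$ of visible-box money, so the exploit disappears.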

It is worth noting that this isn't just theoretical: there are many cases where people make character judgements, and you benefit from being trustworthy in the long term even if backstabbing gives a short-term benefit.

Let's say you managed to beat the Newcomb switch and won (risking 1000000\$ to gain 1000\$): what do you think would happen the next time you face the same scenario?

Moderator
Free will & the Newcomb switch
Of course the Newcomb switch cannot predict correctly all the time if you have free will (whether or not that free will is conscious).

Even in the case of a deterministic universe, prediction might still not be possible due to computational irreducibility: while you couldn't have done otherwise, your decision still could not actually be predicted with 100% accuracy.

Moderator
Newcomb's problem except you see the contents of both boxes
Then we can calculate the expected value via the following formula:

E = 1000XB+1000Y(1-B)+1000000B

B = probability of the second box containing a million \$
X = probability of opening the 1000\$ box when there is 1000000\$ in the second box
Y = probability of opening the 1000\$ box when the second box is empty.
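As a quick sanity check of the formula (a sketch, not part of the derivation; the symbols are used exactly as in the formula above):

```python
# Expected payout per the formula above:
# E = 1000*X*B + 1000*Y*(1 - B) + 1000000*B
def expected_payout(X: float, Y: float, B: float) -> float:
    return 1_000 * X * B + 1_000 * Y * (1 - B) + 1_000_000 * B

# Never opening the 1000$ box leaves only the hidden box's expected content,
print(expected_payout(0, 0, 0.8))  # 800000.0
# while always opening it adds exactly 1000$ on top, regardless of B.
print(expected_payout(1, 1, 0.8))  # 801000.0
```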

Now let B depend partly on X and Y:

B = 0.8 - Y/10 - X/10

E = 1000X(0.8 - Y/10 - X/10) + 1000Y(0.2+X/10+Y/10) + 1000000(0.8 - Y/10 - X/10)

E = 800000 + 800X - 100X² + 200Y + 100Y² - 100000X - 100000Y

E = 800000 - 99200X - 99800Y + 100(Y² - X²)

dE/dX = -99200 - 200X < 0

dE/dY = -99800 + 200Y < 0 (since 0 ≤ Y ≤ 1)

The expected value is therefore maximized by never opening the 1000\$ box (X = Y = 0), giving E = 800000\$.
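The conclusion can also be checked by brute force (a sketch using the formula and B = 0.8 - Y/10 - X/10 from above, searched on a 0.01 grid):

```python
# Brute-force check: maximize E = 1000*X*B + 1000*Y*(1-B) + 1000000*B
# with B = 0.8 - Y/10 - X/10, over X, Y on a 0.01 grid in [0, 1].
def ev(X: float, Y: float) -> float:
    B = 0.8 - Y / 10 - X / 10
    return 1_000 * X * B + 1_000 * Y * (1 - B) + 1_000_000 * B

grid = [(x / 100, y / 100) for x in range(101) for y in range(101)]
best_x, best_y = max(grid, key=lambda p: ev(*p))
print(best_x, best_y, ev(best_x, best_y))  # 0.0 0.0 800000.0
```

The maximum sits at X = Y = 0, matching the sign of both partial derivatives.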