Newcomb's problem

Admin

#1

Once the supercomputer has made its prediction there is of course no reason not to open both boxes, but trying to cheat the system is still a really bad idea, since you stand to lose a million dollars just to gain a thousand.

In theory the ideal general strategy is to open both boxes 49% of the time and only the one box (with the million dollars) 51% of the time, but that only gains you about 0.049% in expectation over just going for the million-dollar box.
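A quick check of where that 0.049% comes from (my own sketch, assuming the predictor simply fills the hidden box whenever one-boxing is your more likely choice):

# Expected value of a mixed strategy, assuming the predictor fills the
# hidden box whenever your two-boxing probability is below 50%.
def expected_value(p_two_box):
    hidden = 1_000_000 if p_two_box < 0.5 else 0
    # with probability p_two_box you also grab the visible $1,000
    return hidden + p_two_box * 1_000

pure = expected_value(0.0)    # always take only the hidden box
mixed = expected_value(0.49)  # take both boxes 49% of the time
print(pure, mixed, (mixed - pure) / pure)  # 1000000 1000490.0 0.00049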
 

Admin

#2
The degree to which the computer can predict which box you are going to take depends on the degree to which humans have free will. The Newcomb scenario might not even be possible in theory, since it would require prediction accuracy of over 99%; your choice (one vs. both boxes) would have to be already determined.

 

Admin

#3
Probabilistic Newcomb switch
The probability exploit (where you select both boxes 49% of the time) can be eliminated with a probabilistic Newcomb switch, where the probability of the hidden box containing a million dollars is set equal to the probability of you selecting only one box. In that case you want to open just one box 100% of the time, since the prediction by the Newcomb switch is determined by the true probability of you actually opening only that box. In the real world, however, you cannot find out that true probability, so your goal should instead mostly be to make sure the Newcomb switch will put a million dollars in the hidden box, which may differ from the actual probability of you abstaining from taking the 1000$.
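A minimal sketch of why one-boxing 100% of the time is optimal against such a switch, assuming the switch fills the hidden box with probability exactly equal to your one-boxing probability q:

# Expected payoff when the hidden box holds $1,000,000 with probability q,
# and q is also your probability of taking only the hidden box.
def expected_value(q):
    one_box = q * (q * 1_000_000)                # you take only the hidden box
    two_box = (1 - q) * (1_000 + q * 1_000_000)  # you take both boxes
    return one_box + two_box                     # simplifies to 1_000_000*q + 1_000*(1 - q)

for q in (0.0, 0.49, 0.51, 1.0):
    print(q, expected_value(q))
# The payoff is strictly increasing in q, so q = 1 (always one-box) is best.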

It is worth noting that this isn't just theoretical: there are a lot of cases where people make character judgements about you, and you benefit from being trustworthy in the long term even if backstabbing can give a short-term benefit.

Let's say you managed to beat the Newcomb switch (risking 1000000$ to gain 1000$). What do you think would happen the next time you have to face the same scenario?
 

Admin

#4
Free will & the Newcomb switch
Of course the Newcomb switch cannot predict correctly all the time if you have free will (it doesn't matter whether or not that free will is conscious).

Even in the case of a deterministic universe, prediction might still not be possible due to computational irreducibility: while you couldn't have done otherwise, it would still not be possible to actually predict your decision with 100% accuracy.
 

Admin

#5
Newcomb's problem except you see the contents of both boxes
Then we can calculate the expected value via the following formula:

E = 1000XB + 1000Y(1-B) + 1000000B

B = probability of the second box containing a million $
X = probability of opening the 1000$ box when there is 1000000$ in the second box
Y = probability of opening the 1000$ box when there is no money in the second box

Now let B depend partly on X and Y:

B = 0.8 - Y/10 - X/10

E = 1000X(0.8 - Y/10 - X/10) + 1000Y(0.2+X/10+Y/10) + 1000000(0.8 - Y/10 - X/10)

E = 800000 - 99200X - 99800Y - 100X² + 100Y²

dE/dX = -99200 - 200X < 0

dE/dY = -99800 + 200Y < 0 (since Y ≤ 1)

The expected value is therefore maximized by never opening the 1000$ box, i.e. X = Y = 0.
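A small numeric check of that conclusion (my own sketch, just evaluating the formula above on a grid of X and Y values):

# Expected value for the "see both boxes" variant, with the predictor's fill
# probability B = 0.8 - Y/10 - X/10 as above.
# X = P(open the 1000$ box | million present), Y = P(open it | million absent).
def expected_value(X, Y):
    B = 0.8 - Y / 10 - X / 10
    return 1000 * X * B + 1000 * Y * (1 - B) + 1_000_000 * B

best = max((expected_value(x / 10, y / 10), x / 10, y / 10)
           for x in range(11) for y in range(11))
print(best)  # the best expected value (about $800,000) is at X = Y = 0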
 

Admin

#6
Diminishing marginal utility of wealth doesn't make it worthwhile to go for the 1000$ box
Even if you are starving and homeless, going for both boxes still isn't worth it. Sure, if you are homeless 1000$ can help, but it's not going to change your life. In order to actually change your life you would need something like 100000$, or even better 1000000$.

If instead the bigger box only contains, for example, 4 times more money, then it's easier to justify opening both boxes, since you actually gain more, relatively speaking, from gaming the system to get for example 1000$ + 4000$.
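A rough illustration of that point, assuming log utility of wealth and 1000$ of existing wealth (both of these are my assumptions, not part of the scenario):

import math

# Utility gain from a prize, assuming log utility and $1,000 starting wealth.
def utility_gain(prize, wealth=1_000):
    return math.log(wealth + prize) - math.log(wealth)

print(utility_gain(1_000))      # ~0.69 (the visible 1000$)
print(utility_gain(1_000_000))  # ~6.91 (the hidden 1000000$)
# Even with strongly diminishing returns the million is worth roughly ten times
# as much utility, so risking it for a near-certain extra 1000$ still doesn't pay.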
 

Creamer

#7
Game theory is about external predictions, yet it does not take into account the actual person.

If, for example, teamwork is the optimal solution, it does not mean the other participants will collaborate with you, even when they know this.

I once organized a move to get a union established in my workplace. It would have increased pay by 70%, decreased work hours, and brought a whole array of benefits.

I had set up a meeting with the representatives. All my coworkers, who had complained non-stop about violations of their basic work rights, needed to do was sacrifice 15 minutes to simply show up.

None did.

When I asked them why, they had excuses like they didn't have time or they had no faith, even though they had said they would show up and that it was a good idea. Then they simply continued to complain daily, despite me telling them they had no right to complain anymore since they had stood me up.

Learned helplessness and SMV do appear to play a role, but they are not mentioned in game theory.
 