

## Posts

To me these would seem to be naturally allowable assumptions. There has to be an upper bound since the money supply is finite, and the problem only mentions one trial.

They do if you pick a random number between 0 and infinity for the amount of cash in the smaller briefcase. You can simulate that by fixing the amount (since it's arbitrary and random) and randomizing whether the player opens the smaller or larger briefcase first. Somebody in this thread already simulated this over a bunch of trials and found that switching makes about 25% more, as we would expect.

But there's nothing in there stating that we're telling the person the limit. And in fact, as I mentioned, the simulation ALWAYS chooses case 1 and then always switches. It doesn't actually care about the amount in the case I pick. So even if the random amount picked is 250 and it's in case 2, my program opens case 1, sees 500, and still switches, because it knows nothing about the limit/distribution. I'm testing the claim some people are making that it's always better to switch cases.

If it were truly always better to switch cases, then if I always pick case 1 and always switch to case 2 I should come out ahead, regardless of what the range of possible values is.

OK, sure. Then your solution to the problem is the following:

The player should always switch if the amount in the briefcase is less than one-third of the complete money supply. The player should always hold his briefcase if the amount is more than one-third of the complete money supply.

Note that if we allow the assumption of an upper bound on X (the amount held in a briefcase), the proper strategy is now to switch whenever the opened briefcase contains Xmax/2 or less, and not switch whenever the opened briefcase contains more than Xmax/2.
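To make that concrete, here's a minimal sketch (my own illustration, not code from the thread) that pits the threshold strategy against always-switching. It assumes the smaller amount Y is drawn uniformly so that the larger case never exceeds Xmax; the function name and parameters are made up:

```python
import random

def play(trials=200_000, xmax=100.0, strategy="threshold", seed=1):
    """Simulate the two-briefcase game with a known upper bound xmax.

    The smaller amount Y is drawn uniformly from (0, xmax/2], so the
    larger case 2Y never exceeds xmax.  'threshold' switches only when
    the opened case holds xmax/2 or less; 'always' always switches.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        y = rng.uniform(0, xmax / 2)                     # smaller amount
        opened, other = (y, 2 * y) if rng.random() < 0.5 else (2 * y, y)
        if strategy == "always" or (strategy == "threshold" and opened <= xmax / 2):
            opened = other                               # take the other case
        total += opened
    return total / trials
```

With these assumptions the threshold player averages noticeably more than the always-switcher, because amounts above Xmax/2 can only be the larger case and are kept.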

Telling the player the limit is irrelevant - if you set a finite upper bound, the whole thing that makes the problem interesting is shot. With no upper bound, the chance that there's 2X or .5X in the other briefcase is always .5/.5. If you add in an upper bound, in 25% of cases there is a 0 probability that there's 2X in the other briefcase, whether the player knows it or not.

Let me also point out that after a relatively small number of trials the player can form a fairly good estimate of the upper bound, and can adjust his strategy accordingly.

Edit: And I think the idea is that a player only gets one trial. Consider each trial as a different person doing the task. They don't know there's an upper bound, so their "best" strategy according to some here would be to always switch. So 1000 people do this task, always switching cases.

The fact is, the problem without the upper bound is entirely different from the problem with the upper bound.

Yes, but that simulation was flawed in that it picked a number and then flipped a virtual coin to decide if the other case had 0.5X or 2X in it, which isn't how the puzzle works. In the puzzle, the amounts are set before you pick. In the simulations that are being run that show an advantage in switching, the amount isn't determined until after you make your decision. It's a significant difference.

Okay, let's try something different. Smasher already conceded that, assuming you've decided to always switch, it doesn't really matter if you look in the case or not. That knowledge doesn't affect your actions, and there's no possible mechanism by which that knowledge can affect what's actually in the case, so you seeing the amount in the case you chose becomes irrelevant. The relevant sequence of events is this:

- You pick a case

- Someone asks if you'd like to switch

- You decide to switch

The claim is that this strategy yields a clear advantage. Very well. I'll now give a list of scenarios that are exactly equivalent.

- You pick case A. Someone asks you if you'd like to swap. You take case B instead.

- You pick case A. Nobody asks you if you'd like to swap. You take case B instead.

- You reach for case A. Before you take it, you change your mind and take case B instead.

- You deliberate in your head over which case to take. You settle on case A, but change your mind before you actually move, and take case B instead.

- You just decide to take case B from the outset.

If changing cases offers a clear advantage, then it must be true that those are not all equivalent actions, because the final one involves not changing at all. If this is true, explain why.

For extra fun, let's mix and match! For example, you pick case A, then change your mind and pick case B instead. Someone asks you if you'd like to switch, and you decide to go back to case A. If we're pretending that swapping offers a clear advantage, then there's some really bizarre logic going on when you start combining these scenarios.

I make tweet.

Given that (as someone already mentioned) there is a finite supply of money, the problem always has some upper bound, and the problem without an upper bound is an impossible situation that can never actually happen.

Nicely done. I should have cut it off before the 1,4,9 repetition, that gives it away.



Smasher is wrong here. The point of looking in the case is (bear with me here) to discover that there isn't infinity dollars in it. Think of it this way. By looking in the case, you're trying to confirm that X, the value of dollars in the suitcase, is less than or equal to one half of the upper bound of X. If X is less than or equal to one half of the upper bound of X, you should switch. Since the upper bound is infinity, one half the upper bound of X is infinity. But looking is key.

Also, check this out. You cannot represent a random distribution of X with a bounded random distribution of Y, the value of dollars in the smaller briefcase, because it results in a weighted distribution.

Consider the following case:

We randomly choose to assign 1 or 2 as the value for the smaller case. One half of the time, Y = 1, and one half of the time, Y = 2. Therefore, one half of the time 2Y = 2 and one half of the time 2Y = 4. X, though, can be either Y or 2Y. X is therefore 1 for one fourth of the time, 2 for one half of the time, and 4 for one fourth of the time. X's distribution is weighted.
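That enumeration can be checked mechanically. This small sketch (mine, using exact fractions) just walks the four equally likely (Y, case) combinations:

```python
from fractions import Fraction
from collections import defaultdict

# Enumerate the example: Y is 1 or 2 with equal probability, and the
# case you open (X) holds Y or 2Y with equal probability.
dist = defaultdict(Fraction)
for y in (1, 2):
    for x in (y, 2 * y):
        dist[x] += Fraction(1, 4)   # each (Y, case) combination has prob 1/4

# dist -> {1: 1/4, 2: 1/2, 4: 1/4}: the middle value 2 is twice as
# likely as 1 or 4, i.e. X's distribution is weighted.
```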

Now consider the bounded distributions used in previous examples: there are no numbers that are "too small" to be 2Y, as there were in my example, but there are

manyvalues which are "too big" to be Y. In fact, any value over half the upper bound of X cannot be Y and therefore only has half the chance of being X. Any bounded distribution of X based on a bounded distribution of Y is necessarily weighted in this way.The spirit of the problem, I would argue, calls for a random, unweighted distribution on X, which is only possible if X is unbounded.

In that case we can't sensibly talk about the expected value over multiple runs, which is what a lot of this has been.

I believe I'm starting to figure out the disagreement with the envelope problem. From the player's perspective, he essentially is weighing equivalent chances to double or halve his winnings, thus the optimal choice is always to switch. From a cost-benefit analysis (which is perfectly logical), and based solely on the information the player has, it makes sense to do so on the one trial he is given.

However, from the perspective of the person running the trials (and presumably footing the bill), the net cost to him is on average going to be 1.5Y (with the Y & 2Y cases). We assume that every participant is completely logical and looking to get maximum money. For every participant that begins with the small case, their electing to switch will cost you (the "gamemaster") 2Y. For every participant that begins with the large case, their electing to switch will cost you Y. Overall, it comes down to whichever case they began at, which presumably has a 50% likelihood in either direction.

One last study. If you begin each individual trial with the same starting conditions as outlined in the original puzzle wording (that is, that the opened case is worth $X), and you assume that the players always switch, your losses will uniformly be $X/2 or $2X, depending on whether the starting case is high or low.
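A quick sketch of the gamemaster's side (my own illustration; the amounts and function name are assumptions): with cases holding Y and 2Y and every player switching, the average payout still comes to 1.5Y:

```python
import random

def average_payout(trials=100_000, y=100.0, seed=7):
    """Average cost to the gamemaster when every player always switches.

    The cases hold y and 2*y; each player starts on a random case and
    switches, so the payout is 2*y or y with equal probability.
    """
    rng = random.Random(seed)
    paid = 0.0
    for _ in range(trials):
        starts_small = rng.random() < 0.5
        paid += 2 * y if starts_small else y   # switching lands on the other case
    return paid / trials
```

Note the same 1.5Y average falls out if nobody switches, which is the overseer's-eye view of why switching buys the players nothing in aggregate.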

And I think the multiple runs is disingenuous because the only person it actually applies to is the overseer.

Still talking about this, so I'll try to solve the briefcase paradox through induction.

Base case:

- Small suitcase: 1
- Large suitcase: 2
- Loss: 1
- Gain: 1
- Balance: 0

Inductive step:

- Small suitcase: 2
- Large suitcase: 4
- Loss: 2
- Gain: 2
- Balance: 0

- Small suitcase: N-1
- Large suitcase: 2(N-1)
- Loss: N-1
- Gain: N-1
- Balance: 0

for all N < +infinity.

The paradox comes in the way people are calculating their gains. If given a $100 briefcase, that's 100% more than a $50 briefcase but only 50% less than a $200 briefcase.

Remember, percentages can be misleading:

100 + 50 = 100 * 3/2;
150 - 50 = 150 * 2/3;

Gaining 50% and then losing a third of the result puts you right back where you started.

No no no no no no no. You've gone off the rails. I never said there has to be an upper bound on X for this to work - rather, that E[X] is finite. That is a VERY IMPORTANT distinction, which means that at some point the probabilities of higher values of X need to taper off rather quickly.

A noninformative or uniform distribution for reals (or even whole numbers) X > 0 is not possible, because it cannot be a probability distribution: you will have an infinite integral, when you need the integral of the density function across all values to be 1. So it is impossible for all values of X to be equally likely to be chosen to put in the bag. There are places for improper probability density functions in math, but this isn't one of them.
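A tiny numeric check of that point (my own illustration): any candidate constant density accumulates unbounded mass, so it can never normalize to 1.

```python
# A "uniform" density on (0, infinity) would have to be some constant
# c > 0.  But its total probability over (0, N] is c*N, which grows
# without bound as N does, so no choice of c can integrate to 1.
c = 1e-9
masses = [c * n for n in (10**6, 10**9, 10**12)]
# masses keeps growing past 1 -- the density cannot be normalized.
```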

You can't pick a random number between 0 and infinity. Edit: Savant just said so, too, more precisely:

This is one of those abuses of infinity. If there is 0 expected gain for any finite value, then the limit as the upper bound goes to infinity of the expected gain from switching is 0. That's all that can be said. The problem where the briefcase could contain any value whatsoever is unintelligible unless you look at the limit behavior as you increase the upper bound.

Sure you can. The expected value will just be infinity/2, right?


So, I created this Python code to look at this problem from a strictly numbers perspective.

No matter how many times you run it, no matter how large an N, you get about a $125,000 gain, which corresponds to the expected value of switching that I calculated as 1.125.

So the question we return to is how X is selected. It seems to me that the distribution the problem calls for is a uniform distribution of X. But there is no such distribution (or at least, I don't think one is possible). So, perhaps the best answer to the problem is that the situation it calls for is impossible. Even if we don't know how X is selected, we know that higher values are necessarily less likely to be the smaller briefcase and therefore we should weight our choice to switch or not switch accordingly.

Sigh. You guys have got to learn that if you model it wrong, running the program isn't going to make it right. You are picking a value for X and then flipping a coin to decide whether to double or halve your starting value. That is a different problem than this one.
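For reference, here's a sketch of the mis-modelled version being described (my reconstruction, not the code from the thread): the opened amount is fixed first and a coin flip then decides the other case, which is exactly why switching appears to pay.

```python
import random

def flawed_model(trials=200_000, x=100.0, seed=3):
    """The mis-modelled version: fix the opened amount x FIRST, then
    flip a coin to decide whether the other case holds x/2 or 2*x.

    Switching here really does look 25% better, because the coin flip
    happens after your choice instead of the amounts being set first."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        other = 2 * x if rng.random() < 0.5 else x / 2
        total += other                 # the always-switcher takes the other case
    return total / trials              # approaches 1.25 * x
```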

You pick your envelope. You're asked if you want to swap. You swap, because you figure it's to your advantage to always swap.

You're about to open the envelope, but you're stopped again - do you wish to swap one more time? Well, it's definitely in your advantage to always swap, so you swap again. Clearly it cannot be in your advantage to switch both times - that presents a contradiction. It can thus not be in your advantage to switch at all, since swapping in both cases is exactly equivalent.

Wikipedia has an interesting take on the problem. If you interpret the problem in a layman's understanding of what it means to have "any possible amount" in the envelope you pick, then the answer is pretty clearly that it's not in your advantage to switch. The problem is when you translate the problem into strict statistical language. The problem basically breaks, and there's no agreement on the best way to reconcile the various issues - such as, say, picking a number from 1 to infinity.


The selection of X is not given in the problem statement, and cannot be known offhand. If it WAS given, then it would be a different issue. However, look at the page that was linked earlier. It shows that for any probability density h(x) for X, as long as E[X] is finite then E[K-A], the expected value of switching across all values, is zero.

This thing has been solved: if you don't know anything about the distribution of possible money in the bag, then switching has zero expected value. Again, this explains why: http://www.u.arizona.edu/~chalmers/papers/envelope.html.

This code limits the person who sticks to a maximum winnings of N, but limits the person who switches to 2N. Which isn't really representative of the actual problem.

Thanks for linking that, I feel a lot more comfortable now.

Okay. You guys are right. I modelled it incorrectly. This gives an expected value of x.

My bad.
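For comparison, a corrected model along those lines (again a sketch of my own, not the original code) fixes both amounts before the choice, and switching gains nothing:

```python
import random

def correct_model(trials=200_000, y=100.0, switch=True, seed=5):
    """Amounts are fixed BEFORE the choice: the cases hold y and 2*y.

    The player opens a random case and either switches or sticks."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        opened, other = (y, 2 * y) if rng.random() < 0.5 else (2 * y, y)
        total += other if switch else opened
    return total / trials

# Both strategies average 1.5*y: switching has zero expected gain.
```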

Except that, as the author notes, there's not necessarily any reason to rule out infinite expected value. Consider the case in the paper he links at the start involving the envelope containing 2^n dollars, where n is the number of times you flip a coin before it comes up heads. Infinite expected value is a reasonable conclusion in that case, and is arguably so in this case as well. To use your notation, E[X] would be infinite, which would make the expected value of switching non-zero.

The 50/50 assumption holds when the range is infinite and switching works in that case, but as mentioned before it's not possible to create a uniform distribution over an infinite set. It is possible to simulate the effects of an infinite range using a set of finite ranges, where in each range you perform a trial much like kakos' but discard instances where the chosen envelope is too close to the edge of the range ('too close' having the same definition as above). I think that the discarded runs for a given set will be in the valid range for the respective adjacent ranges (and thus any pair of values in the envelopes will be valid in some range), and likewise a valid instance in one range won't be valid in either of the adjacent ones, so that should work. Here's some code that implements one of the ranges as such:
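The code the post refers to isn't reproduced here, so what follows is a guess at what it looked like; in particular the "too close to the edge" rule is my assumption (the opened amount must be interpretable as either the smaller or the larger amount within the range):

```python
import random

def range_trial(trials=200_000, low=100.0, high=800.0, seed=11):
    """One finite range of the scheme described above (a sketch).

    The smaller amount Y is drawn uniformly from [low, high]; a run is
    discarded unless the opened amount could be either the smaller or
    the larger amount within the range, i.e. 2*low <= opened <= high.
    Returns the number of kept runs and the average gain from switching."""
    rng = random.Random(seed)
    kept, gain = 0, 0.0
    for _ in range(trials):
        y = rng.uniform(low, high)                       # smaller amount
        opened, other = (y, 2 * y) if rng.random() < 0.5 else (2 * y, y)
        if not (2 * low <= opened <= high):
            continue                                     # too close to an edge
        kept += 1
        gain += other - opened                           # result of switching
    return kept, gain / kept
```

Under this particular discard rule the kept runs do show a positive average gain from switching, though whether that matches the rule the post intended is an open question.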

Basically, what it all comes down to is that the problem is invalid because it assumes a uniform infinite distribution which can't exist, and I suppose this whole discussion is somewhat pointless (though both interesting and enlightening, so maybe not).

EDIT: I know there are distributions with infinite expectation values for which switching is beneficial - I just found it amusing that you happened to pick one where it isn't.

You may want to rethink that...

This argument hinges on the reveal of eight dollars implying that one of the following sequences took place:

H H H T

H H T H

H H T T

which is true. But it also assumes that the probability of each is equal, which is false in this situation. The probability of H H H T given "H H H ?" and dollars = 8 is 100%, thus the probability of H H H T given H H and dollars = 8 is 50%. Each of H H T H and H H T T has probability 25%.

Infinite E[X] is definitely a special case, and in terms of practicality rather than theoretical rigor should probably be ignored. It's not something that should be sprung unannounced. In roughly layman's terms, it means that the average value of the money in the bag at the start is unbounded or infinite - not just the possibilities of money in the bag being unbounded, but also the expected amount of money in the bag. So any finite amount of money found in the bag is below expectations and not acceptable. Assuming that there could be an unbounded amount of money in the bag in the first place is a bit of a stretch for a colloquial problem statement, but on top of that, infinite expectation? That's too much. If infinite expectation is to be considered, it should really be stated formally.

Except that you don't know how many times she flipped the coin. The idea is that she secretly flips until she gets heads, then gives you 2^n dollars, where n is the number of flips before getting heads, including the heads flip. The expected value for that transaction is infinite. Therefore, any finite amount you receive is disappointing, and it is always in your interest to ask her to do it again (or to switch to a second envelope where the same thing was done).

The basic idea, though, that we can translate over to the two-briefcases, one-doubling-the-other problem is that infinite expected value can lead you to want to switch no matter what finite value you are offered.
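The divergence is easy to verify term by term: the payout 2^n occurs with probability (1/2)^n, so every term of the expectation contributes exactly one dollar. A quick sketch:

```python
from fractions import Fraction

def partial_expectation(terms):
    """Partial expected value of the 2^n-dollar coin game.

    The payout 2^n occurs with probability (1/2)^n, so each term
    contributes exactly 1 dollar and the series diverges."""
    return sum(Fraction(1, 2**n) * 2**n for n in range(1, terms + 1))

# partial_expectation(10) == 10, partial_expectation(100) == 100: the
# expected value grows without bound as more flip counts are included.
```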

In terms of practicality, X has an upper bound - either the amount of money you can fit into a briefcase or the net worth of the person making the offer or the total money supply. Moreover, in terms of practicality, X wouldn't be uniformly distributed, even over the bounded range (who gives away all of their money?). The whole point of the problem is "theoretical rigor."

Besides, the conclusion that, as you put it, the expected amount of money in the bag is unbounded flows directly from the conclusion that the possibilities of money in the bag are unbounded. The one leads to the other.

For all non-wacky cases of the distribution of X, switching has zero expectation across all possible values of X, full stop. There can be positive and negative expectations for switching at specific values of X, but they average out to zero. Since knowing a specific measurement of X tells you little about the underlying distribution, switching is zero expectation.