Talk:Traveler's dilemma
This article is rated Start-class on Wikipedia's content assessment scale.
Centipede game
The article states that this is an extension of PD. I don't see it in the SA article, but would it be inappropriate to say that this is a variant of the centipede game? Smmurphy(Talk) 02:51, 23 May 2007 (UTC)
- It does bear a strong similarity to the centipede game, except the centipede game has more than one Nash equilibrium (but only one subgame perfect equilibrium), while this game has only one NE. This game also has some similarity to Guess 2/3 of the average in that it involves deep iterative deletion of dominated strategies in order to demonstrate the NE. I think it's okay to say it's similar to the PD. The socially optimal strategy in this game is not an NE, while dominance reasoning leads individuals to a very suboptimal outcome (like the PD). --best, kevin [kzollman][talk] 20:29, 23 May 2007 (UTC)
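For anyone who wants to see the iterated deletion spelled out, here is a minimal Python sketch. It assumes the payoff rule described in the article (equal claims are paid as claimed; otherwise both are reimbursed the lower claim, with a $2 bonus for the lower claimant and a $2 penalty for the higher) and uses a smaller claim range than $2-$100 purely so the loop finishes quickly; the full game behaves the same way.

def payoff(a, b, bonus=2):
    """Payoff to a traveler claiming a when the other traveler claims b."""
    if a == b:
        return a
    return a + bonus if a < b else b - bonus

claims = list(range(2, 21))  # 2..20 instead of 2..100, purely to keep the loop quick
removed = True
while removed:
    removed = False
    for a in list(claims):
        for d in claims:
            if d == a:
                continue
            diffs = [payoff(d, b) - payoff(a, b) for b in claims]
            if all(x >= 0 for x in diffs) and any(x > 0 for x in diffs):
                claims.remove(a)  # a is weakly dominated by d in the reduced game
                removed = True
                break
print(claims)  # [2] -- the high claims are eliminated one after another until only 2 survives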
Not an extension of Prisoner's Dilemma
The article stated that TD was an extension of PD if you are limited to 2 distinct numbers. This is incorrect. In PD, the temptation to defect is greater than the reward for cooperation. If you are limited to 2 and 100 as your only options, the temptation to defect is 4, and the reward for cooperation is 100. This would result in 2 Nash equilibria, and thus not a prisoner's dilemma. -Todd(Talk-Contribs) 00:46, 10 June 2007 (UTC)
- Indeed. Have re-formulated how this game is linked to PD. (see article) JocK 11:25, 10 June 2007 (UTC)
Style
What is the general consensus about when or how to capitalize Traveler's Dilemma, Prisoner's Dilemma, Nash's Equilibrium, etc.? I see it both ways, but usually capitalizing both words like any other proper noun. I think all the WP game theory articles should use consistent style throughout on these names. Venado 23:36, 15 June 2007 (UTC)
Confused by word "deep" for IEDS
I have never heard the term "deep" used for iterative elimination of dominated strategies. Should it be explained or references identified? Venado 00:02, 16 June 2007 (UTC)
- A deep iteration is an iteration that is applied many times. Perhaps there is a better word, but I can't think of any. JocK 06:53, 16 June 2007 (UTC)
- I think it better to leave off the word unless the term "deep iteration" is perhaps described through a link etc. Isn't the number of iterations to be performed simply determined by the number of choices available? Classic TD is max $100, but Basu describes a variant of TD played with only 2 choices. Venado 14:11, 16 June 2007 (UTC)
- The large number of iterations needed to reach the NE (in TD indeed related to the number of strategic choices) is key to the fact that in TD (and also in 'guess 2/3 of the average') real-life experiments (including real-money experiments) result in strong deviations from the game-theoretical solution. Would therefore not like to see the word 'deep' disappear. JocK 17:41, 16 June 2007 (UTC)
- Thank you. I think your article edit now explains its meaning much more clearly. Venado 15:04, 17 June 2007 (UTC)
- A "deep" iteration can also indicate a degree of non-linearity, self-organized criticality, or insensitivity to (relatively large) perturbations of a system's internal or external parameters. -- TheLastWordSword (talk) 21:43, 1 December 2016 (UTC)
Meaning of non-rational
The article includes "as well as those who understand themselves to be making a non-rational choice. Furthermore, the travelers are rewarded by deviating strongly from the Nash equilibrium in the game and obtain much higher rewards than would be realized with the purely rational strategy." and "others have suggested that a new kind of reasoning is required to understand how it can be quite rational ultimately to make nonrational choices"
I think nonrational is being used with a specific meaning that isn't the same as most people would expect, and I think this should be avoided if at all possible. I don't know the literature, but I would have thought we should be saying something more like:
"... as well as those who understand themselves to be making a decision which deviates from the Nash equilibrium. This policy to deviate from the Nash equilibrium is perfectly rational if you believe you have reason to believe the assumptions made in arriving at the Nash equilibrium do not apply. It is possible that there are many ways to see a flaw in the Nash equilibrium solution but one follows as an example:
If a player chooses 99 and if the other player knows this strategy then this strategy is dominated by a strategy that chooses 98. The second 'if' does not apply; the other player does not know the first's strategy. The Nash equilibrium follows as if players know each other's strategy, but they do not.
...others have suggested that a new kind of reasoning is required to understand how it can be quite rational ultimately to make choices that deviate from the Nash equilibrium solution."
I am still not sure about that last sentence: is a new reasoning required, or has such a new reasoning been suggested? (I can't be the first to suggest something as obvious as above.) crandles 12:32, 8 August 2007 (UTC)
- The start of the article should be a clue, "In game theory." The meaning of "rational" is the meaning it has in game theory. You are right that perhaps it should be explained, but the word shouldn't be changed, because it has a very specific meaning (see rational choice theory and preference). Also, please be careful of original research. Best, Smmurphy(Talk) 15:34, 8 August 2007 (UTC)
- I have (as an interim step) edited in a link to indicate the specific meaning. That doesn't mean I think that is adequate to address the concerns I raised. The start of the article may well be a clue, and while I didn't need that clue, I don't think that is enough. The purpose of an encyclopedia is not to just explain to those who already understand - that would make the encyclopedia useless. You seem to agree that non-rational has a specific meaning and is easily misunderstood. So is that acceptable? I suggest not. If not, do you reword to avoid that term like I have shown is possible, or do you use the term with added explanation?
- I agree original research should not be added. (Note that I said 'something like' rather than editing in that paragraph.) However there must be something in the literature that suggests why/where the assumptions in (correctly) deducing the Nash equilibrium solution are not a correct reflection of reality, in such a way that the Nash equilibrium solution is not an optimum strategy for a human playing the game in the real world. I suggest that needs to be added to the article. Note that the article is titled "Traveler's dilemma" not "The Nash equilibrium solution to Traveler's dilemma". crandles 16:48, 8 August 2007 (UTC)
- You are right about jargon, we should be more clear, although I don't think we should remove the word, as that could actually make it less clear to someone familiar with GT. I fully agree with your point that we should explain why we care about certain equilibria in the articles. Generally, people studying game theory are asked to find NE or SPE or whatever is appropriate in any game they think up, and often the motivation of why those equilibria are sought is lost. Also, those equilibria are not always intuitive, and other equilibria might be suggested for particular games that make more sense. I promise I'll find the time to try to bring in some alternative equilibria and, if relevant, empirical support for one or another (both here and Centipede game - let me know or bring a note to Wikiproject:Game Theory if you have an issue with a GT article and no one addresses it). I do not agree with you that Basu's application of the NE concept is incorrect; I'm not sure where to start, except to point to Nash equilibrium. As for the literature, I've only read the SA article, but I can search JSTOR and Google Scholar as well as anyone, and again I promise to poke around a bit soon. By the way, if you (crandles) have any general GT questions, feel free to email me. Best, Smmurphy(Talk) 05:09, 9 August 2007 (UTC)
This dilemma is retarded. It assumes the premise is to get 'more' money than the other player, whereas people's actual base premise, probably on an evolutionary level, is to go for maximum gain. If you want maximal personal gain, without consideration towards your 'competitor' then your best choice is between $98 and $100. If they renamed the money, say 'snoods', and told people that the aim is NOT to get the most snoods possible, in fact you don't even care about snoods, but that the goal is to get more 'snoods' than the competitor, then more people might go for the $2 option. But even so, this tactic in the normal world is so usually associated with a heavy desire for say, vengeance or competitiveness above all other considerations, that most people would be unlikely to pursue this tactic, as it is not usually maximally beneficial. - Sangrail. —Preceding unsigned comment added by 222.154.238.36 (talk) 03:56, 30 January 2008 (UTC)
- You misunderstand the purpose of game theory. Prisoner's Dilemma isn't a useful theory because prisoners actually have such a dilemma, but because the concepts behind it abstract to many other situations. Likewise, the potential usefulness of this theory isn't limited to playing a game with some bizarre rules. Also, you haven't thought through all of the implications of the game. If you are trying to maximize the amount of money you will take in, there is no reason to choose $100; if your opponent chooses $100, you will make $101 by choosing $99, rather than $100 for choosing $100; if your opponent chooses $99, then you make $99 instead of the $97 you would make for choosing $100; for every other value there is no difference between choosing $99 and $100, so $99 is never worse for you profit-wise than $100, and sometimes better. Similarly, there is no reason for your opponent to choose $100 either, as he will never do worse by choosing $99. If both of you figure that out, then you should also both realize that there is no reason to choose $99 either, for the same reason as $100. You can make similar realizations all the way down to $2. -Todd(Talk-Contribs) 13:01, 31 January 2008 (UTC)
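A quick numerical check of that argument, just an illustrative sketch assuming the payoff rule stated in the article (claims between $2 and $100, with a $2 bonus/malus):

def payoff(a, b):
    # amount received by the traveler claiming a when the other traveler claims b
    if a == b:
        return a
    return a + 2 if a < b else b - 2

claims = range(2, 101)
# claiming 99 is never worse than claiming 100, whatever the other traveler does...
assert all(payoff(99, b) >= payoff(100, b) for b in claims)
# ...and it is strictly better when the other traveler claims 99 or 100
assert payoff(99, 100) > payoff(100, 100) and payoff(99, 99) > payoff(100, 99)
print("99 weakly dominates 100")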
- But consider it from an expected value perspective. Going high and being right once will have roughly the same payoff as going low 50 times, and even if you go high and lose you will quite possibly do better than going low. That is, even if you bid 100 and the other tries 99, you're better off than if you had gone low. Even a low probability that the other will go high makes that the better choice; but the other player knows this (at least intuitively), and knows that you know it, knows that you know he knows it, etc. This being the case, you will probably both choose numbers in the high 90s, if not 100, and will be making the rational choice. A.J.A. (talk) 21:18, 18 July 2008 (UTC)
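That intuition can be made concrete with a small sketch. The assumption below, that the other traveler picks uniformly at random over $2-$100, is arbitrary and purely illustrative:

def payoff(a, b):
    if a == b:
        return a
    return a + 2 if a < b else b - 2

claims = list(range(2, 101))
# expected payoff of each claim against an opponent choosing uniformly at random
ev = {a: sum(payoff(a, b) for b in claims) / len(claims) for a in claims}
print(max(ev, key=ev.get), round(max(ev.values()), 2))  # 96 49.08 -- the best claims sit in the mid-90s
print(round(ev[100], 2), round(ev[2], 2))               # 49.02 3.98 -- claiming $2 does far worse on average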
- So, you're basically saying that the example used in the Traveler's dilemma is itself not an example of the Traveler's dilemma? And people wonder why the rationality of game theory is challenged.
- Common sense dictates that your ur-example should be the best example, for those of you whose common sense has been taken over by Game theory. Also, to most people, rational == common sense. I didn't want to confuse you by using one of your terms. Do I really need to throw examples at you?
- Anyways, I myself am not challenging game theory. I just think it needs a facelift. Because any time an outsider can be seen to upset the core tenets of a theory (whether that actually occurred or not), you've just given the theory some horrible PR.
- No, Todd isn't. The best move in the game is to choose $2 at the expense of your suitcase filled with antiques. That is why Kaushik Basu called it a 'Dilemma.' The thing to remember is that everything in game theory is always a game. Leaving aside all of the intellectual gymnastics, the objective is to win the game, even at the expense of all rationality. Kaushik Basu saw this when he called it a dilemma.—An Sealgair (talk) 03:35, 16 November 2014 (UTC)
So, I had a bunch of stuff here earlier, check the history if you care, but I think I found the answer - it just raises a new question. Do strategies stay dominated? 99$ beats 100$ if you don't assume superrationality or "hypercompetent superrationality", and similarly 98$ beats 99$ and 97$ beats 98$ and 96$ beats 97$, but 100$ and 95$ both beat 96$, and 100$ beats 95$. Thus the choices aren't strictly ordered, and the assumption that they are strictly ordered appears to be involved but unstated. Looked at another way, the sub-strategy of trying for the undercut bonus has a maximum payoff increase of 4$; thus if its corresponding payoff decrease exceeds 4$, trying for the undercut bonus becomes an inferior sub-strategy. Darekun (talk) 10:41, 6 February 2008 (UTC)
- The problem with your analysis is that $100 doesn't actually 'beat' $96. If you do manage to get rid of many of the middle numbers, you end up with a stag hunt, and hunting the stag isn't necessarily better than hunting the hare. -Todd(Talk-Contribs) 02:08, 8 February 2008 (UTC)
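A quick check confirms that neither claim dominates the other; a minimal sketch under the article's payoff rule:

def payoff(a, b):
    if a == b:
        return a
    return a + 2 if a < b else b - 2

claims = range(2, 101)
# each claim is strictly better than the other against some opponent claim, so neither dominates;
# this is the stag-hunt-like structure described above
print(any(payoff(100, b) > payoff(96, b) for b in claims))  # True, e.g. when the other claims 100
print(any(payoff(96, b) > payoff(100, b) for b in claims))  # True, e.g. when the other claims 96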
I don't quite understand
[edit]I don't know much about game theory (although I'm very keen to learn) but this isn't quite clear to me. When one decides to value the luggage as 2$, he/she receives either 2$ (in the event that the other one chose 2$ too) or 4$ (when the other one chose a larger sum). By choosing 100$ one could get any sum between 0$ and 100$. Even better would be 99$ that offers also the possibility of 101$. Why exactly is 2$ such a good choice? Kotiwalo (talk) 17:21, 21 June 2009 (UTC)
- Yes. This is my problem. I understand the chain of reasoning step by step, reducing to $2, but choosing $2 means you end up with $2 or $4. Surely in the chain of reasoning, as soon as you get to thinking about choosing $95 (minimum gain $93, maximum $97), you should realise that you would be better off choosing $100 (minimum gain $98, maximum $100). The only reason to follow the chain of reasoning down to $2 is if you are assuming the other player is 'rational' and you both have the primary objective to 'get more than' and secondary objective to 'not get less than' the other player. If your objective is to maximise your winnings, choosing $2 will never be better than choosing $100 (or $99). --nodmonkey (talk) 18:25, 18 December 2009 (UTC)
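For readers following this thread, here is a small sketch of the worst-case and best-case payoff of a few claims under the article's payoff rule; it is only illustrative and assumes nothing about what the other traveler will actually do:

def payoff(a, b):
    if a == b:
        return a
    return a + 2 if a < b else b - 2

claims = range(2, 101)
for a in (2, 95, 99, 100):
    worst = min(payoff(a, b) for b in claims)
    best = max(payoff(a, b) for b in claims)
    print(a, worst, best)
# prints: 2 2 4 / 95 0 97 / 99 0 101 / 100 0 100
# only the $2 claim rules out a $0 outcome, but it can never earn more than $4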
Is it because of the assumption that the parties don't trust each other? Kotiwalo (talk) 08:40, 29 June 2009 (UTC)
- Even if they don't trust each other it'd be stupid to choose $2. Unless this has been explained wrongly in the article, $2 is NOT the optimum choice, as it would be better for one person to get $98 and another $102 than for one to get $4 and the other $0. No-one except a madman would choose $2. Cls14 (talk) 15:57, 28 January 2010 (UTC)
- I think you are misinterpreting it -- if you choose $100 and I choose $2, you get $0 and I get $4. Thus, if you think the other person believes that the Nash Equilibrium is the optimum, then you would logically choose $2, because otherwise you get nothing.
- Well then, the article needs a rewording. It clearly says a $2 deduction if you write the higher value, not getting absolutely nothing.
174.113.156.80 (talk) 02:39, 8 March 2010 (UTC)
- That said, I think that the economic optimum differs based on who is playing the game, but is rarely the Nash Equilibrium (for $2-$100 range and reward/penalty of $2). IMHO, this article could differentiate a little more between the optimum and equilibrium... see here: http://www.uni-hohenheim.de/RePEc/hoh/papers/252.pdf Kjsharke (talk) 21:15, 25 February 2010 (UTC)
I don't get it
Why would anyone choose 2 when they can benefit by choosing 98, 99, or 100? (97 and less are strictly dominated strategies, which means that it is nonsense to choose 2) 24.1.201.172 (talk) 00:42, 22 June 2010 (UTC)
- That's sort of the point of the game. (2,2) is the sole Nash Equilibrium, but in reality, it would probably never occur. If you're perfectly rational and assume the other player is perfectly rational and you are thinking about what number he is going to write down, you get caught in the vicious circle which leads you to (2,2). 161.253.11.22 (talk) 16:57, 8 February 2013 (UTC)
- This is a game. Choosing $2 is the best move possible. So by choosing $2 you would win the game, even though you would lose most of the value of your suitcase filled with antiques (that's why it's called a dilemma).—An Sealgair (talk) 03:14, 16 November 2014 (UTC)
$100 limit?
How can one traveler get paid $101 if the airline insurance limit is $100?
The article states if Alice quotes $99 and Babar quotes $100 then Alice gets $101. Since this is a distinct possibility, how then was the airline manager prepared to pay out this amount when, by airline policy, he could only award $100? I suppose you could say he would just pay the dollar out of his own pocket. But that would be a rather off-handed and clumsy explanation given this is a game-theory problem which usually has a very strict design. And besides, just out of principle, it seems awkward to suggest that someone would risk ANY amount of his own money just to do a job like this, particularly if there is no chance of any gain on that risk.
So then the more satisfactory explanation is: He's penalizing Babar for quoting $100 by subtracting $2 from the now agreed-upon value of $99 for his luggage and then giving that $2 to Alice.
But for everything to be legit he would still have to indicate on his paperwork that he is awarding $99 to both passengers. So he would have to take $2 from Babar's awarded amount to leave him with $97 (in effect stealing from a customer for the airline) and then give Alice an extra $2 (in effect stealing from the airline to reward a customer). Babar could raise a big stink about the whole thing and the manager would lose his job when they look into the issue and find out how he's been falsifying baggage insurance records.
Considering this, if you assume that the airline manager cannot legally pay out any more than the $100 maximum, then this solves the dilemma. There would be little or no incentive for either passenger to quote $99 if they can't get the $101. And therefore we would never get into the reverse inductive reasoning that leads them down to the $2 quote.
Don't get me wrong. I understand the point of this problem well and it's very interesting. Just thought I'd point out what I believe to be a minor flaw. Racerx11 (talk) 05:03, 30 August 2010 (UTC)
- In none of the outcomes does the airline have to pay more than $200 for both travelers combined. JocK (talk) 13:05, 2 September 2010 (UTC)
- True. In the example I give, the total is $198. But it's not like he is able to just pull $198 in cash out of a register and divide it up however he sees fit. The airline will want a record of these two settlements. So he will have to fill out some kind of documentation, and he won't be able to indicate in this documentation that he paid either passenger more than $100. As explained in the middle of my original post, this means he would have to falsify this documentation, and the manipulation of the actual payoffs to the travelers could be viewed by the airline as stealing. Also I would expect an airline would require baggage loss settlements to be paid by check, possibly mailed to the recipients at a future date, further complicating the manager's situation. I know all this is irrelevant to the intents and purposes of the problem. It's only a flaw as a real-life example. Racerx11 (talk) 14:42, 2 September 2010 (UTC)
- Indeed, it's irrelevant for the arguments made in the article. However, if you consider it important enough to necessitate a modification of the text, I suggest you change the sentence "the airline is liable for a maximum of $100 per suitcase" into "the airline is liable for a maximum of $200 for both suitcases combined". JocK (talk) 16:58, 2 September 2010 (UTC)
- Hehe, OK, I don't want to put words in your mouth, but I'm kinda taking your point as: if I consider it important enough to elaborate on this talk page, then I should think it important enough to fix in the article. Well taken and considered even if that wasn't your intent. I will think about this and your suggestion for the fix. I must say, at first impression, the condition "the airline is liable for a maximum of $200 for both suitcases combined" still doesn't sit too well with me as a satisfactory real-life example. I guess I'm unsure if I should "fix" it or not. But I will think about all of this some more later. Thanks. Racerx11 (talk) 18:02, 2 September 2010 (UTC)
A different strategy for the travelers
Although there is nothing wrong with the inductive reasoning used by passengers to conclude that $2 is the best quote, it occurs to me that the travelers could each use a very similar line of reasoning and yet come to a very different conclusion.
Each passenger reasons: If I quote $99, I could get $101. But if the other is also thinking the same and quotes $99, THEN WE BOTH GET $99. So I should quote $98 in order to get $100, but then the other is going to think the same and quote $98. WE BOTH GET $98. So I quote $97 and so does he and we BOTH GET $97...and so on until we are both down to each of us quoting $2. Well that sucks! The other person seems reasonable and should think the same way as me. So I will quote $100 and so shall he.
To me this is very similar to the reasoning used in the "equilibrium strategy" which yields the $2 conclusion, but with just a minor tweak, it is changed to the $100 conclusion. The "purely optimal strategy" is still $100 but for different reasons. It's more to do with expected gain. Quoting $2 yields a 2 or 4 dollar gain, while quoting $100 yields a range from 0 to 100 dollars. You should only quote $2 if you are absolutely sure the other will quote $2. In the problem there is nothing to suggest that the other is certain to quote $2, so it's safe to assume it is likely he WON'T quote $2. It's the gambler's equivalent of making a $2 bet on a good chance of winning $100. That's a bet we should take all day long. So that's the optimum strategy. The equilibrium strategy in the problem is a little like an insurance policy that guarantees at least a $2 payoff, but no more than $4. The price of that policy is giving up the chance at a much higher payoff up to and including $100. Not a very attractive deal.
I don't agree that the equilibrium must necessarily be set at $2, since the inductive reasoning used can just as easily (as shown in the 1st paragraph) conclude with a strategy of quoting $100, which happens to be the purely optimal strategy anyway. Racerx11 (talk) 17:54, 30 August 2010 (UTC)
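The "the other will reason exactly as I do" idea above can be sketched numerically: if both travelers always land on the same claim, the common claim that pays most is $100. This only illustrates that particular assumption, not the Nash equilibrium analysis:

def payoff(a, b):
    if a == b:
        return a
    return a + 2 if a < b else b - 2

# if both travelers reason identically, only the diagonal outcomes (x, x) can occur
diagonal = {x: payoff(x, x) for x in range(2, 101)}
print(max(diagonal, key=diagonal.get))  # 100 -- the best common claim is the maximum allowed
print(diagonal[2], diagonal[100])       # 2 100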
- Indeed. Give it a few more years and maybe you'll see some professor coming up with this idea as an epiphany, although it seemed fairly obvious to me. Could be that someone else has already made the point - I haven't looked into it much. II | (t - c) 08:39, 25 November 2010 (UTC)
Definition of prediction
Just a random reader tab-surfing here; I have no wiki account to tie this to. I'm far from an expert on game theory, but this seems to me to rely on a premature termination of predictive strategy and response.
The entire paradox is defined by prediction of the second traveler's strategy by the first. Since it is a hypothetical situation, you in practice assign a behaviour to the second traveler, writing them a course of logic comparable to a computer program.
Suppose we assign the second traveler a "snap decision" or "stupid" (to be less than PC) strategy. The second traveler makes a decision on the spot without iterating through responsive theory whatsoever. They will pick $100 because this is the highest value, and not think about the first traveler's/"player"'s response. In this situation the optimal strategy is picking $99, thus netting $101.
On another tack, we assign the second traveler an "infinite contemplation" or "genius" behaviour. This is the behaviour seemingly employed in the article, where one follows prediction and response in a leap-frog down to the minimum value. This relies on the ability of the second traveler to apply logic perfectly to the first traveler under the unwavering assumption the first traveler will act accordingly - it does not, for example, consider the possibility the first traveler would be snap decision/stupid, in which case the optimum strategy would be to pick $99 as given above. The logic for the second traveler in this scenario seems to be "I operate under perfect logic and assume that the first traveler does as well". The second traveler believes the two travelers can "read" each other's "program" and thus, even though they cannot watch one another run the program, they know infallibly the outcome.
Now, if there were some clause that the travelers could not agree upon a price, that they had to select 2 separate values, and there was some kind of turn-taking mechanism, such as a man turning to each to ask what they pick, then the other, until both decide to settle, then in this scenario picking $2 instantly would be logical, as by getting to it first you prevent the other traveler from beating you to the punch, limiting them to only higher values and thus securing the $2 bonus. But there is no such mechanism in play, and both travelers can pick the same value. The second traveler's assumption that "I operate under perfect logic and assume that the first traveler does as well" can thus be condensed first to "The first traveler will arrive at the same conclusion as me" and then to "the first traveler will pick the same value as me". The second traveler thus believes that whatever value it chooses, the first traveler - the player - will also choose. This means first that attempting to pick less than the first traveler is logically futile, and second that if the second traveler picks $100, the first traveler will also pick $100, while if the second traveler attempts to undercut the first traveler and pick $99, the first traveler will also have attempted the exact same thing and picked $99, thus eliminating the possibility of earning the $2 bonus.
This could be repeated down to the base minimum of $2, as it has originally, but it seems to me premature to stop the chain of prediction there. Unless there is a flaw in my logic, the second traveler's behaviour is perfectly equivalent to "The first traveler will pick the same value as me". Reaching the $2 minimum should draw the conclusion "It is impossible to bid less than the first traveler and win the additional $2", not the given "This is the best solution". The bonus/malus is thus effectively unobtainable and removed from the game logic, having as much weight on the matter as the statement "The airline manager is a sexist and will always believe a man over a woman. Both travelers are male/female" - both are beyond reach and thus irrelevant. Having removed the bonus/malus factor from the equation, the optimal choice is $100, since the second traveler believes whatever it picks the first traveler will as well, and in this scenario the best choice is $100. The first traveler, the player, has a choice of either following this logic as well, picking $100 and getting $100, or following the logic exhibited on the page, picking $2 and getting $4, or believing they can trick the second traveler and picking $99 and getting $101.
In summary, either $100 or $99 is the optimal choice for the first traveler if the second traveler operates logically (the better of which I won't attempt to conclude here), but $2 is not the optimal choice unless you believe/know the second traveler to operate under a flawed logic, and not the optimal game theory of picking $100 under the belief the player will pick whatever value they do. It is entirely possible for the logic to break under other behaviours. For example, if dealing with a traveler using "mean" logic, you know they will choose $51 as the middle of the possible range, and $50 becomes the optimal strategy. As listed above, $99 is the best choice with a "snap decision" traveler, while $2 is optimal if dealing with a "paranoid"/"malicious" or similar traveler, who will place priority on how certain a pay-off is over how much it is, or who will place priority on inflicting the malus on the other traveler, respectively. As I stated in the beginning, the problem is defined by prediction and what you believe the other traveler will do - however, if you believe the other traveler to employ optimum game theory then $100 seems demonstrably the choice they will make, and either $100 or $99 the best choice "you" can make. Kinda long-winded, but there you go.
PS: Got my terminology a bit mixed up. I'm referring to Nash equilibrium. It seems to me Nash equilibrium would never arrive at $2, or more accurately would not stop at it, for the same reason it doesn't stop at $99, $98 or any other value. The point of termination seems arbitrary to me. — Preceding unsigned comment added by 81.178.243.190 (talk) 15:10, 9 July 2011 (UTC)
- Signed Zone (just so people have something to refer to me by if they do) — Preceding unsigned comment added by 81.178.243.190 (talk) 14:55, 9 July 2011 (UTC)
Changes to the introduction
Someone has added a paragraph at the bottom of the introduction, outlining a vague strategy involving completely arbitrary "estimates" of the probability that the other player will pick a certain value. I think the very specific meaning given to the term "rational" in game theory was missed by whoever posted that. I do not have an account so I don't want to make any edits, but if you think I am correct please delete that last paragraph. If I am missing something I'd be grateful if someone could explain. — Preceding unsigned comment added by 217.165.114.132 (talk) 03:40, 3 April 2012 (UTC)
Is 'proof' the right word for reasoning the optimal strategy is the minimum amount?
I think these 'proofs' are flawed as they assume that, whatever your reasoning, the other traveller will know the exact amount you arrive at. With such an assumption, the proofs are indeed proofs. However, if you are able to introduce a random element into your choice so that the other traveller cannot work out exactly what you will decide, then the logic of the proof falls apart. If the other traveller doesn't know exactly what you are going to write, then his policy cannot be perfectly tailored to be just below your figure.
There is an assumption of rationality, and if there is just one clear rational answer to the problem the other traveller can work this out and therefore tailor their answer to make use of this. This is not the same as assuming the other traveller knows all your reasoning and the conclusion you arrive at. It is a false dichotomy to say there are rational and non-rational answers and the non-rational ones can be eliminated to leave just one rational answer. There are rational and non-rational answers, but also cases where it is too difficult or impossible to decide what is rational. It is also a false choice to say you must choose 2, 3, 4 ... 100, because you can also decide on a strategy to choose between 99 and 100 in a way the other traveller cannot work out, or choose between 98 or 99 in a way the other traveller cannot work out, or between 98, 99 or 100 in a way the other traveller cannot work out, or ....
If you rule out the possibility of deciding at random in a way that the other traveller cannot work out, then choosing 2 would appear to be the Nash equilibrium. Such strategies that introduce a random element do not appear to be ruled out by the article. If we leave them not ruled out, perhaps the Nash equilibrium is still to choose 2, but the 'optimal strategy' isn't that. Should random-element strategies be ruled out, or should 'proof' be changed to something more like 'paradox creating apparent proof' or something else? crandles (talk) 15:16, 1 January 2016 (UTC)
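As one concrete illustration of a randomized choice, here is a sketch of the expected payoff of every pure claim against an opponent who mixes 50/50 between $99 and $100; the mix is hypothetical and chosen only for illustration, not taken from the article or any source:

def payoff(a, b):
    if a == b:
        return a
    return a + 2 if a < b else b - 2

mix = {99: 0.5, 100: 0.5}  # a hypothetical randomized strategy for the other traveler
ev = {a: sum(p * payoff(a, b) for b, p in mix.items()) for a in range(2, 101)}
best = max(ev.values())
print(sorted(a for a in ev if ev[a] == best))  # [98, 99] -- the best responses earn $100 on average
print(ev[2], ev[100])                          # 4.0 98.5 -- claiming $2 is far from a best response here
# note the 50/50 mix is not itself an equilibrium: against it, $100 earns less than $99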
- The main source of confusion I see with this section is the claim that the Nash equilibrium ($2) is the "optimum" choice. Most people probably think of "optimum" as maximizing the expected value of how much money the traveler would get. The experimental results and simple reasoning clearly show that choosing $2 does not maximize the expected value. My suggestion to improve this section is to clearly explain that this is just a proof of the Nash equilibrium. This article should also explain the assumptions of the Nash equilibrium. The game theory notion of "rational" should also be clearly defined as it relates to this game. The essence of this paradox is that the predictions of game theory in this case do not correspond to reality. Assuming that the mathematics of game theory is correct, this article should clearly explain the differences between the game theory model of the players and actual players, and how specific differences between the game theory model and reality lead to an inaccurate prediction. In response to your comment, it would be interesting to find a source that has done a probabilistic analysis of this problem; however, as far as I know, a probabilistic analysis is not related to the Nash equilibrium. Sapiens scriptor (talk) 00:52, 2 January 2016 (UTC)
- > interesting to find a source - See "Schelling Formalized: Strategic Choices of Non-Rational Personas" (Wolpert 2009) [1], discussed at [2]. The framing "In our framework a player i adopts a binding “persona” — a temporary utility function — that they honestly signal before play" makes it seem like they are investigating something other than a traveller's dilemma with rational players, which wasn't quite what I was trying to suggest, namely that it can be rational not to follow the Nash equilibrium. It seems quite a lot like the unexpected hanging paradox, and that is a paradox, not accurate logic. Anyway, I don't want to introduce OR, whether right or wrong.
- I agree with your "clearly explain that this is just a proof of the Nash equilibrium. This article should also explain the assumptions of the Nash equilibrium. The game theory notion of "rational" should also be clearly defined as it relates to this game." crandles (talk) 11:12, 2 January 2016 (UTC)
Dr. Morone's comment on this article
[edit]Dr. Morone has reviewed this Wikipedia page, and provided us with the following comments to improve its quality:
"When the game is played experimentally, most participants select a value close to $100.
Please change with: "When the game is played experimentally, most participants select a value higher than the Nash equilibrium and closer to $100 (corresponding to the Pareto optimal solution). More precisely, the Nash equilibrium strategy solution proved to be a bad predictor of people’s behaviour in a TD with small bonus/malus and a rather good predictor if the bonus/malus parameter was big (Capra, C.M., J.K. Goeree, R. Gómez and C.A. Holt, 1999. “Anomalous behavior in a traveler’s dilemma?” American Economic Review 89(3), 678–690.)."
Add at the end of the paragraph "Experimental results" the following update on new developments:
"Recently, the TD was tested with decision undertaken in groups rather than individually, in order to test the assumption that groups decisions are more rational, delivering the message that, usually, two heads are better than one (Cooper, D.J. and J.H. Kagel, 2005. “Are two heads better than one? Team versus individual play in signaling games”. American Economic Review 95(3), 477–509). Experimental findings show that groups are always more rational – i.e. their claims are closerto the Nash equilibrium - and more sensitive to the size of the bonus/malus (Morone, A., P. Morone and A.R. Germani, 2014. "Individual and group behaviour in the traveler’s dilemma:An experimental study", Journal of Behavioral and Experimental Economics 49 (2014), 1–7)."
We hope Wikipedians on this talk page can take advantage of these comments and improve the quality of the article accordingly.
Dr. Morone has published scholarly research which seems to be relevant to this Wikipedia article:
- Reference : Andrea Morone & Piergiuseppe Morone, 2012. "Individual and Group Behaviours in the Traveller's Dilemma: An Experimental Study," Working Papers 2012/09, Economics Department, Universitat Jaume I, Castellon (Spain).
ExpertIdeasBot (talk) 18:56, 27 June 2016 (UTC)
Low quality article
This article is of extremely low quality and may cause more confusion than it does good.
- The "solution" of the game talks about asymmetric information, which hints at the airline being a player in the game, which it is not.
- It also makes no sense to even mention Backwards Induction as a solution algorithm, since it only applies to dynamic games and this is a static game.
- If the goal of the section "Analysis" is to prove that (2,2) is a Nash Equilibrium, then the section should be written up in this way: it should just prove that the best response to the opponent reporting 2 is to also report 2. Then one could go on to also prove that there can be no other Nash Equilibrium (x,x) where x > 2 (this can be proven simply by showing that the best response to x is x-1 for any x > 2).
- There is a sentence saying that "The traveler's dilemma can be framed as a finitely repeated prisoner's dilemma". It cites two references, where the first does not exist online (there are dead links to the paper online), and the second does not support the claim (it only references the word "repeat" once and does not support the sentence). This claim does not make much sense: the game is a static game, and I don't see what it has to do with the repeated prisoners' dilemma at all.
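The suggested restructuring of the "Analysis" section can be checked directly; here is a minimal sketch under the payoff rule stated in the article (claims $2-$100, $2 bonus/malus):

def payoff(a, b):
    if a == b:
        return a
    return a + 2 if a < b else b - 2

claims = range(2, 101)

def best_responses(b):
    top = max(payoff(a, b) for a in claims)
    return [a for a in claims if payoff(a, b) == top]

print(best_responses(2))  # [2] -- reporting 2 is the unique best response to 2, so (2, 2) is a Nash equilibrium
print(all(best_responses(x) == [x - 1] for x in range(3, 101)))
# True -- the best response to any x > 2 is x - 1, so no profile (x, x) with x > 2 can be an equilibrium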