A Theory of Deception*

David Ettinger† and Philippe Jehiel‡

5th January 2009

Abstract

This paper proposes an equilibrium approach to belief manipulation and deception in which agents only have coarse knowledge of their opponent's strategy. Equilibrium requires the coarse knowledge available to agents to be correct, and the inferences and optimizations to be made on the basis of the simplest theories compatible with the available knowledge. The approach can be viewed as formalizing into a game theoretic setting a well documented bias in social psychology, the Fundamental Attribution Error. It is applied to a bargaining problem, thereby revealing a deceptive tactic that is hard to explain in the full rationality paradigm.

Deception and belief manipulation are key aspects of many strategic interactions, including bargaining, poker games, military operations, politics and investment banking. Anecdotal evidence of belief manipulation and deception is plentiful, and Michael Lewis's (1990) best-seller "Liar's Poker" reports colorful illustrations of such strategic behaviors in the world of investment banking in the late 1980s. For example, Lewis explains how "he spent most of his working life inventing logical lies" that worked amazingly well (thanks to their logical appearance, see Lewis (1990) page 186). From the viewpoint of game theory,

*We would like to thank the editor and the referee for useful comments. We also thank K. Binmore, D. Fudenberg, D. Laibson, A. Newman, A. Rubinstein, the participants at ESSET 2004, Games 2004, ECCE 1, THEMA, Berkeley, Caltech, Institute for Advanced Study Jerusalem, the Harvard Behavioral/Experimental seminar, Bonn University, the Game Theory Festival at Stony Brook 2005, and the conference in honor of Ken Binmore, UCL 2005, for helpful comments. We are grateful to E. Kamenica for pointing out the literature on the Fundamental Attribution Error.
†Université de Cergy-Pontoise, THEMA, F-95000 Cergy-Pontoise, France. ‡PSE and UCL; jehiel@enpc.fr.

belief manipulation and deception are delicate to capture because traditional equilibrium approaches assume that players fully understand the strategy of their opponents.¹ We depart from this tradition by assuming that players may have a partial rather than total understanding of the strategy of their opponents. This in turn allows us to propose an equilibrium approach to deception, where deception is defined to be the process by which actions are chosen to manipulate beliefs so as to take advantage of the erroneous inferences.²

To illustrate the phenomenon of deception, we will consider and formalize the following bargaining situation. The owner of a house, Mrs A, wishes to sell her good at some price considered to be high (say above the market price as perceived by real estate agents). A potential buyer, Mr B, comes in. Mr B will accept paying the high price if he is afraid enough that another buyer may be interested in the house. Otherwise, he will prefer to continue bargaining in the hope of getting a lower price. The owner, Mrs A, after mentioning some slight problems with the heating system (thereby conceding a small discount in the price), tells Mr B that there is another potential buyer, and so she is not willing to discount the price any further. Mr B has no way to verify Mrs A's claims (in a reasonable amount of time). Should Mr B trust Mrs A when she says that there is another buyer, or is she bluffing? In the theory to be developed below, mentioning that there are heating deficiencies will make it more likely in Mr B's eyes that Mrs A is an honest seller always telling the truth. As a result, Mr B will be convinced enough that there is indeed another buyer when Mrs A says so, and he will accept paying the high price (minus the small discount conceded for the reported heating deficiencies).
By mentioning that there are deficiencies, Mrs A manipulates Mr B's belief about her true nature (whether she is an honest seller or an opportunist), and she exploits Mr B's inference error when she says that there is another buyer. Such a deceptive tactic works in our theory insofar as mentioning small deficiencies is more representative of honest sellers than of opportunist sellers over all transaction situations (with high or low prices, say), and in forming his judgement about Mrs A's type, Mr B somehow only considers the general attitudes of the various types of sellers and does not distinguish how the various types of sellers behave in those various transaction situations with high or low price.

¹As regards Lewis's deceptive tactic, it is not at all clear from a game theoretic perspective why the fact that the lie is logical (in a given instance) should increase the likelihood that it is believed. If liars always use logical lies, then logic should even heighten the listener's suspicion. ²From the perspective of this paper, logic may be viewed as more typical of true statements (over all possible statements), thereby making the use of logical lies more effective.

We will present a detailed formalization of the above deceptive bargaining tactic in Section II, pointing out that it would not work if Mr B were fully rational.³ Before developing that application, we present in Section I a general framework that allows us to model quite generally such inference errors as the one made by Mr B in a game theoretic equilibrium approach. Specifically, the class of games considered in this paper are two-player multi-stage games with incomplete information and observable actions in which players may be of several types, past actions are assumed to be observable by everyone, and types may affect the preference relations of players.
A key non-standard ingredient is that players are also parameterized by how finely they understand their opponent's strategy. In addition to their preference and informational characteristics, players are endowed with cognitive types. Following Jehiel (2005), cognitive types are modelled by assuming that players partition the decision nodes of their opponents into various sets referred to as analogy classes, and that players understand only the aggregate behavior of their opponent over the various decision nodes forming their analogy classes. Cognitive types are further differentiated according to whether or not the player distinguishes the behaviors of the various types of his opponent. Thus, cognitive types may vary in two dimensions: a player may be more or less fine in the partition of the decision nodes of his opponent (what we call the analogy part), and a player may or may not distinguish the behaviors of the various types of his opponent (the sophistication part). In the above bargaining story, Mr B bundles the announcement nodes of sellers into one analogy class, whether the price is high or low, and he distinguishes the behaviors of honest and opportunist sellers. Thus, Mr B uses a coarse analogy partition, but he is sophisticated in the terminology just defined. Given a strategic environment that includes the specification of players' cognitive types, we define an equilibrium concept that we refer to as the analogy-based sequential equilibrium. In equilibrium, players have correct expectations about the aggregate behavior of their opponents in their various analogy classes - these are referred to as analogy-based expectations.
Whenever they move, players play best-responses to their analogy-based expectations and to their belief about the type of their opponent. As the game proceeds, players update their beliefs about the type of their opponent according to Bayes' rule as derived from their analogy-based expectations.⁴

³Indeed, if Mr B were fully rational, he should understand that opportunist sellers more systematically concede that there are small deficiencies when the price is high, and thus Mr B should be even more cautious about the true presence of another buyer when told that there are heating problems.

In Section I we show that in finite environments (finite numbers of types, actions, and nodes), an analogy-based sequential equilibrium always exists. We also suggest how to interpret the solution concept from a learning perspective. Finally, we illustrate the working of the concept in a simple two-person two-period zero-sum game in which the payoff structure is commonly known to players but players may have cognitive types other than the fully rational one. The example serves to illustrate 1) why a player with non-fully rational cognitive ability cannot be viewed as a rational player who does not distinguish between some situations (a player with coarser information), 2) how, in a mixed population of rational and coarse players, a rational player always performs better, and 3) why, in our framework with incorrect inferences, there may be room for reputation building even in zero-sum games where there is no value to commitment.⁵ The framework of Section I is then used in Section II to formalize the above deceptive bargaining tactic. Section III concludes. We shall start, however, by situating our work in the perspective of various literatures.

Related literature

There have been many attempts to relax the rationality assumptions imposed on economic agents.
These include relaxing the ability of agents to optimize their strategy given their beliefs (as in the Quantal Response Equilibrium, Richard McKelvey and Thomas Palfrey, 1995) or relaxing the ability of agents to form correct expectations. By maintaining the ability of agents to optimize their strategies given their beliefs, our paper contributes to the second form of departure from rationality, which we refer to as cognitive limitations.

⁴More precisely, we assume that players adopt the simplest representation of their opponent's strategy that is consistent with their knowledge (the analogy-based expectation). That is, the opponent's behavior in the various nodes bundled into one analogy class is assumed to be the same, and in equilibrium it coincides with the aggregate distribution of the opponent's behavior over the set of nodes forming the analogy class. The evolution of the belief system is then similar to that in sequential equilibrium (David Kreps and Robert Wilson (1982a)) except that it is based on the conjecture about the opponent's strategy as just defined (rather than on the opponent's true strategy).

⁵The traditional approach to reputation pioneered by Thomas Schelling (1960) associates the idea of successful reputation building with the successful ability to commit to a particular behavior (which is of no use in a zero-sum game, due to the minmax theorem).

Several routes have been pursued to model cognitive limitations: either introducing explicit biases in the inference process (see Daniel Kahneman et al., 1982 for an exposition of such biases as the gambler's fallacy, the base rate neglect, the conjunction fallacy, etc.)
or deriving the expectations from limited introspective reasoning (as in the level-k approach, Dale Stahl, 1993) or deriving the expectations and inference process from the erroneous or coarse perception held by agents about their environment (approaches based on subjective priors or the self-confirming equilibrium, and this paper, respectively). Our paper contributes to the last of these routes by further postulating that the coarse perception held by boundedly rational agents is the simplest representation - or model of others - that is consistent with their coarse statistical knowledge. Such a line of research that views bounded rationality equilibrium concepts as a result of partial learning is the common theme of the limited foresight equilibrium (Jehiel, 1995), the analogy-based expectation equilibrium (Jehiel, 2005) and the valuation equilibrium (Jehiel and Samet, 2007).⁶ Jehiel (2005) developed the analogy-based expectation equilibrium concept to capture bounds on rationality that accommodate coarse perception but fully rational information processing; it was extended to static games of incomplete information in Jehiel and Frederic Koessler (2008). Our aim in this paper is to extend this basic structure to extensive games with incomplete information, which is necessary to analyze the evolution of beliefs over time. The extension of these concepts to dynamic games allows us to examine the basic ideas of belief manipulation and deception. Connected to the analogy-based expectation equilibrium, Erik Eyster and Matthew Rabin (2005) have proposed a concept for static games of incomplete information, called cursed equilibrium, in which players do not fully take into account how other people's actions depend on their information.⁷ In problems with interdependent preferences, the cursed equilibrium of Eyster and Rabin gives rise to erroneous equilibrium beliefs (as the analogy-based expectation equilibrium does) about the relation between the strategy and the signal of the opponent.
Yet, by the very static nature of the games considered by Eyster and Rabin, no belief manipulation can be captured by their approach, which constitutes a key difference from the present framework.

⁶Other approaches based on the idea that, to facilitate learning, agents do not consider the set of all possible strategies but only a subset are also available; see in particular Olivier Compte and Andrew Postlewaite (2008).

⁷The cursed equilibrium was developed independently of the analogy-based expectation equilibrium. The fully cursed equilibrium can be viewed as a special case of the analogy-based expectation equilibrium in which players' analogy partitions coincide with their own information partitions. The partially cursed equilibrium can be viewed as an alternative approach to the idea of partial sophistication to that captured by the analogy-based expectation equilibrium (see Jehiel and Koessler (2008) for further discussion).

Even though the starting point of our approach is about modeling the consequences of the coarse perception of agents with cognitive limitations as just explained, it turns out that our paper can also be viewed as formalizing a well studied bias in social psychology, i.e., the Fundamental Attribution Error (FAE) (see Edward Jones and Keith Davis (1965), Lee Ross (1977), Ross, Teresa Amabile and Julia Steinmetz (1977)). Roughly speaking, the FAE is "the tendency in forming one's own judgement about others to underestimate the importance of the specific situation in which the observed behavior is occurring" (Maureen O'Sullivan (2003)).⁸ In the above bargaining story, Mr B is subject to the FAE. In forming his judgement about whether he is facing an honest seller after Mrs A has reported minor heating deficiencies, Mr B "ignores" that sellers' attitudes are not the same whether the price is high or low. Our model provides an explicit way to formalize such a neglect by Mr B.
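A small numerical sketch may help fix ideas. All numbers below are hypothetical assumptions chosen for illustration, not taken from the paper: a coarse Mr B pools the high- and low-price announcement nodes into one analogy class and so updates on each seller type's average rate of mentioning deficiencies, while a fully rational Mr B would condition on the high-price situation he is actually in.

```python
# Hypothetical numbers illustrating Mr B's coarse inference (all
# probabilities below are assumptions for illustration, not from the paper).
prior_honest = 0.5

# Probability each seller type mentions small deficiencies, by situation.
p_mention = {
    "honest":      {"high_price": 0.8, "low_price": 0.8},
    "opportunist": {"high_price": 0.9, "low_price": 0.2},
}
p_situation = {"high_price": 0.5, "low_price": 0.5}

def posterior_honest(p_honest_mentions, p_opp_mentions):
    """Posterior that the seller is honest after hearing about deficiencies."""
    num = prior_honest * p_honest_mentions
    den = num + (1 - prior_honest) * p_opp_mentions
    return num / den

# Coarse Mr B: pools high- and low-price announcement nodes into one
# analogy class, so he updates on each type's *average* mentioning rate.
avg = {t: sum(p_situation[s] * p_mention[t][s] for s in p_situation)
       for t in p_mention}
coarse = posterior_honest(avg["honest"], avg["opportunist"])

# Fully rational Mr B: conditions on the actual (high-price) situation.
rational = posterior_honest(p_mention["honest"]["high_price"],
                            p_mention["opportunist"]["high_price"])

print(f"coarse posterior:   {coarse:.3f}")    # ~0.593
print(f"rational posterior: {rational:.3f}")  # ~0.471
assert coarse > rational  # the FAE inflates Mr B's trust in Mrs A
```

With these numbers the coarse posterior exceeds the rational one precisely because the opportunist's frequent high-price mentions are diluted by her rare low-price mentions, which is the neglect the FAE describes.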
There have been several earlier game theoretic attempts to capture the phenomenon of deception. These include the ideas of playing mixed strategies (to avoid being detected) in zero-sum interactions (John von Neumann and Oskar Morgenstern (1944)) and of playing a pooling or semi-pooling equilibrium (thereby not revealing one's own type) in signaling games (Michael Spence (1973)) or communication games (Joel Sobel (1985) and Vincent Crawford (2003)) or repeated games (Kreps and Wilson (1982b), Kreps et al. (1982), Drew Fudenberg and David Levine (1989)). Our approach to deception differs from these earlier approaches in that it is based on the idea of belief manipulation (by which we mean that some players end up having erroneous beliefs based on their observation), which cannot arise in the standard rationality paradigm considered in these earlier approaches. In our theory, deception can be viewed as the exploitation by rational players of the FAE made by other players, where the FAE allows for belief manipulation.

⁸Ross et al. (1977) report a striking example in support of the FAE. In a pool of Stanford students from various fields, subjects were divided between questioners and answerers. The questioners were requested to ask the answerers difficult questions. Every questioner was matched to a single answerer who was almost always from a different field. After the quiz (answerers and questioners then knew how many correct answers were given in their match), it was observed that answerers consistently thought they were worse than questioners, thereby ignoring the fact that the pool of questions on which they performed relatively poorly was not generated at random but drawn from the esoteric knowledge of the questioner. Note that answerers were explicitly told before the quiz that questioners could freely choose the questions they liked best.
Finally, it should be mentioned that our setup can be used to formalize a model of persuasion in the vein of the one developed independently of this paper by Sendhil Mullainathan et al. (2008), in which a persuader finds it advantageous to send (costly) messages even when they are not informative.⁹

I. A General Framework

A. The class of games and the cognitive environment

We consider multi-stage two-player games with observed actions and incomplete information. Extension to more than two players raises no conceptual difficulties. Each player i = 1, 2 can be one of finitely many types θi ∈ Θi. Player i knows his own type θi, but not that of player j, j ≠ i. We assume that the distribution of types is independent across players, and we let p_θi > 0 denote the prior probability that player i is of type θi. These prior probabilities p_i = (p_θi)_{θi∈Θi} are assumed to be known to the players. Players observe past actions and earlier moves by nature except for the choice of their opponent's type. Moreover, there is a finite number of stages, and, at every stage and for every player including nature, the set of pure actions is finite. Player i plays at the same set Hi of histories, whatever his type θi.¹⁰ Moreover, the action space of player i at history h ∈ Hi is common to all types θi and is denoted by Ai(h). The set of all histories is denoted by H and the set of terminal histories is denoted by Z. The set of players who must move at history h is denoted by I(h), and ha is the history starting with h and followed by a, where a ∈ ×_{i∈I(h)} Ai(h) is the action profile played by the players who must move at h. Each player i is endowed with a VNM utility function defined on lotteries over terminal histories h ∈ Z. Player i's VNM utility is denoted by ui and it may depend on the types of

⁹In their model, such an application requires nature in state s = 1 (or 2) to be identified with the strategic persuader in state s = 0.
It also requires assuming that the listener pools the message moves in state s = 1 (or 2) and s = 0 into one analogy class (while distinguishing the persuader's behavior according to her private information). The analogy-based sequential equilibrium thus obtained corresponds to the more "Bayesian" approach they present in appendix II, thereby providing a learning justification to that approach rather than to the simpler one pursued in the body of their paper.

¹⁰A history refers to the earlier moves made by the players and possibly the earlier moves made by nature except for the choice of players' types, which is not included in the history. Given our observability assumptions, histories are commonly known to the players.

players i and j together with the terminal history. That is, ui(h; θi, θj) is player i's payoff if the terminal history h ∈ Z is reached, and players i and j are of type θi and θj, respectively. Each player i is assumed to know his own payoff structure (but not a priori that of his opponent). The non-standard aspect of our strategic environment Γ lies in the definition of the types θi. Types θi are made of two components θi = (ti, ci) where ti is the preference type of player i that acts on players' preferences - this is the standard component in the type - and ci is the cognitive type of player i, defining how finely player i understands the strategy of player j - this is the non-standard component in the type. As common sense suggests, the cognitive types of players do not affect players' preferences over the various terminal nodes. That is, for every terminal history h ∈ Z, we have that ui(h; θi, θj) = ui(h; θ'i, θ'j) whenever θi and θ'i have the same preference type ti, and θj and θ'j have the same preference type tj. Cognitive types ci are defined as follows.
Each player i forms an expectation about the behavior of player j by pooling together several histories h ∈ Hj at which player j must move, and each such pool is referred to as a class of analogy. Players are also differentiated according to whether or not they distinguish between the behaviors of the various types of their opponent. Formally, a cognitive type ci of player i is characterized by (Ani, δi), where Ani stands for player i's analogy partition and δi is a dummy variable that specifies whether or not type θi distinguishes between the behaviors of the various types θj of player j. We let δi = 1 when type θi distinguishes between types θj's behaviors and δi = 0 otherwise. As in Jehiel (2005), Ani is defined as a partition of the set Hj of histories at which player j must move into subsets or analogy classes αi.¹¹ When h and h' are in the same analogy class αi, it is required that Aj(h) = Aj(h'). That is, at two histories h and h' which player i pools together, the action space of player j should be the same, and A(αi) denotes the common action space in αi.

¹¹A partition of a set X is a collection of subsets x_k ⊆ X such that ∪_k x_k = X and x_k ∩ x_{k'} = ∅ for k ≠ k'.

B. Analogy-based sequential equilibrium

Analogy-based expectations: An analogy-based expectation for player i of type θi is denoted by β_θi. It specifies, for every analogy class αi of player i of type θi, a probability measure over the action space A(αi) of player j. Types θj of player j are distinguished or not by player i according to whether δi = 1 or 0. If δi = 1, β_θi is a function of θj and αi, and β_θi(θj, αi) is player i's expectation about the average behavior of player j with type θj in class αi.
If δi = 0, player i merges the behaviors of all types θj of player j, and β_θi is a function of αi alone: β_θi(αi) is then player i's expectation about the average behavior of player j in class αi (where the average is taken over all possible types).¹² We let β_i = (β_θi)_{θi∈Θi} denote the analogy-based expectation of player i for the various possible types θi ∈ Θi.

Strategy: A behavioral strategy of player i is denoted by s_i. It is a mapping that assigns to every history h ∈ Hi at which player i must move a distribution over player i's action space Ai(h).¹³ We let σ_θi denote the behavioral strategy of type θi, and for every h ∈ Hi we let σ_θi(h) ∈ ΔAi(h) denote the distribution over Ai(h) according to which player i of type θi selects actions in Ai(h) when at h. We let σ_θi(h)[ai] be the corresponding probability that type θi plays ai ∈ Ai(h) when at h, and we let σ_i = (σ_θi)_{θi} denote the strategy of player i for the various possible types θi; σ will denote the strategy profile of the two players.

Belief system: When player i distinguishes the types of player j, i.e. δi = 1, he holds a belief about the type of his opponent and this belief may typically change as time proceeds (and new observations become available). Formally, we let μ_θi denote the belief system of player i of type θi, where μ_θi(h)[θj] is the probability that player i of type θi assigns to the event "player j is of type θj" conditional on the history h being realized. When player i does not distinguish the types of player j, no belief system is required. To

¹²We could more generally allow players to distinguish the types partially. This would lead to a partitional approach defining which of the types are being confused. The resulting presentation would however be more cumbersome without bringing additional insights.

¹³Mixed strategies and behavioral strategies are equivalent, since we consider games of perfect recall.
save on notation, we assume that in this case player i's belief coincides with the prior p_j throughout the game. We call μ_i the belief system of player i for the various possible types θi, and we let μ be the profile of belief systems for the two players i = 1, 2.

Sequential rationality: From his analogy-based expectation β_θi, player i of type θi derives the following representation of player j's strategy: player i perceives player j to play at every history h ∈ αi according to the average behavior in class αi.¹⁴ The induced strategy depends on the type θj of player j whenever δi = 1 but not when δi = 0. At every history h ∈ Hi where he must play, player i is assumed to play a best-response to this perceived strategy of player j as weighted by his belief μ_θi(h). Formally, we define the θi-perceived strategy of player j, σ^θi_j, as

If δi = 1:  σ^θi_θj(h) = β_θi(θj, αi)  for every h ∈ αi and θj ∈ Θj.
If δi = 0:  σ^θi_θj(h) = β_θi(αi)  for every h ∈ αi and θj ∈ Θj.

Given the strategy s_i of player i and given history h, we let s_i|h denote the continuation strategy of player i induced by s_i from history h onwards. We also let u_i(s_i|h, s_j|h; θi, θj) denote the expected payoff obtained by player i when history h has been realized, the types of players i and j are given by θi and θj respectively, and players i and j behave according to s_i and s_j respectively.

Definition 1 (Criterion) Player i's strategy σ_i is a sequential best-response to (β_i, μ_i) if and only if for all θi ∈ Θi, for all strategies s_i and all histories h ∈ Hi,

Σ_{θj∈Θj} μ_θi(h)[θj] u_i(σ_θi|h, σ^θi_θj|h; θi, θj)  ≥  Σ_{θj∈Θj} μ_θi(h)[θj] u_i(s_i|h, σ^θi_θj|h; θi, θj).

Consistency: In equilibrium, two notions of consistency are required. First, analogy-based expectations are required to be consistent with the strategy profile.

¹⁴This is the simplest representation compatible with type θi's knowledge.
That is, they must coincide with the real average behaviors in every considered class and for every possible type (if types are differentiated), where the weight given to each element of an analogy class must itself be consistent with the real probability of visiting this element. A learning interpretation of this consistency requirement will be suggested. Second, the belief system held by players must be consistent with their expectations, as in Sequential Equilibrium. Formally, letting P^σ(θi, θj, h) denote the probability that history h is reached when players i and j are of types θi and θj respectively, and players play according to σ, the consistency of the analogy-based expectations is defined as:

Definition 2 Player i's analogy-based expectation β_i is consistent with the strategy profile σ if and only if:

• For any (θi, θj) ∈ Θ such that δi = 1, and for all αi ∈ Ani,

β_θi(θj, αi) = [ Σ_{(θ'i, h)∈Θi×αi} p_θ'i P^σ(θ'i, θj, h) σ_θj(h) ] / [ Σ_{(θ'i, h)∈Θi×αi} p_θ'i P^σ(θ'i, θj, h) ]

whenever there exist θ'i and h ∈ αi such that P^σ(θ'i, θj, h) > 0.

• For any θi ∈ Θi such that δi = 0, and for all αi ∈ Ani,

β_θi(αi) = [ Σ_{(θ'i, θj, h)∈Θ×αi} p_θ'i p_θj P^σ(θ'i, θj, h) σ_θj(h) ] / [ Σ_{(θ'i, θj, h)∈Θ×αi} p_θ'i p_θj P^σ(θ'i, θj, h) ]

whenever there exist θ'i, θj and h ∈ αi such that P^σ(θ'i, θj, h) > 0.

The consistency of the belief system is defined as:

Definition 3 Player i's belief system μ_i is consistent with the analogy-based expectation β_i if and only if for any (θi, θj) ∈ Θ such that δi = 1,

μ_θi(∅)[θj] = p_θj.

And for all histories h, ha:

μ_θi(ha)[θj] = μ_θi(h)[θj]  whenever h ∉ Hj;

μ_θi(ha)[θj] = μ_θi(h)[θj] σ^θi_θj(h)[aj] / Σ_{θ'j∈Θj} μ_θi(h)[θ'j] σ^θi_θ'j(h)[aj]  whenever h ∈ Hj, there exists θ'j s.t. σ^θi_θ'j(h)[aj] > 0 and player j plays aj at h.

While the consistency of the analogy-based expectations (Definition 2) should be thought of as the limiting outcome of a learning process, the consistency of the belief system μ_i (Definition 3) should be thought of as an expression of player i's inference process.
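The belief update in Definition 3 is ordinary Bayes' rule, except that it is applied to the perceived (analogy-based) strategies rather than the opponent's true strategy. The sketch below illustrates this with simple dictionaries; the data structures and the numbers (taken from a hypothetical version of the seller story) are assumptions for illustration, not the paper's notation.

```python
# Sketch of the belief update in Definition 3: after observing player j
# play action a_j at history h, player i updates mu using his *perceived*
# strategy (derived from the analogy-based expectation), not j's true
# strategy. Data structures here are illustrative assumptions.
def update_belief(mu, sigma_perceived, a_j):
    """Bayes update of mu over opponent types given observed action a_j.

    mu:              {theta_j: prob} -- current belief at history h
    sigma_perceived: {theta_j: {action: prob}} -- the theta_i-perceived
                     strategy of each type of j at h (constant over the
                     analogy class containing h)
    """
    weights = {t: mu[t] * sigma_perceived[t].get(a_j, 0.0) for t in mu}
    total = sum(weights.values())
    if total == 0.0:
        # a_j was perceived as impossible: Definition 3 imposes no
        # restriction here (the trembles in Definition 4 handle this).
        return dict(mu)
    return {t: w / total for t, w in weights.items()}

# Hypothetical example: Mr B perceives the honest type as mentioning
# deficiencies with average probability 0.8 over the analogy class and
# the opportunist with 0.55; he starts from a (0.5, 0.5) prior.
mu0 = {"honest": 0.5, "opportunist": 0.5}
perceived = {"honest": {"mention": 0.8, "silent": 0.2},
             "opportunist": {"mention": 0.55, "silent": 0.45}}
mu1 = update_belief(mu0, perceived, "mention")
print(mu1)  # posterior on "honest" ≈ 0.593
```

Because the perceived strategy is constant over an analogy class, the same conditional probabilities are applied at every history in the class, which is exactly the source of the erroneous inference the paper exploits.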
Based on his representation of the strategy of the various types of his opponent, player i makes inferences using Bayes' law as to the likelihood of the various possible types he is facing. The learning process we have in mind to justify the correctness of the analogy-based expectations involves populations of players i and j in which there is a constant share p_θi of players of type θi. In each round, players i and j are randomly matched. At the end of a round, the behaviors of the matched players and their types are revealed. These players exit the population, and they are replaced by new players with the same type.¹⁵ All pieces of information are gathered in a general data set, and players have different access to this data set depending on their types.¹⁶ At each round of the learning process, players choose their strategy as a best-response to the feedback they received (and the system of belief that derives from it), which in turn generates new data for the next round. If the pattern of behaviors adopted by the players stabilizes to some strategy profile σ, every player's analogy-based expectations should eventually converge to the ones that are consistent with σ given his cognitive type,¹⁷ which motivates the solution concept defined below.

¹⁵The replacement scenario is reminiscent of the recurring game framework studied by Matthew Jackson and Ehud Kalai (1997), who assume that each individual player only plays once. This is to be contrasted with a recent paper by Ignacio Esponda (2008), who, in static games of incomplete information, elaborates on Eyster-Rabin's fully cursed equilibrium by assuming that players i have access both to the empirical distribution of actions of players j (but not to how these actions are related to j's private information) and to i's own distribution of payoffs.
¹⁶A player i with cognitive type c_i = (Ani, δi) such that δi = 0 has access to the average empirical distribution of behavior in every analogy class αi ∈ Ani, where the average is taken over all histories h ∈ αi and over the entire population of players j. A player with cognitive type c_i = (Ani, δi) such that δi = 1 has access to the average empirical distribution of behavior in every αi ∈ Ani for each subpopulation of types θj of players j.

Equilibrium: In equilibrium, both the analogy-based expectations and the belief systems are consistent, and players play best-responses to their analogy-based expectations at every history. In line with the Sequential Equilibrium (Kreps and Wilson (1982a)), we require the analogy-based expectations and belief systems to be consistent with respect to slight totally mixed perturbations of the strategy profile, where a totally mixed strategy for player i is a strategy that assigns strictly positive probability to every action ai ∈ Ai(h) at every history h ∈ Hi. This in turn puts additional structure on the expectations and beliefs at histories that belong to analogy classes that are never reached in equilibrium.¹⁸

Definition 4 A strategy profile σ is an Analogy-based Sequential Equilibrium if and only if there exist analogy-based expectations β_i, belief systems μ_i for i = 1, 2, and sequences (σ^k)_k, (β^k)_k, (μ^k)_k converging to σ, β, μ respectively, such that each σ^k is a totally mixed strategy profile, and for every i and k:

1. σ_i is a sequential best-response to (β_i, μ_i);

2. β^k_i is consistent with σ^k; and

3. μ^k_i is consistent with β^k_i.

Compared to the sequential equilibrium, the main novelty lies in the introduction of cognitive types who may only know partial aspects of the strategy of their opponent.

¹⁷Observe that the average in the expression of β_θi(θj, αi) is taken over all possible realizations of player
Compared to the analogy-based expectation equilibrium (Jehiel (2005)), the main novelty lies in the introduction of players' uncertainty about the type of their opponent and the possibility that a cognitive type may distinguish the behaviors of the various types of his opponent. It is the combination of these features that allows us to speak of deception as the exploitation of the FAE. More precisely, such a deception requires the presence of players who are both uncertain about their opponent's type (so that there is room for inference processes) and i's types hence the summation over 0: . That is, we are assuming that player i of type Oi is informed of O j's behaviors whatever the type of player i they are matched with. The weight pe:Pa(CO,h) on (raj (h) simply reflects the relative frequency with which aoi (h) contributes to the aggregate behavior. "For those readers who dislike trembles, one can offer a weaker notion of equilibrium without trembles, similar in spirit to the self-confirming equilibrium (see Drew Fudenberg and David Levine (1998)). Note, however, that trembles have less bite in our setup than in the standard framework because for an analogy class to be reached with positive probability it is enough that one of the histories in the analogy class is reached with positive probability - a requirement that is weaker when the analogy class is larger. 13 EFTA01137499 are partially knowledgeable of the strategy of their opponent, so that the inferences may be erroneous. C. Basic properties We note that in finite environments, an equilibrium always exists, no matter how cognitive types are specified and distributed. Proposition 1 In finite environments, there always exists at least one Analogy-based Se- quential Equilibrium. 
Proof: The proof follows standard methods, first noting the existence of equilibria in which each player i is constrained to play any action aᵢ ∈ Aᵢ(h) at any history h ∈ Hᵢ with a probability no less than ε, and then showing that the limit as ε tends to 0 of such strategy profiles is an Analogy-based Sequential Equilibrium. Q.E.D.

We next observe that if every player i is rational (in the sense that for all types θᵢ = (tᵢ, cᵢ) of player i, the cognitive type cᵢ = (Anᵢ, δᵢ) is such that Anᵢ is the finest analogy partition ∪h∈Hj {h}, and player i distinguishes between player j's types, δᵢ = 1), then an analogy-based sequential equilibrium coincides with a sequential equilibrium of the game in which every type θᵢ = (tᵢ, cᵢ) of player i is identified with her preference type tᵢ. Thus, our framework can be viewed as providing a generalization of the sequential equilibrium that allows us to cope with situations in which the cognitive abilities of players need not be perfect.

D. A simple illustration

In this part, we construct an analogy-based sequential equilibrium in a simple two-person two-period zero-sum game. This example serves to illustrate the working of the concept in a simple scenario. Specifically, consider the two-period repetition of the following zero-sum stage game G. In stage game G the Row player chooses an action U or D, the Column player chooses an action L or R, and stage game payoffs are as represented in Figure 4. The overall payoff obtained by the players is the sum of the payoffs obtained in the two periods. That is, there is no discount between period 1 and period 2 payoffs.

        L       R
U     5, -5   3, -3
D     0, 0    7, -7

Figure 4. The stage game

We assume that there are two types of Row players, the Rational type and the Coarse type, where both types are assumed to be equally likely. The Rational Row player has a perfect understanding of the strategy of the Column player, as in the standard case.
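The value of the stage game G, which plays a role in the discussion below, can be verified numerically. The following sketch (illustrative, not part of the paper; variable names are ours) computes the Column player's maximin mix with exact rational arithmetic:

```python
from fractions import Fraction as F

# Row player's stage-game payoffs in G (zero-sum: Column gets the negatives).
payoff = {("U", "L"): F(5), ("U", "R"): F(3),
          ("D", "L"): F(0), ("D", "R"): F(7)}

# Column's maximin mix q = Pr(L) makes Row indifferent between U and D:
# 5q + 3(1 - q) = 0q + 7(1 - q)  =>  9q = 4.
q = F(4, 9)
row_value_U = payoff[("U", "L")] * q + payoff[("U", "R")] * (1 - q)
row_value_D = payoff[("D", "L")] * q + payoff[("D", "R")] * (1 - q)
assert row_value_U == row_value_D == F(35, 9)

# Over the two undiscounted periods, the Column player can therefore
# guarantee herself -2 * (35/9) = -70/9, whatever the Row player does.
print(-2 * row_value_U)  # -70/9
```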
The Coarse Row player only knows the average behavioral strategy of the Column player over the two time periods (i.e., he bundles period 1 and the possible histories in period 2 into one analogy class). There is one type for the Column player. The Column player is Sophisticated in the sense that she distinguishes between the behaviors of the Rational Row player and the Coarse Row player. But she is assumed to be Coarse in the sense that for each type of the Row player she only knows the average behavior of this type over the two time periods, i.e. she bundles all histories into one analogy class.

Proposition 2 The following strategy profile is an Analogy-based Sequential Equilibrium.
1) Rational Row Player: Play U in period 1. Play D in period 2 if U was played in period 1, and U otherwise.
2) Coarse Row Player: Play U both in periods 1 and 2.
3) Column Player (Sophisticated Coarse): Play L in period 1. Play R in period 2 if the Row player played U in period 1. Play L in period 2 if the Row player played D in period 1.

In equilibrium, (U, L) is played in period 1 and then (D, R) in period 2 whenever the Row player is rational, and (U, L) is played in period 1 and then (U, R) in period 2 whenever the Row player is coarse. The Column player gets an expected payoff of −10, which is less than her value −70/9. The Rational Row player gets an overall payoff of 5 + 7 = 12 and the Coarse Row player gets an overall payoff of 5 + 3 = 8.

A key aspect of this equilibrium involves understanding the inference process of the Sophisticated Coarse Column player. The Coarse Row player always plays U, and the Rational Row player plays U and D with an equal frequency on average. These (average) behaviors of the two types of Row players define the analogy-based expectations of the Column player.
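The payoffs claimed in Proposition 2 can be checked mechanically. A small sketch (the dictionaries and path lists are illustrative conventions of ours, not the paper's notation):

```python
from fractions import Fraction as F

# Row player's stage payoffs; the game is zero-sum.
payoff = {("U", "L"): 5, ("U", "R"): 3, ("D", "L"): 0, ("D", "R"): 7}

# Equilibrium paths described in Proposition 2.
rational_path = [("U", "L"), ("D", "R")]   # Rational Row type
coarse_path = [("U", "L"), ("U", "R")]     # Coarse Row type

rational_total = sum(payoff[a] for a in rational_path)  # 5 + 7 = 12
coarse_total = sum(payoff[a] for a in coarse_path)      # 5 + 3 = 8

# Both Row types are equally likely, so the Column player's expected
# payoff is minus the average of the two Row totals.
column_expected = -F(rational_total + coarse_total, 2)
assert column_expected == -10
# Strictly worse than the -70/9 she could guarantee with her maximin mix:
assert column_expected < F(-70, 9)
```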
Given these expectations, the Column player updates her belief about the type of the Row player as follows: when action D is played in period 1, the Column player believes that she faces the Rational Row player for sure. When action U is played in period 1, the Column player believes that she faces the Coarse Row player with probability (1/2) / (1/2 + 1/2 × 1/2) = 2/3. Accordingly, the Column player plays R in period 2 because, given her belief, this looks like the smartest decision, even though in reality it is not. Thus, by playing U in period 1, the Rational Row player builds a false reputation for being more likely to be a Coarse Row player, which he later exploits in period 2 by getting the high payoff of 7.¹⁹

We make several comments about the equilibrium shown in Proposition 2. First, the Column player gets an expected payoff that is less than her value, −70/9, even though, by the very property of the value, the Column player could very well guarantee −70/9 - no matter what the Row player does - by playing the maximin strategy (i.e., play L with probability 4/9 and R with probability 5/9 in both periods). The Column player chooses not to follow the maximin strategy because she thinks that she can do better, given her understanding of the strategy of Row players. Such a feature would, of course, not arise in a standard rationality framework, in which the Column player should obtain, in equilibrium, at least what she can secure irrespective of other players' strategies. This helps to clarify the difference from Vincent Crawford (2003), who assumes in a zero-sum pre-play communication game that those agents whose behaviors are not exogenously specified are fully rational and are thus bound to get at least their value in equilibrium.²⁰ It also helps to explain why it is not possible to interpret the analogy-based sequential equilibrium as a sequential equilibrium that would obtain in the full rationality paradigm under alternative informational assumptions.²¹
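The Column player's erroneous inference can be reproduced step by step. The sketch below (variable names are ours) applies Bayes' rule to her analogy-based expectations, then contrasts her perceived period-2 comparison with the true one:

```python
from fractions import Fraction as F

half = F(1, 2)

# Column's analogy-based expectations: average frequency of U per Row type.
p_U = {"Coarse": F(1), "Rational": half}
prior = {"Coarse": half, "Rational": half}   # types equally likely

# Bayes' rule after observing U in period 1.
posterior_coarse = (prior["Coarse"] * p_U["Coarse"]
                    / sum(prior[t] * p_U[t] for t in prior))
assert posterior_coarse == F(2, 3)

# Perceived probability of U in period 2 under her coarse theory.
prob_U = posterior_coarse * p_U["Coarse"] + (1 - posterior_coarse) * p_U["Rational"]
# Column's expected payoffs (negatives of Row's stage payoffs).
eL = -(5 * prob_U + 0 * (1 - prob_U))
eR = -(3 * prob_U + 7 * (1 - prob_U))
assert eR > eL  # R looks best to her...

# ...but in reality both types played U in period 1 (posterior 1/2 each),
# the Coarse type then plays U while the Rational type plays D:
real_eL = -(5 * half + 0 * half)
real_eR = -(3 * half + 7 * half)
assert real_eL > real_eR  # L would actually have been better.
```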
¹⁹ The rest of the argument to establish Proposition 2 goes as follows. It is readily verified that the Rational Row player plays a best-response to the Column player's strategy. (He gets an overall payoff of 5 + 7 = 12, would only get an overall payoff of 0 + 5 at best if he were to play D in period 1, and would obviously get a lower payoff by playing U in period 2.) The Coarse Row player finds it optimal to play U whenever he has to move, because he perceives the Column player to play L and R with an equal frequency on average over the two time periods, and ½(5 + 3) > ½(0 + 7).

²⁰ Vincent Crawford (2003) captures the idea of lying for strategic advantage in a zero-sum pre-play communication game that is populated by sufficiently many mechanical types. But in Crawford's model, the beliefs of rational players cannot be manipulated, as equilibrium requires that rational players are not mistaken about either the distribution of types or about their strategies. This is a key difference from our approach.

²¹ Even if the Column player were assumed not to remember whether she is in stage 1 or 2, she could still secure the value, given that the maximin strategy does not require any recall (it is stationary). See Jehiel (2005) and Jehiel and Koessler (2008) for further examples illustrating why the analogy-based expectation equilibrium cannot be interpreted as a standard equilibrium of a different game with modified information structure.

Second, in the equilibrium of Proposition 2, the Rational Row player obtains a larger payoff than the Coarse Row player. This is no coincidence, as the Rational Row player always has the option to mimic other types' strategies, and Rational players assess correctly the payoff attached to any strategy. Finally, it should be noted that it would be impossible to reproduce the behavioral strategies described in Proposition 2 if there were only one type for each player, who would be characterized solely by his analogy partition as in Jehiel (2005).²²

²² For the Column player to play a different action in periods 1 and 2, she should either be indifferent between playing L and R (which cannot be the case here, since the Row player does not play U with probability 7/9 on average) or treat separately the behavior of the Row player in the two time periods; but then in period 1 she could not find it optimal to play L, given that the Row player always plays U.

II. Deception as a Bargaining Tactic

A. The basic setup

The owner of a house, Mrs A, wishes to sell her good. The initial price has already been publicly announced. It is either p̄ or p, where p̄ > p, and p may be thought of as being the "market price" of the house as perceived by real estate agents. A potential buyer, Mr B, comes in, and the following interaction between Mrs A and Mr B takes place. Mrs A tells Mr B whether or not some small repairs (say for heating deficiencies) are needed in the house.²³ If minor deficiencies are announced, the price drops by an amount Δ. That is, the new price is p − Δ, where p was the originally announced price (Δ should be thought of as being small relative to p̄ − p). Then Mrs A tells Mr B whether or not there is another buyer who has expressed interest in the house. When the initial price was the market price p and Mrs A says that another buyer has expressed interest, the price increases by a very small amount, say ε. No such price increase occurs when the initial price is p̄.²⁴ Only Mrs A knows whether indeed there are small repairs needed and whether there is another potential buyer. After the announcements are made, Mr B has to decide whether or not to accept the offer (before he can verify the correctness of Mrs A's announcements).

²³ It is assumed that Mr B cannot verify the nature of these repairs within a reasonable amount of time.

²⁴ We assume this only for plausibility.
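The best-response checks behind Proposition 2 amount to a few exact comparisons. A sketch (our notation; the deviation payoff is computed directly from the stated strategies):

```python
from fractions import Fraction as F

payoff = {("U", "L"): F(5), ("U", "R"): F(3),
          ("D", "L"): F(0), ("D", "R"): F(7)}

# Rational Row knows Column's strategy exactly: L in period 1,
# then R after U and L after D.
on_path = payoff[("U", "L")] + payoff[("D", "R")]   # play U, then D against R
deviate = payoff[("D", "L")] + payoff[("U", "L")]   # play D, then best reply U against L
assert on_path == 12 and deviate == 5 and on_path > deviate

# Coarse Row perceives L and R each with frequency 1/2 in both periods.
half = F(1, 2)
perceived_U = half * (payoff[("U", "L")] + payoff[("U", "R")])  # 1/2 (5 + 3) = 4
perceived_D = half * (payoff[("D", "L")] + payoff[("D", "R")])  # 1/2 (0 + 7) = 7/2
assert perceived_U > perceived_D  # so U is perceived as optimal throughout
```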
The analysis is unaffected if we assume that there is also a price increase when p = p̄ (this is because ε is assumed to be small in comparison with p̄ − p).

If Mr B says yes, the transaction takes place at the agreed price (i.e., p if no deficiencies were announced and p − Δ if deficiencies were announced). We let V̄B denote Mr B's payoff when the original price was p̄ and deficiencies were announced (so that the final price is p̄ − Δ). If Mr B says no, there are several cases. When the original price was the "market price" p, no transaction takes place between Mrs A and Mr B, as we assume that Mrs A expects to sell her house at a price close to p and Mr B expects to buy a similar house at a price close to p (both Mrs A and Mr B would be slightly better off making the transaction now, even at prices p − Δ and p + ε, respectively, due to the extra delays imposed by the transaction not being made now). When the original price was p̄ and there is effectively another buyer, no transaction between Mrs A and Mr B takes place. Mrs A gets a payoff that is less than p̄, due to the risk that the other buyer does not confirm his interest, but significantly larger than p, and Mr B gets a payoff of vB (corresponding to the outcome of a search for another house). When the original price was p̄ and there is no other buyer,²⁵ bargaining between Mrs A and Mr B goes on. We do not model this extra piece of bargaining explicitly, but we assume that a transaction eventually takes place at a price significantly lower than p̄ − Δ (say not too far from p).²⁶ We denote by V′B the payoff obtained by Mr B in this case.²⁷

On top of the above specifications, we assume that there are two categories of sellers: those who always tell the truth (whom we call honest sellers) and those who do what serves their interest best (whom we call opportunists).
Mrs A can belong to either of these categories, but there is no way for Mr B to know which, except by making inferences from how she behaves (here, what she says in the announcement stage). Finally, we describe the probabilities of the various events, which are assumed to be known to both Mrs A and Mr B. We assume that the probability of the seller being honest is p = Pr(Mrs A is honest) independently of the other random variables. We assume