
Mechanisms without money

Transcript

1 Algorithmic Game Theory Summer 2017

2 Elections Impossibility results Structured preferences Kidney exchange Stable matching

3 Majority rule with two candidates Direct election of the Chancellor: (M)erkel, (S)chulz. Voters' preference orders and stated orders: voter 1: M ≻ S, voter 2: S ≻ M, voter 3: M ≻ S; stated: 1: M ≻ S, 2: M ≻ S, 3: M ≻ S. Result of the majority rule: M ≻ S. Majority rule with two candidates has many good properties: it represents the majority preference; every candidate ends up in the position in which he/she is named by most voters; strategic voting is not profitable: voters whose preference agrees with the majority gain nothing by changing their vote, and voters whose preference opposes the majority cannot change the result.

4 Three candidates Direct election of the Chancellor: (M)erkel, (S)chulz, (N)ocheiner. Voters' preference orders: 1: M ≻ S ≻ N, 2: S ≻ N ≻ M, 3: N ≻ M ≻ S. Majority voting results in a cycle: 2 voters each prefer M to S, S to N, and N to M... This instance shows that the collective preference can be contradictory (cyclic, not transitive) even though each individual preference is well-defined. The example is called the Condorcet Paradox and was discovered by the Marquis de Condorcet around 1785.
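As an aside, a minimal Python sketch (not part of the slides; names are ours) that tallies the pairwise majority comparisons of this profile and makes the cycle visible:

```python
from itertools import permutations

# Preference orders of the three voters, best candidate first.
profile = [
    ["M", "S", "N"],   # voter 1
    ["S", "N", "M"],   # voter 2
    ["N", "M", "S"],   # voter 3
]

def majority_prefers(a, b, profile):
    """True if a strict majority of the voters ranks a above b."""
    wins = sum(1 for order in profile if order.index(a) < order.index(b))
    return wins > len(profile) / 2

# Every candidate beats one rival and loses to the other: a Condorcet cycle.
for a, b in permutations("MSN", 2):
    if majority_prefers(a, b, profile):
        print(f"{a} beats {b} by majority")
# Prints: M beats S, S beats N, N beats M
```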

5 Plurality rule We consider the plurality rule, in which each candidate is placed at the position in the ranking at which he appears most frequently among the voters' orders. Ties are broken alphabetically. Voters' preference orders: 1: M ≻ S ≻ N, 2: S ≻ N ≻ M, 3: N ≻ M ≻ S. Plurality: M, N, S. Stated orders: 1: M ≻ S ≻ N, 2: S ≻ N ≻ M, 3: N ≻ S ≻ M. Plurality: N, S, M. Strategic voting is profitable for the third voter! How can we avoid strategic voting? Trivially, we can designate one voter as a dictator whose stated preference dictates the outcome. But can we do it differently?
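A small sketch of this plurality rule under one reading of the slide (our assumptions: ties among a candidate's own position counts go to the better position, ties between candidates are broken alphabetically), reproducing the manipulation by voter 3:

```python
from collections import Counter

def plurality_ranking(profile, candidates):
    """Rank candidates by the position at which each appears most often."""
    def modal_position(c):
        counts = Counter(order.index(c) for order in profile)
        best_count = max(counts.values())
        # Assumption: among equally frequent positions, take the better one.
        return min(pos for pos, cnt in counts.items() if cnt == best_count)

    # Assumption: candidates with the same modal position are ordered alphabetically.
    return sorted(candidates, key=lambda c: (modal_position(c), c))

candidates = ["M", "N", "S"]
truthful = [["M", "S", "N"], ["S", "N", "M"], ["N", "M", "S"]]
manipulated = [["M", "S", "N"], ["S", "N", "M"], ["N", "S", "M"]]  # voter 3 lies

print(plurality_ranking(truthful, candidates))     # ['M', 'N', 'S']
print(plurality_ranking(manipulated, candidates))  # ['N', 'S', 'M']
```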

6 Definitions Set A of candidates (or outcomes, alternatives). Set N of n voters (or players). Set L of possible preferences (total orders over A). Each voter i has a preference (or preference order) ≻_i ∈ L over the candidates in A. A social utility function is a function F: L^n → L. A social selection function is a function f: L^n → A. A social selection function returns only a single winner, while a social utility function returns a complete ranking of all candidates.

7 Elections Impossibility results Structured preferences Kidney exchange Stable matching

8 Properties of social utility functions Unanimity: For every ≻ ∈ L we have F(≻, ..., ≻) = ≻. Voter i is a dictator in a social utility function F if for all ≻_1, ..., ≻_n ∈ L we have F(≻_1, ..., ≻_n) = ≻_i; then F is called a dictatorship. Independence of irrelevant alternatives (IIA): the social preference between two candidates a and b depends only on the preferences of the voters with regard to a and b. Formally: for all a, b ∈ A and all ≻_1, ..., ≻_n, ≻'_1, ..., ≻'_n ∈ L, let ≻ = F(≻_1, ..., ≻_n) and ≻' = F(≻'_1, ..., ≻'_n). If a ≻_i b ⇔ a ≻'_i b for all i, then a ≻ b ⇔ a ≻' b.

9 Independence of irrelevant alternatives (IIA) Plurality violates IIA! Voters' preference orders: 1: M ≻ S ≻ N, 2: S ≻ N ≻ M, 3: N ≻ M ≻ S. Plurality: M, N, S. Stated orders: 1: M ≻ S ≻ N, 2: S ≻ N ≻ M, 3: N ≻ S ≻ M. Plurality: N, S, M. The order of the pair (M, N) changes, although every voter has the same pairwise preference with regard to M and N in both instances.

10 Arrow's Theorem (Arrow, 1950) Every social utility function over a set A of at least 3 candidates that satisfies unanimity and IIA is a dictatorship. Let F be a social utility function with these properties (unanimity and IIA). Lemma (pairwise neutrality) Let ≻_1, ..., ≻_n and ≻'_1, ..., ≻'_n be two preference profiles, and let ≻ = F(≻_1, ..., ≻_n) and ≻' = F(≻'_1, ..., ≻'_n). If for every voter i we have a ≻_i b ⇔ c ≻'_i d, then a ≻ b ⇔ c ≻' d. Proof: W.l.o.g. a ≻ b (otherwise swap roles), and by renaming the candidates we may assume c ≠ b (but possibly a = c and/or b = d).

11 Pairwise neutrality Now we adjust ≻_i and ≻'_i so that they become identical with regard to a, b, c, d: we move c and d in ≻_i (c directly above a, d directly below b), and a and b in ≻'_i (a directly below c, b directly above d). For example, voter 1 with a ≻_1 b: ..., a, ..., b, ... becomes ..., c, a, ..., b, d, ...; voter 2 with b ≻_2 a: ..., b, ..., a, ... becomes ..., b, d, ..., c, a, ...; and so on. (Note: by assumption a ≻_i b ⇔ c ≻'_i d.) IIA guarantees that a and b remain in the same order in ≻, and that c and d remain in the same order in ≻'. Also by IIA, we may rearrange all other elements as we wish, so that afterwards ≻_i = ≻'_i for every i. By unanimity we now have c ≻ a and b ≻ d, hence with a ≻ b also c ≻ d. Since ≻_i = ≻'_i for all i, we also get c ≻' d. (Lemma)

12 Who is the dictator? Proof (theorem): Pairwise neutrality shows that a social utility function satisfying unanimity and IIA follows one common rule to determine the collective preference; this rule is the same for all stated orders and for all pairwise comparisons between candidates. This insight can be used to show that the function is in fact a dictatorship. Fix two candidates a ≠ b (by pairwise neutrality the same reasoning applies to any pair c ≠ d). If no voter has a ≻_i b, then b ≻ a; if every voter has a ≻_i b, then a ≻ b (by unanimity and pairwise neutrality). Critical voter i: if voters 1, ..., i−1 have a ≻ b and voters i, ..., n have b ≻ a, the result is b ≻ a; if voters 1, ..., i have a ≻ b and voters i+1, ..., n have b ≻ a, the result is a ≻ b. Claim: voter i is the dictator!

13 Voter i is the dictator Voter i is a dictator if c ≻_i d ⇒ c ≻ d for all c, d ∈ A with c ≠ d. Consider an arbitrary preference profile with c ≻_i d and some e ∈ A with e ≠ c and e ≠ d. Shift the third element e so that it is ordered as follows: voters 1, ..., i−1 place e at the top, voter i places e between c and d, and voters i+1, ..., n place e at the bottom. Because of IIA this does not change the order of c and d in ≻. The pair (c, e) now appears exactly like (a, b) in the critical profiles, so by pairwise neutrality c ≻ e. In the same way we get e ≻ d. By transitivity c ≻ d. Since c and d were arbitrary, the preference of voter i determines the whole result. (Theorem)

14 Properties of social selection functions f can be strategically manipulated by voter i if there are ≻_1, ..., ≻_n and ≻'_i with a ≻_i b, where b = f(≻_1, ..., ≻_n) and a = f(≻_1, ..., ≻'_i, ..., ≻_n). f is called incentive-compatible (IC) if it cannot be strategically manipulated. f is monotonic if f(≻_1, ..., ≻_n) = a ≠ b = f(≻_1, ..., ≻'_i, ..., ≻_n) implies a ≻_i b and b ≻'_i a. Voter i is a dictator in f if for every ≻_1, ..., ≻_n ∈ L: if a ≻_i b for all b ≠ a, then f(≻_1, ..., ≻_n) = a; then f is a dictatorship. f is surjective (or exhaustive) onto A if for every candidate a ∈ A there is a preference profile in which a is the winner.
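To make the manipulation definition concrete, here is a small brute-force test (our own construction, not part of the slides) that searches all profiles and all lies of a single voter for a profitable manipulation; it is applied to simple plurality (most first-place votes, alphabetical tie-breaking), a deliberately simplified variant of the rule from slide 5:

```python
from itertools import permutations, product

def is_incentive_compatible(f, candidates, n_voters):
    """Brute-force test of incentive compatibility for tiny instances.

    f maps a tuple of preference orders (tuples, best candidate first)
    to a single winning candidate.
    """
    orders = list(permutations(candidates))
    for profile in product(orders, repeat=n_voters):
        truthful_winner = f(profile)
        for i, true_order in enumerate(profile):
            for lie in orders:
                manipulated = profile[:i] + (lie,) + profile[i + 1:]
                new_winner = f(manipulated)
                # Profitable manipulation: voter i ranks the new winner
                # strictly above the truthful winner.
                if true_order.index(new_winner) < true_order.index(truthful_winner):
                    return False
    return True

# Example rule: simple plurality (most first-place votes, alphabetical ties).
def plurality_winner(profile):
    counts = {c: 0 for c in profile[0]}
    for order in profile:
        counts[order[0]] += 1
    return min(counts, key=lambda c: (-counts[c], c))

print(is_incentive_compatible(plurality_winner, ("a", "b", "c"), 3))  # False
```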

15 Gibbard-Satterthwaite Theorem Proposition A social selection function f is incentive-compatible if and only if f is monotonic. Proof: direct consequence of the definitions. Theorem (Gibbard 1973; Satterthwaite 1975) Let f be a social selection function that is surjective with |A| ≥ 3. Then f is incentive-compatible if and only if f is a dictatorship. Proof: We prove the nontrivial direction of the theorem. From every monotonic social selection function f that is surjective onto A we derive a social utility function F that satisfies unanimity and IIA.

16 Extension of social selection functions Let ≻ be a preference order and S ⊆ A. We denote by ≻^S the order in which all elements of S are moved to the front, keeping their relative order in ≻. Example: S = {a, b, c}, A = S ∪ {d, e, f}: the order a e d c f b becomes a c b e d f, and b f e d a c becomes b a c f e d. Let F be the f-extending social utility function with F(≻_1, ..., ≻_n) = ≻, where a ≻ b if and only if f(≻_1^{a,b}, ..., ≻_n^{a,b}) = a (writing ≻^{a,b} for ≻^{{a,b}}). At first it is not at all clear whether F defines a consistent order in this way, i.e. whether it is a valid social utility function.

17 Proof by contradiction Lemma If f is a surjective, incentive-compatible social selection function, then the extension F is a social utility function. (Show antisymmetry and transitivity.) Lemma If f is a surjective, incentive-compatible social selection function and not a dictatorship, then the extension F satisfies unanimity and IIA and is likewise not a dictatorship. This contradicts Arrow's theorem.

18 Proof of the lemmas We prove the lemmas by checking the properties of F: Antisymmetry: If a ≻ b and b ≻ a, then a = b. Transitivity: If a ≻ b and b ≻ c, then a ≻ c. Unanimity: F(≻, ..., ≻) = ≻. IIA. No dictatorship.

19 Properties Claim For every ≻_1, ..., ≻_n and every S ⊆ A the winner satisfies f(≻_1^S, ..., ≻_n^S) ∈ S. Proof: f is surjective, so there is a profile ≻'_1, ..., ≻'_n with a winner a ∈ S. Move the elements of S to the front one after the other, then rearrange the elements of A \ S behind them and the elements of S in front: a step-by-step transformation into ≻_1^S, ..., ≻_n^S. Monotonicity ensures that no b ∉ S ever becomes the winner during this transformation. (Claim)

20 Properties Antisymmetry: If a ≻ b and b ≻ a, then a = b. This holds because f(≻_1^{a,b}, ..., ≻_n^{a,b}) ∈ {a, b}. Transitivity: If a ≻ b and b ≻ c, then a ≻ c. For a contradiction assume a ≻ b ≻ c ≻ a. Choose S = {a, b, c}; w.l.o.g. let f(≻_1^S, ..., ≻_n^S) = a. The transformation to S' = {a, c} shows f(≻_1^{S'}, ..., ≻_n^{S'}) = a, and therefore a ≻ c. This contradicts antisymmetry, since we assumed c ≻ a. This shows: if f is incentive-compatible and surjective onto A, then F is a valid social utility function.

21 Properties Unanimity: F(≻, ..., ≻) = ≻. If a ≻_i b for all i, then by the claim and monotonicity f(≻_1^{a,b}, ..., ≻_n^{a,b}) = a. IIA: Assume a ≻_i b ⇔ a ≻'_i b for all i. Then f(≻_1^{a,b}, ..., ≻_n^{a,b}) = f(≻'_1^{a,b}, ..., ≻'_n^{a,b}), since during the transformation of ≻_i^{a,b} into ≻'_i^{a,b} the winner remains the same by monotonicity and the claim. Not a dictatorship: obvious. (Theorem)

22 Elections Impossibility results Structured preferences Kidney exchange Stable matching

23 Single-peaked preferences The Gibbard-Satterthwaite theorem is daunting, but it assumes that voters can have arbitrary preferences. If the preferences have more structure, then a richer class of incentive-compatible selection functions exists. We consider a set of alternatives that can be ordered along a line; we assume A ⊆ [0, 1]. Definition A preference order ≻_i over A is single-peaked if there is a point p_i ∈ A (the peak) such that for all x ∈ A \ {p_i} and λ ∈ [0, 1) we have (λx + (1 − λ)p_i) ≻_i x. As an example, consider setting the room temperature in a shared office: every employee has an optimal temperature and wants the temperature to be as close to it as possible.

24 Ordering mechanisms The Gibbard-Satterthwaite theorem does not apply to single-peaked preferences. Ordering mechanism for single-peaked preferences: let k be a number in {1, ..., n}; ask the voters only for their peaks p_1, ..., p_n; sort the peaks from 1 down to 0 and output the k-th peak in this order. Proposition For every fixed k ∈ {1, ..., n} the ordering mechanism is incentive-compatible. For n ≥ 2 it is not a dictatorship.
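A minimal sketch of this ordering mechanism (function and variable names are ours); choosing k = (n+1)//2 would select the median peak:

```python
def ordering_mechanism(peaks, k):
    """Return the k-th reported peak (1-indexed) when the peaks are sorted
    from 1 down to 0, as in the slide's ordering mechanism."""
    if not 1 <= k <= len(peaks):
        raise ValueError("k must be between 1 and the number of voters")
    return sorted(peaks, reverse=True)[k - 1]

# Example: five voters report their ideal room temperatures, scaled to [0, 1].
peaks = [0.2, 0.9, 0.5, 0.4, 0.7]
print(ordering_mechanism(peaks, k=3))  # 0.5, here the median peak
```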

25 Ordering mechanisms Proof: Let p be the result if all voters report their true peaks. If p_i > p, then voter i cannot change the result by reporting any peak p'_i > p. If he lies with a peak p'_i ≤ p, then the result can only move to some p' ≤ p, which by single-peakedness is no better for him. The same argument applies for p_i < p, and a voter with p_i = p already gets his peak.

26 Ordering mechanisms For every fixed k the ordering mechanism is anonymous, i.e. it satisfies f(≻_1, ..., ≻_n) = f(≻_{π(1)}, ..., ≻_{π(n)}) for every permutation π of (1, ..., n). Theorem (Moulin 1980; Ching 1997) Let p_i be the reported peaks. A social selection function f is incentive-compatible, surjective and anonymous for single-peaked preferences if and only if f is an ordering mechanism over a set {p_1, ..., p_n, y_1, ..., y_m}, where the y_j ∈ [0, 1] are fixed points. This is a complete characterization of anonymous, incentive-compatible social selection functions. Anonymity is necessary: a dictatorship is not an ordering mechanism, but it is surjective and incentive-compatible (just not anonymous).

27 House allocation Matching with preferences over objects: n players and n houses. Assumption: player i initially owns house i (not strictly necessary, but it simplifies the analysis). Player i has a preference order ≻_i over the houses. Outcomes are pairings of houses and players; however, a player only cares about the house he receives, so all pairings in which player i gets the same house are equivalent for him. Are there incentive-compatible mechanisms with good properties?

28 Top Trading Cycles Let G = (V, E) be the directed graph with V the set of remaining players (with their houses) and E the set of directed edges: (i, l) ∈ E ⇔ player l ∈ V owns the best remaining house for player i ∈ V. Top Trading Cycles (TTC) mechanism:
1. Ask the players for their preferences
2. while V ≠ ∅:
3. Create the edge set E as described above
4. Compute all directed cycles C_1, ..., C_h in G (self-loops count as cycles; the cycles are disjoint)
5. for every edge (i, l) in every cycle C_1, ..., C_h do
6. Assign house l to player i
7. Remove all players in C_1, ..., C_h from V
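A compact Python sketch of TTC for this setting (data layout, names, and the example preferences are our own; the preferences are chosen so that the run reproduces the assignment shown in the example on the next two slides):

```python
def top_trading_cycles(preferences):
    """TTC for house allocation: player i initially owns house i.

    preferences[i] is player i's ranking of the houses, best first.
    Returns a dict mapping each player to the house he receives.
    Cycles are processed one at a time; the final assignment is the same.
    """
    remaining = set(preferences)            # players (and houses) still present
    assignment = {}
    while remaining:
        # Each remaining player points to the owner of his best remaining house.
        points_to = {
            i: next(h for h in preferences[i] if h in remaining)
            for i in remaining
        }
        # Walk the pointers from any player; out-degree 1 guarantees we hit a cycle.
        seen, node = set(), next(iter(remaining))
        while node not in seen:
            seen.add(node)
            node = points_to[node]
        cycle, cur = [], node               # 'node' lies on a cycle: collect it
        while True:
            cycle.append(cur)
            cur = points_to[cur]
            if cur == node:
                break
        for i in cycle:                     # everyone on the cycle trades and leaves
            assignment[i] = points_to[i]
        remaining -= set(cycle)
    return assignment

# Hypothetical preferences, chosen to reproduce the example on slides 29 and 30.
prefs = {1: [1, 2, 3, 4], 2: [3, 4, 2, 1], 3: [4, 2, 3, 1], 4: [2, 3, 4, 1]}
print(top_trading_cycles(prefs))  # {1: 1, 2: 3, 3: 4, 4: 2}
```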

29 Example: graph G in round 1 (figure of the pointer graph on four players with their preference orders). Assignment: player 1 gets house 1.

30 Example: graph G in round 2 (figure of the remaining pointer graph). Assignment: player 2 gets house 3, player 3 gets house 4, player 4 gets house 2.

31 Top Trading Cycles: Analysis Observations: Every player in G has out-degree 1. There is always at least one cycle in G, and all cycles in G are disjoint. Let V_k be the players removed by TTC in the k-th round. Each player in V_k gets the house that he likes best apart from the houses of the players in V_1 ∪ ... ∪ V_{k−1}: player i ∈ V_k gets his best remaining house in the k-th round, and the owner of this house is also in V_k and thus also receives his best remaining house. Theorem (Roth 1982) The TTC mechanism is incentive-compatible. Proof: Consider player i ∈ V_j with true preference ≻_i. If he tells the truth, he gets the best remaining house in round j.

32 Top Trading Cycles: Analysis Could player i obtain a house of a player in V_1 ∪ ... ∪ V_{j−1} that he likes better? No: in rounds k = 1, ..., j−1 no player l ∈ V_k wants i's house, otherwise i would be in a cycle with l and would have been removed earlier. No player l ∈ V_k points to i's house in any of these rounds, no matter what preference i reports, so the cycles and assignments of rounds 1, ..., j−1 do not change and i cannot obtain any of those houses.

33 Top Trading Cycles: Core assignment Let M be an assignment of houses; player i gets house M(i). Let M_S be an assignment that arises from M when a coalition S ⊆ N redistributes its initial houses among its members differently. Definition A set of players S ⊆ N forms a blocking coalition for M if there is an assignment M_S such that for every player j ∈ S, M_S is at least as good: M_S(j) ⪰_j M(j), and for at least one player i ∈ S, M_S is strictly better: M_S(i) ≻_i M(i). An assignment M with no blocking coalition is in the core of the game. A core assignment has an optimality property: no subset of players would prefer to withhold their houses from the mechanism and distribute them differently among themselves. In particular, each player i gets a house that he likes at least as much as his initial house i.

34 Top Trading Cycles: Core assignment Theorem (Roth, Postlewaite 1977) The TTC mechanism computes the unique assignment in the core of the game. Proof: Induction shows that only the TTC assignment can be in the core. Base case: every i ∈ V_1 gets his very best house, so V_1 is a blocking coalition for any assignment that does not distribute the houses of V_1 as TTC does. Hypothesis: the houses of the players in V_1, ..., V_{j−1} must be distributed as in TTC. Step: given this, every i ∈ V_j gets his best remaining house, so V_j is a blocking coalition for every assignment that distributes the houses of V_1, ..., V_{j−1} as TTC does but those of V_j not as TTC does. Hence: either the TTC assignment is in the core, or the core is empty.

35 Top Trading Cycles: Core assignment It remains to show that the TTC assignment M is in the core. Let M_S be a redistribution of the houses of a coalition S ⊆ N; player i ∈ S initially owned house i. If S is to block M via M_S, then no player of S may be worse off than in M, and at least one must be strictly better off. The redistribution within S decomposes into a set of cycles; such a cycle can contain players from V_j and V_l with j < l. Consider a player i of the earliest round V_j on such a cycle: in M_S he receives the house of another player on the cycle, i.e. a house that is still available in round j, while M already gives him his best house among all houses available in round j. So the players of the earliest rounds cannot strictly improve, and iterating this argument round by round shows that no player of S strictly improves; hence S is not a blocking coalition and M is in the core.

36 Random Serial Dictatorship Random Serial Dictatorship (RSD) mechanism:
1. Arrange the players in a random order
2. Ask the players for their preferences
3. Let V be the set of all houses
4. for i = 1, 2, 3, ..., n do
5. Assign to player i his best house h from V
6. Remove h from V
This mechanism is an ordered variant of dictatorship. The following result can be proven similarly to the one for TTC. Theorem For every chosen order, the RSD mechanism is incentive-compatible. Is the RSD assignment in the core? For every order? For none? For some?
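A minimal Python sketch of RSD (names and the example data are our own, reusing the toy preferences from the TTC sketch above):

```python
import random

def random_serial_dictatorship(preferences, rng=random):
    """RSD sketch: players pick their best remaining house in a random order.

    preferences[i] is player i's ranking of the houses, best first.
    """
    order = list(preferences)
    rng.shuffle(order)                        # step 1: random order of the players
    remaining = set(preferences)              # houses carry the same names as players
    assignment = {}
    for i in order:                           # steps 4-6: pick the best remaining house
        best = next(h for h in preferences[i] if h in remaining)
        assignment[i] = best
        remaining.remove(best)
    return assignment

# Same toy preferences as in the TTC sketch above; seed fixed for reproducibility.
prefs = {1: [1, 2, 3, 4], 2: [3, 4, 2, 1], 3: [4, 2, 3, 1], 4: [2, 3, 4, 1]}
print(random_serial_dictatorship(prefs, rng=random.Random(0)))
```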

37 Elections Impossibility Results Structured Preferences Kidney Exchange Stable Matching

38 Kidney donation In many countries there are now kidney exchange programs (or plans for them). Patients often have an acquaintance or relative willing to act as a living donor whose kidney does not match the patient (e.g. because of the blood group). The goal is an organ exchange: two incompatible (patient, donor) pairs exchange the donor organs if each donor matches the other patient. Example: donor S_1 has blood group B, donor S_2 has blood group A, patient P_1 has blood group A, patient P_2 has blood group B.

39 Top Trading Cycles for kidney donation In principle this is a house allocation: houses are donor organs, players are patients, and preferences over organs correspond to the probability of a successful transplant. The TTC mechanism is incentive-compatible and yields the core assignment. Notes: There is no legal obligation to donate. All operations of a cycle are carried out simultaneously, otherwise an incentive problem arises: donor S_i could withdraw as soon as patient P_i has received his kidney. Difficulty: long cycles mean many simultaneous operations. If, on the other hand, an unpaired (altruistic) kidney donation is added, the cycle becomes a path; then the kidney is always removed from donor S_i before patient P_i receives his donation. In practice preferences are often essentially binary: a kidney either suits the patient or it does not.

40 Organ exchange via matching Approach with matching in a graph G = (V, E): For patient P_i, let E_i be the set of compatible donors. Every pair (P_i, S_i) is a node v_i ∈ V. We look for simple exchanges, i.e. cycles of length 2: edge {v_i, v_j} ∈ E if S_i ∈ E_j and S_j ∈ E_i. A patient can lie when naming the set E_i; obviously it is only worthwhile to report a subset F_i ⊆ E_i. Matching mechanism for kidney exchange: ask the patients for the sets F_i; create the graph G as above with E = {{v_i, v_j} : S_i ∈ F_j, S_j ∈ F_i}; compute a matching M of G with maximum cardinality and exchange kidneys according to the edges in M.

41 Maximum matching with priority list The maximum matching is not unique, and different maximum matchings give kidneys to different patients. We have to choose the matching in a monotonic way. To do this, we prioritize the patients; this is a common approach, e.g. when creating waiting lists for organ donation. Maximum matching with priorities:
1. Let M_0 be the set of all maximum matchings of G
2. for i = 1, 2, 3, ..., n do
3. Let Z_i be the set of matchings in M_{i−1} in which node v_i is matched
4. if Z_i ≠ ∅ then M_i ← Z_i; else M_i ← M_{i−1}
5. return any matching from M_n
Theorem The matching mechanism with priorities for kidney exchange is incentive-compatible.
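A brute-force sketch of this priority rule (our own construction, suitable only for tiny instances, with made-up node and edge names); it literally maintains the candidate set of maximum matchings from the pseudocode:

```python
from itertools import combinations

def all_maximum_matchings(nodes, edges):
    """Brute-force all maximum-cardinality matchings of a small graph."""
    for size in range(len(nodes) // 2, -1, -1):
        found = []
        for combo in combinations(edges, size):
            endpoints = [v for e in combo for v in e]
            if len(endpoints) == len(set(endpoints)):   # pairwise disjoint edges
                found.append(set(combo))
        if found:
            return found
    return [set()]

def priority_matching(nodes, edges):
    """Maximum matching with priority list, following the slide's pseudocode.

    nodes are assumed to be listed in priority order, highest priority first.
    """
    candidates = all_maximum_matchings(nodes, edges)                 # M_0
    for v in nodes:
        with_v = [m for m in candidates if any(v in e for e in m)]   # Z_i
        if with_v:
            candidates = with_v
    return candidates[0]

# Hypothetical instance: 4 patient-donor pairs with compatibility edges.
nodes = ["v1", "v2", "v3", "v4"]                   # priority order v1 > v2 > v3 > v4
edges = [("v1", "v2"), ("v2", "v3"), ("v3", "v4")]
print(priority_matching(nodes, edges))             # {('v1', 'v2'), ('v3', 'v4')}
```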

42 Elections Impossibility Results Structured Preferences Kidney Exchange Stable Matching

43 Stable matching (figure: an example instance with the players' preference lists). Each player has a preference order over the players on the other side. Notation: (best partner, second best, ...).

44 Stable matching Set X of n men, set Y of n women. Every man x ∈ X has a preference order ≻_x over all y ∈ Y; every woman y ∈ Y has a preference order ≻_y over all x ∈ X. Every person prefers being married to being alone. For a matching M, let M(x) ∈ Y be the partner of man x ∈ X in M, and M(y) ∈ X the partner of woman y ∈ Y in M. Let M(x) = ⊥ if x is alone in M, and likewise M(y) = ⊥.

45 Stable matching When is a matching stable? What is a threat to stability? In M, {x, y} is a blocking pair if and only if x and y like each other better than their respective partners M(x) and M(y), i.e. y ≻_x M(x) and x ≻_y M(y). A matching M without a blocking pair is a stable matching. Stable matching plays a decisive role in many applications: doctors/hospitals, university admission, labor markets, etc.
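A small helper (our own sketch) for finding the blocking pairs of a given matching; the instance reused here is the two-men, two-women example from slide 48, both of whose matchings turn out to be stable:

```python
def blocking_pairs(matching, men_prefs, women_prefs):
    """Return all blocking pairs of a perfect matching.

    matching maps each man to his partner; men_prefs[x] and women_prefs[y]
    are preference lists with the best partner first.
    """
    partner_of_woman = {w: m for m, w in matching.items()}
    pairs = []
    for x, x_prefs in men_prefs.items():
        for y in x_prefs:
            if y == matching[x]:
                break                # women ranked below M(x) cannot block with x
            # x prefers y to M(x); check whether y also prefers x to M(y).
            if women_prefs[y].index(x) < women_prefs[y].index(partner_of_woman[y]):
                pairs.append((x, y))
    return pairs

# Instance from slide 48: men A, B and women 1, 2.
men_prefs = {"A": [2, 1], "B": [1, 2]}
women_prefs = {1: ["A", "B"], 2: ["B", "A"]}
print(blocking_pairs({"A": 2, "B": 1}, men_prefs, women_prefs))  # [] -> stable
print(blocking_pairs({"A": 1, "B": 2}, men_prefs, women_prefs))  # [] -> also stable
```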

46 Existence and computation Deferred acceptance algorithm (with men proposing):
1. while there is an unmatched man x ∈ X do
2. x proposes to his most preferred woman y who has not yet rejected him
3. if y is unmatched then {x, y} are tentatively matched
4. else if y is tentatively matched with x' then
5. if x ≻_y x' then {x, y} are tentatively matched and y rejects x'
6. else {x', y} stay tentatively matched and y rejects x
Theorem (Gale, Shapley 1962) The deferred acceptance algorithm computes a stable matching after at most n² iterations. Proof: The algorithm needs at most n² iterations, since each man proposes to each woman at most once. At any point in time each player is tentatively matched with at most one partner; therefore the algorithm computes a matching M.
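A runnable sketch of deferred acceptance with men proposing (names are ours); on the two-by-two instance from slide 48 it returns the men-optimal matching:

```python
from collections import deque

def deferred_acceptance(men_prefs, women_prefs):
    """Gale-Shapley deferred acceptance with men proposing (a sketch).

    men_prefs[x] and women_prefs[y] are preference lists, best partner first.
    Returns the men-optimal stable matching as a dict man -> woman.
    """
    # rank[y][x]: position of man x in woman y's list (lower is better).
    rank = {y: {x: i for i, x in enumerate(p)} for y, p in women_prefs.items()}
    next_choice = {x: 0 for x in men_prefs}   # index of the next woman to propose to
    engaged_to = {}                           # woman -> tentatively matched man
    free_men = deque(men_prefs)

    while free_men:
        x = free_men.popleft()
        y = men_prefs[x][next_choice[x]]      # best woman who has not rejected x yet
        next_choice[x] += 1
        if y not in engaged_to:
            engaged_to[y] = x                 # y was unmatched: tentative match
        elif rank[y][x] < rank[y][engaged_to[y]]:
            free_men.append(engaged_to[y])    # y prefers x and rejects her old partner
            engaged_to[y] = x
        else:
            free_men.append(x)                # y rejects x
    return {x: y for y, x in engaged_to.items()}

# Instance from slide 48: the men-optimal stable matching is A-2, B-1.
men_prefs = {"A": [2, 1], "B": [1, 2]}
women_prefs = {1: ["A", "B"], 2: ["B", "A"]}
print(deferred_acceptance(men_prefs, women_prefs))  # {'A': 2, 'B': 1}
```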

47 Existence and computation The following invariants hold during the algorithm: 1. For every man x, the women he proposes to get strictly worse (in his preference) over time. 2. For every woman y, her tentative partner gets strictly better (in her preference) over time. In particular, once a woman is tentatively matched she remains matched until the final matching. Now suppose M had a blocking pair {x, y} with y ≻_x M(x) and x ≻_y M(y). Since x likes y better than his final partner M(x) (or than being alone), invariant 1 implies that he proposed to y at some point and was rejected. So y must have received a proposal from a better man x'. By invariant 2 her final partner in M can only be (even) better. So M(y) ⪰_y x' ≻_y x, a contradiction.

48 Uniqueness The deferred acceptance algorithm is underspecified: it does not say which unmatched man proposes next. Are there several stable matchings? Can the algorithm compute several (or even all) stable matchings? Example: man A: (2, 1), man B: (1, 2), woman 1: (A, B), woman 2: (B, A). Stable matchings are not unique. In this instance there is a matching that is optimal for the men (solid in the figure) and one that is optimal for the women (dashed), in which all players of the respective side get their best partner at the same time.

49 Optimality Let h(x) ∈ Y be the best woman such that there is a stable matching M with M(x) = h(x). Let h(y) ∈ X be the best man such that there is a stable matching M with M(y) = h(y). Definition A stable matching M is optimal for the men if M(x) = h(x) for all x ∈ X, and optimal for the women if M(y) = h(y) for all y ∈ Y. In an optimal matching, all players on one side simultaneously get the best partner they can get in any stable matching. It is not clear a priori that such a matching even exists. The following theorem proves this and more: regardless of the order in which the proposals are made, the algorithm always computes the same, unique optimal matching. Theorem The deferred acceptance algorithm with men proposing always computes exactly the stable matching that is optimal for the men, and with women proposing the one that is optimal for the women.

50 Optimality Proof (by contradiction to stability): Let M be the matching computed by the algorithm with men proposing, and let M' be a stable matching in which some man j ∈ X has a strictly better partner M'(j) ≻_j M(j). Since M'(j) ≻_j M(j), there is an iteration of the algorithm in which j proposes to M'(j) and is rejected. So there are iterations in which some man x is rejected by his partner M'(x) in M'; consider the first of these iterations. Woman M'(x) rejects man x only because she gets a better proposal from a man i ≠ x (i.e. i ≻_{M'(x)} x). Since this is the very first rejection of this kind, man i has not yet been rejected by M'(i), so he likes woman M'(x) more than M'(i) (i.e. M'(x) ≻_i M'(i)). Hence {i, M'(x)} is a blocking pair in M', and M' is not a stable matching, a contradiction.

51 Proposals and incentives Theorem The deferred acceptance algorithm with men proposing is incentive-compatible for the men. Proof: We agree on the following notation. W.l.o.g. the lying man has number 1. True profile: π = (≻_1, ..., ≻_n), the algorithm computes M. Man 1 lies: π' = (≻'_1, ≻_2, ..., ≻_n), the algorithm computes M'. If lying is profitable, then M'(1) ≻_1 M(1). We will show that in this case M' has a blocking pair, contradicting the stability of the deferred acceptance outcome. The following claim shows that among the men who benefit from the lie, the assigned partners are exchanged among themselves.

52 Proposals and incentives Claim Let R = {x : M'(x) ≻_x M(x)} be the set of men who benefit from the lie. For every man x ∈ R and his new partner y = M'(x), the partner x' = M(y) of y in M is also in R. Proof: If x' = 1, then x' ∈ R by the assumption that lying benefits man 1. If x' ≠ 1: Since x ∈ R we have y = M'(x) ≻_x M(x). Then x' ≻_y x, because otherwise M would have the blocking pair {x, y}. Hence M'(x') ≻_{x'} y, because otherwise M' would have the blocking pair {x', y} (here x' ≠ 1 matters, since M' is only guaranteed to be stable for the reported profile π'). So M'(x') ≻_{x'} y = M(x') and thus x' ∈ R. (Claim) The set of female partners of the benefiting men is therefore the same in both matchings M and M': T = {y : M(y) ∈ R} = {y : M'(y) ∈ R}.

53 Proposals and incentives Since all men in R are strictly better off in M', all women in T are strictly better off in M (i.e. M(y) ≻_y M'(y) for y ∈ T), otherwise there would be a blocking pair in M. Now consider the computation of M with the true profile π. Let x be the last man of R to make a proposal; this proposal goes to his final partner y = M(x) ∈ T. Afterwards there are no further proposals from men in R, so all men in R except x are already matched as in M. Hence y has already rejected her partner M'(y) ∈ R in an earlier round, and by invariant 2 her tentative partner just before x's proposal is a man x* with x* ≻_y M'(y). Since all men of R other than x are already matched to their final partners (none of which is y), x* ∉ R. Now x* ∉ R means M(x*) ⪰_{x*} M'(x*), and being rejected by y means y ≻_{x*} M(x*). Hence y ≻_{x*} M'(x*) and x* ≻_y M'(y). So {x*, y} is a blocking pair of M'; since x* ∉ R we have x* ≠ 1, so the pair also blocks with respect to the reported profile π', contradicting the stability of M'. (Theorem)

54 Not incentive-compatible for the passive side Deferred acceptance with women proposing is not incentive-compatible for the men. (Figure: a counterexample instance with three men A, B, C and three women together with their preference lists, in which a man can obtain a better partner by misreporting his preferences.)

55 Literature Arrow. A Difficulty in the Concept of Social Welfare. Journal of Political Economy 58(4), 1950. Gibbard. Manipulation of Voting Schemes: A General Result. Econometrica 41(4), 1973. Satterthwaite. Strategy-proofness and Arrow's Conditions: Existence and Correspondence Theorems for Voting Procedures and Social Welfare Functions. Journal of Economic Theory 10(2), 1975. Moulin. On Strategy-proofness and Single Peakedness. Public Choice 35(4), 1980. Ching. Strategy-proofness and Median Voters. International Journal of Game Theory 26(4), 1997. Roth. Incentive Compatibility in a Market with Indivisible Goods. Economics Letters 9(2), 1982. Shapley, Scarf. On Cores and Indivisibility. Journal of Mathematical Economics 1(1), 1974.

56 Literature Roth, Postlewaite. Weak versus Strong Domination in a Market with Indivisible Goods. Journal of Mathematical Economics 4(2), 1977. Roth, Sönmez, Ünver. Pairwise Kidney Exchange. Journal of Economic Theory 125(2), 2005. Gale, Shapley. College Admissions and the Stability of Marriage. American Mathematical Monthly 69, 1962. Dubins, Freedman. Machiavelli and the Gale-Shapley Algorithm. American Mathematical Monthly 88(7), 1981. Roth. The Economics of Matching: Stability and Incentives. Mathematics of Operations Research 7(4), 1982. Brandt, Conitzer, Endriss, Lang, Procaccia. Handbook of Computational Social Choice, 2016 (Chapter 14). Nisan, Roughgarden, Tardos, Vazirani. Algorithmic Game Theory, 2007 (Chapters 9 and 10). Roughgarden. Twenty Lectures on Algorithmic Game Theory, 2016 (Chapters 9.4 and 10).