Virtual Conference on Social Choice Theory and Applications

Abstracts for presentations on February 5–6, 2021 (all times Eastern Standard Time)


  1. Friday, February 5
    1. Probabilistic Social Choice
      • “Majority Properties of Positional Social Preference Correspondences”
        Mostapha Diss and Michele Gori

        We characterize the positional social preference correspondences (spc) satisfying the qualified majority property for any given majority threshold. We also characterize the positional spcs satisfying the minimal majority property. We next evaluate the probability that the Borda, the Plurality, and the Antiplurality rules fulfil the two aforementioned properties under two assumptions on individuals’ preferences, in the presence of three and four alternatives and for various sizes of the society. Our results show that the Borda spc is the positional spc that behaves best with respect to the qualified majority principle and the minimal majority principle. Finally, we offer some remarks on the concept of Condorcet consistency for social choice correspondences.
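
        A minimal sketch of the kind of computation involved, assuming a small made-up three-candidate profile and one plausible reading of the qualified majority property; the profile, the threshold q = 2/3, and the helper names are illustrative only, not the authors' formal framework.

        # Positional scores for three candidates under Borda, Plurality, and
        # Antiplurality, plus a naive check of one reading of the
        # qualified-majority property: whenever a fraction >= q of voters
        # prefers x to y, x's positional score should not fall below y's.
        from itertools import permutations

        CANDIDATES = ["a", "b", "c"]
        # A profile is a list of strict rankings, best candidate first (made up here).
        profile = [("a", "b", "c")] * 4 + [("b", "c", "a")] * 3 + [("c", "a", "b")] * 2

        SCORING_VECTORS = {
            "Borda":         (2, 1, 0),
            "Plurality":     (1, 0, 0),
            "Antiplurality": (1, 1, 0),
        }

        def positional_scores(profile, weights):
            scores = {c: 0 for c in CANDIDATES}
            for ranking in profile:
                for position, candidate in enumerate(ranking):
                    scores[candidate] += weights[position]
            return scores

        def respects_qualified_majority(profile, weights, q):
            n = len(profile)
            scores = positional_scores(profile, weights)
            for x, y in permutations(CANDIDATES, 2):
                support = sum(r.index(x) < r.index(y) for r in profile)
                if support / n >= q and scores[x] < scores[y]:
                    return False
            return True

        for name, weights in SCORING_VECTORS.items():
            print(name, positional_scores(profile, weights),
                  "qualified majority at q = 2/3:",
                  respects_qualified_majority(profile, weights, 2 / 3))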

      • “Exact Condorcet Efficiency of the Plurality Rule with Five Candidates”
        Mostapha Diss, Abdelhalim El Ouafdi, Issofa Moyouwou, and Hatem Smaoui

        By an appropriate use of polyhedral symmetries, Schürmann [Schürmann, A. (2013). Exploiting polyhedral symmetries in social choice. “Social Choice and Welfare”, 40(4), 1097-1110] was the first to evaluate the exact Condorcet efficiency of the plurality rule with four candidates under the Impartial Anonymous Culture (IAC) assumption. We combine polyhedral symmetries and the Gehrlein-Fishburn method to provide the exact Condorcet efficiency of the plurality rule with five candidates. This is promising news, suggesting that it is still possible to extend existing results on the probabilities of voting outcomes under IAC, so far limited to at most four candidates, to elections with five or more candidates.
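
        A rough Monte Carlo sketch of the quantity in question, assuming the large-electorate limit of IAC can be simulated by drawing vote shares over the 120 rankings from a flat Dirichlet distribution; the paper itself computes these probabilities exactly via polyhedral symmetries, so the estimate below is only an approximation.

        import numpy as np
        from itertools import permutations

        rng = np.random.default_rng(0)
        CANDS = range(5)
        RANKINGS = list(permutations(CANDS))                 # 5! = 120 strict rankings

        # PREF[r, x, y] = 1 if ranking r places x above y; TOP[r, x] = 1 if x is ranked first.
        PREF = np.zeros((len(RANKINGS), 5, 5))
        TOP = np.zeros((len(RANKINGS), 5))
        for i, r in enumerate(RANKINGS):
            TOP[i, r[0]] = 1
            for x in CANDS:
                for y in CANDS:
                    if r.index(x) < r.index(y):
                        PREF[i, x, y] = 1

        trials, with_cw, agree = 50_000, 0, 0
        for _ in range(trials):
            shares = rng.dirichlet(np.ones(len(RANKINGS)))   # IAC, large-electorate limit
            majority = np.einsum("r,rxy->xy", shares, PREF)  # pairwise support matrix
            beats_all = [(majority[x] > 0.5).sum() == 4 for x in CANDS]
            if any(beats_all):
                cw = beats_all.index(True)                   # the Condorcet winner
                with_cw += 1
                agree += int(np.argmax(shares @ TOP) == cw)  # does plurality pick it?

        print("Estimated Condorcet efficiency of plurality, m = 5:", agree / with_cw)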

      • “The U.S. Electoral College and the Probability of Disputable Outcomes Under Direct and Indirect Elections”
        Vincent Merlin and Jack Nagel

        Defenders of the Electoral College routinely invoke a traditional argument to reject proposals for a national popular vote. Granted, they say, Florida in 2000 was a national nightmare, but the agony would be far greater if such a dispute extended over the entire nation. Proponents of a direct vote reply by conjecturing that the Electoral College increases the frequency of disputable elections. This paper investigates whether we should expect disputable outcomes to be more frequent under the present indirect system as compared with a direct vote and, if so, by how much. We use two methods: an historical analysis of actual outcomes in presidential elections, and an a priori formal model borrowed from Social Choice Theory (IAC). Depending on the thresholds one posits for disputability, the historical analysis shows that disputable elections have been about two to six times more frequent under the Electoral College, while the most realistic mathematical model produces an impressively compatible intermediate ratio of 4:1.

      • “Statistical Inference in Social Choice”
        Michel Regenwetter and Yu (Rain) Huang

        Imagine that, for each pair of candidates, you collect a random sample from the population and ask each respondent which one among the two candidates they prefer. What inference can you draw about the Condorcet winner, the Condorcet order, the Borda winner, the Borda order, or the agreement/mismatch among Condorcet and Borda in the population from which the samples were drawn? The answer requires nontrivial “order-constrained” statistical methods. I will formulate the problem, sketch current solutions, and provide an overview of open mathematical and computational challenges.
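
        A toy sketch of the inference problem, assuming independent binomial samples for each pair and a naive pair-by-pair test; the candidate names and counts are made up, and this is not the order-constrained methodology the talk describes.

        from math import comb

        def binom_two_sided_p(k, n, p=0.5):
            """Exact two-sided binomial test against p = 0.5."""
            pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
            return sum(q for q in pmf if q <= pmf[k] + 1e-12)

        # Hypothetical data: counts[(x, y)] = (# preferring x to y, sample size).
        counts = {("A", "B"): (61, 100), ("A", "C"): (58, 100), ("B", "C"): (55, 100)}

        for (x, y), (k, n) in counts.items():
            p = binom_two_sided_p(k, n)
            leader = x if k > n - k else y
            print(f"{x} vs {y}: {k}/{n} favor {leader}, two-sided p = {p:.3f}")

        # A naive reading: if every pairwise test significantly favors the same
        # candidate over all rivals, that candidate is inferred to be the Condorcet
        # winner.  The talk's point is that such pair-by-pair testing ignores the
        # joint, order-constrained structure of the hypotheses.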

    2. Comparing Voting Rules
      • “Primacy Effects in Proportional Representation and Single-member District Elections: Evidence From a Natural Experiment in Polish Local Elections”
        Jarosław Flis and Marek Kaminski

        We study the positional effects that occur when voters cast their votes exclusively because a candidate or party is listed first on a ballot. In proportional representation (PR) systems, it is difficult to estimate this effect because parties’ positions on the ballot are typically fixed in all districts and it is impossible to separate the first-position bonus from a party’s normal electoral performance. A rare natural experiment in 2014 local elections in Poland enabled us to estimate such a premium at 7.2% of all votes cast for open-list PR (OLPR) elections. In single-member district (SMD) elections (both majority runoff and first-past-the-post elections), which were conducted simultaneously with the PR elections, there was no first-position bonus. We attribute this difference to the greater complexity of voting in OLPR systems and also to ballot design—whether the ballot is a booklet or a single sheet. We also investigated a smaller subset of constituencies that included voters who faced fewer ballots and who were relatively more interested in the OLPR elections; in this case the premium was substantially lower. Our results show that (a) in OLPR systems, one can expect more positional votes for the first-listed party than in SMD systems and (b) the positional effects are substantially stronger when voters face more elections and when they are more inclined to treat OLPR elections as second-order elections.

      • “Relative Efficiency of Plurality and Borda for Majoritarian Principals”
        Jac C. Heckelman and Robi Ragan

        No Social Choice Function (SCF) is anonymous, neutral, reinforcing, continuous, strict majority consistent, and Condorcet loser consistent across all potential profiles. The first four properties are fairness and logical conditions that define the class of SCFs known as scoring rules. The last two properties are derived from different aspects of majority rule. Borda is the only Condorcet loser consistent scoring rule and plurality is the only strict majority consistent scoring rule. We prove that the only non-trivial numbers of voters and candidates for which Borda or plurality satisfies all the listed properties occur for the same combination of four voters and three candidates. We also show there is no overlap in the set of profiles for which Borda and plurality fail one of the conditions. We estimate absolute and conditional failure rates for joint and individual satisfaction of the properties using the IC distribution for up to 30 candidates and 200 voters. Both rules asymptotically converge to zero failure rates, but Borda does so much more quickly. When there are more than four candidates, plurality has a lower absolute failure rate than Borda only if there are three voters; for four candidates this holds true only when there are five voters. For three candidates we confirm neither rule will violate either property when there are four voters. In all other cases, failures to jointly satisfy the majoritarian properties occur less often under Borda.
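
        A rough Monte Carlo sketch under Impartial Culture estimating two of the failure events discussed above (plurality electing a Condorcet loser, Borda failing strict majority consistency); the values of m, n, the number of trials, and the tie-handling conventions are assumptions for illustration only.

        import random
        from itertools import permutations

        def winners(profile, weights, m):
            score = [0] * m
            for r in profile:
                for pos, c in enumerate(r):
                    score[c] += weights[pos]
            best = max(score)
            return {c for c in range(m) if score[c] == best}

        def condorcet_loser(profile, m):
            n = len(profile)
            for x in range(m):
                if all(sum(r.index(x) < r.index(y) for r in profile) < n / 2
                       for y in range(m) if y != x):
                    return x
            return None

        def strict_majority_winner(profile, m):
            n = len(profile)
            firsts = [sum(r[0] == c for r in profile) for c in range(m)]
            top = max(range(m), key=lambda c: firsts[c])
            return top if firsts[top] > n / 2 else None

        m, n, trials = 4, 25, 20_000
        rankings = list(permutations(range(m)))
        borda, plurality = tuple(range(m - 1, -1, -1)), (1,) + (0,) * (m - 1)
        fail_pl_cl = fail_bo_sm = 0
        for _ in range(trials):
            profile = [random.choice(rankings) for _ in range(n)]          # IC draw
            cl, sm = condorcet_loser(profile, m), strict_majority_winner(profile, m)
            if cl is not None and winners(profile, plurality, m) == {cl}:
                fail_pl_cl += 1        # plurality uniquely elects the Condorcet loser
            if sm is not None and winners(profile, borda, m) != {sm}:
                fail_bo_sm += 1        # Borda fails to (uniquely) elect the majority winner
        print("P(plurality elects Condorcet loser):", fail_pl_cl / trials)
        print("P(Borda violates strict majority):  ", fail_bo_sm / trials)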

      • “Two-Stage Majoritarian Choice”
        Sean Horan and Yves Sprumont

        We propose a class of decisive collective choice rules that rely on a linear ordering to partition the majority relation into two acyclic relations. The first of these relations is used to make a shortlist of the feasible alternatives while the second is used to make a final choice. Rules in this class are characterized by four properties: two classical rationality requirements (Sen’s expansion consistency and Manzini and Mariotti’s weak WARP); and adaptations of two classical collective choice requirements (Arrow’s independence of irrelevant alternatives and Saari and Barney’s no preference reversal bias). These rules also satisfy some other desirable properties including a version of May’s positive responsiveness.
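
        A minimal sketch of one plausible two-stage procedure of this kind, assuming the majority relation is split by agreement with a fixed linear order (each part is then acyclic), the shortlist consists of the maximal feasible alternatives under the first part, and the final choice is maximal under the second; the paper's exact construction may differ.

        from itertools import combinations

        def majority_relation(profile, alternatives):
            """Strict pairwise majority relation as a set of ordered pairs."""
            rel = set()
            for x, y in combinations(alternatives, 2):
                margin = sum(r.index(x) < r.index(y) for r in profile) - \
                         sum(r.index(y) < r.index(x) for r in profile)
                if margin > 0:
                    rel.add((x, y))
                elif margin < 0:
                    rel.add((y, x))
            return rel

        def maximal(feasible, relation):
            """Alternatives in `feasible` that nothing in `feasible` beats."""
            return [x for x in feasible
                    if not any((y, x) in relation for y in feasible)]

        def two_stage_choice(profile, feasible, linear_order):
            majority = majority_relation(profile, list(linear_order))
            pos = {x: i for i, x in enumerate(linear_order)}
            agrees = {(x, y) for (x, y) in majority if pos[x] < pos[y]}    # acyclic
            disagrees = majority - agrees                                  # acyclic
            shortlist = maximal(feasible, agrees)        # stage 1: shortlist
            return maximal(shortlist, disagrees)         # stage 2: final choice (may tie)

        profile = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]      # a majority cycle
        print(two_stage_choice(profile, ["a", "b", "c"], ["a", "b", "c"]))  # -> ['a']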

    3. Presidential Elections
      • “The 2016 Election Inversion in Historical and Theoretical Perspective”
        Nicholas Miller

        This presentation will examine the 2016 Electoral College inversion in light of both the history of presidential elections and various theoretical perspectives on the causes and expected frequency and direction of inversions. In so doing, it will summarize and update my 2012 paper (based on historical data) on “Election Inversions by the U.S. Electoral College” (in Dan S. Felsenthal and Moshe Machover, eds., “Electoral Systems: Paradoxes, Assumptions, and Procedures”, Springer) and revise and update my 2015 PCS presentation (based on simulations) on “Election Inversions by Variants of the U.S. Electoral College”.

      • “Identifying Voter Preferences Through Two-Stage Multivoting Elections: Experiments in the Preface of the 2020 Democratic Primaries”
        Emilia J. Suggs

        This paper examines a method of quantifying voter preferences and behavior using a two-stage multivoting (2SMV) model. The 2SMV model gives voters an endowment of additional votes exceeding the number of policies under consideration in a direct democracy-style election. Voters may freely allocate this endowment to any of the policies up for election. Using the 2SMV mechanism, the paper provides a methodology for identifying voter preferences and voting behavior within a staged multivoting system. From this methodology, three types of voting behavior are defined: policy indifference, strictly-dominating preferences, and fixed-weight preferences. Using experimental data collected from college students, the study evaluates the performance of the two-stage multivoting system in the context of the 2020 Democratic Presidential Primaries, compared to the traditional one-person, one-vote (1P1V) system. Using the full sample of observations, the 1P1V system resulted in a tie between Joe Biden and Bernie Sanders, while the 2SMV system selected Joe Biden over Bernie Sanders as the nominee by a net difference of 206 votes. The study finds that the 2SMV system produces more distinct and more widely separated candidate rankings, reducing the prevalence of ties common within the 1P1V system.
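
        A minimal tallying sketch with made-up ballots and a hypothetical endowment of ten votes per voter, comparing a 1P1V count to a 2SMV count; it is meant only to illustrate the two tallies, not the paper's experimental design.

        from collections import Counter

        ENDOWMENT = 10      # hypothetical per-voter vote endowment under 2SMV

        # Made-up ballots: (1P1V choice, {candidate: votes allocated under 2SMV}).
        ballots = [
            ("Biden",   {"Biden": 7, "Warren": 3}),
            ("Sanders", {"Sanders": 8, "Biden": 2}),
            ("Biden",   {"Biden": 5, "Sanders": 2, "Warren": 3}),
            ("Sanders", {"Sanders": 6, "Biden": 4}),
        ]

        one_person_one_vote = Counter(choice for choice, _ in ballots)
        two_smv = Counter()
        for _, allocation in ballots:
            assert sum(allocation.values()) <= ENDOWMENT   # stay within the endowment
            two_smv.update(allocation)

        print("1P1V tally:", dict(one_person_one_vote))    # Biden and Sanders tie
        print("2SMV tally:", dict(two_smv))                # the tie is broken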

      • “Modelling the Influence of Campaign Contributions and Advertising on Presidential Elections”
        Maria Gallego

        We provide a stochastic electoral model of a Presidential election in which candidates use the contributions they receive from special interest groups (SIGs) to run their campaigns. Prior to the election, candidates announce their policy platforms and advertising (ad) campaigns and use the contributions of SIGs to generate a SIG policy and ad campaign valence that enhances their electoral prospects. Voters’ preferences depend on candidates’ policies relative to their ideal policy and on candidates’ ad campaign messages relative to their ideal message frequency, their “campaign tolerance level”, and are also influenced by the endogenous SIG policy and ad campaign valences. Voters’ non-campaign evaluation of candidates, the “voters’ mean valence”, and their private idiosyncratic valence also influence their choices. In equilibrium, candidates’ critical campaigns depend on candidates’ weighted electoral mean (the electoral pull) and on the marginal effect that the SIG valences (the SIG pull) have on voters’ choices. In a local Nash equilibrium (LNE), candidates’ campaigns balance the electoral and SIG pulls. Candidates’ campaigns constitute a strong (weak) LNE of the election if the expected vote shares of all candidates are greater than the sufficient (necessary) pivotal vote shares, which happens only when enough voters vote for each candidate with high enough probability. If the expected vote share of at least one candidate is lower than its necessary pivotal vote share, then the critical campaigns are not an LNE of the election.

  2. Saturday, February 6
    1. Strategy and Uncertainty in Social Choice
      • “Chicken Games in Approval Voting”
        James Green-Armytage

        In certain single-winner elections using approval voting, the payoff structure faced by two factions within a broader political coalition resembles a game-theoretical "chicken" game: If one faction plays "straight" by approving only its own candidate, while the other faction plays "swerve" by approving both candidates, the "straight" faction will enjoy its most-preferred outcome. If both factions play "straight," they each receive a "crash" payoff where neither of their preferred candidates wins. The purpose of this paper is to estimate the frequency with which this dynamic may arise in practical approval-voting elections, and to characterize its instances in terms of a unit square where the outcome is determined by the percentage of each faction playing "straight." We perform this analysis using three data-generating processes: American National Election Studies survey data, a spatial model, and an impartial anonymous culture model.
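
        A stylized sketch of the payoff structure, assuming made-up bloc sizes and sincere voting outside the two factions; it enumerates the four straight/swerve combinations and reports the approval-voting winners.

        # Factions A and B back coalition candidates a and b, while a rival
        # candidate c has its own bloc.  Each faction either bullet-votes its own
        # candidate ("straight") or approves both coalition candidates ("swerve").
        SIZE = {"A": 30, "B": 25, "C": 45}          # hypothetical bloc sizes

        def approval_winners(a_straight, b_straight):
            approvals = {"a": SIZE["A"], "b": SIZE["B"], "c": SIZE["C"]}
            if not a_straight:                      # faction A also approves b
                approvals["b"] += SIZE["A"]
            if not b_straight:                      # faction B also approves a
                approvals["a"] += SIZE["B"]
            best = max(approvals.values())
            return {c for c, v in approvals.items() if v == best}

        label = lambda straight: "straight" if straight else "swerve"
        for a_straight in (True, False):
            for b_straight in (True, False):
                print(f"A {label(a_straight)}, B {label(b_straight)}:",
                      approval_winners(a_straight, b_straight))
        # Both straight -> {'c'} (the "crash"); one straight -> that faction's
        # candidate wins; both swerve -> the coalition candidates tie.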

      • “To What Extent Does the Model of Processing Incomplete Rankings Affect the Likelihood of the Truncation Paradox?”
        Eric Kamwa

        Given a voting rule, if some voters can secure a more preferable outcome by providing only part of their sincere rankings of the competing candidates rather than listing their entire preference rankings, this rule is said to be vulnerable to the truncation paradox. The few papers that assess the occurrence of this paradox implicitly assume the pessimistic model, under which, when a voter submits a truncated ballot, only the candidates indicated on the ballot receive points from this voter while the others receive no points. In this paper, we assess the likelihood of the truncation paradox under two other models: the optimistic model and the averaged model. For three-candidate elections and large electorates, we characterize all the voting situations where the truncation paradox can occur for the whole family of one-shot and runoff scoring rules under the averaged and optimistic approaches, and we compute the likelihood of this paradox. It turns out that the choice of model really matters: under the optimistic model, all one-shot scoring rules are immune to the truncation paradox, which is more likely to occur under the pessimistic model than under the averaged model; for each of the scoring runoff rules, the likelihood of the truncation paradox is highest under the pessimistic model and lowest under the optimistic model. Our analysis is performed under the Impartial Anonymous Culture assumption.
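
        A small sketch of how a truncated ballot might be scored under the three models, assuming one common convention: ranked candidates keep their positional scores, while each unranked candidate receives the worst remaining score (pessimistic), the best remaining score (optimistic), or the average of the remaining scores (averaged); the paper's formal definitions may differ in detail.

        def truncated_scores(ranked, all_candidates, weights, model):
            """Score one truncated ballot; assumes at least one candidate is unranked."""
            k = len(ranked)
            remaining = weights[k:]
            filler = {"pessimistic": min(remaining),
                      "optimistic": max(remaining),
                      "averaged": sum(remaining) / len(remaining)}[model]
            scores = {c: filler for c in all_candidates}
            for pos, c in enumerate(ranked):
                scores[c] = weights[pos]
            return scores

        candidates = ["a", "b", "c"]
        borda = (2, 1, 0)
        for model in ("pessimistic", "optimistic", "averaged"):
            # The voter ranks only "a" and truncates the rest of the ballot.
            print(model, truncated_scores(["a"], candidates, borda, model))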

      • “Comparing the Manipulability of Approval, Evaluative and Plurality Voting with Trichotomous Preferences”
        Abdelhalim El Ouafdi, Dominique Lepelley, and Hatem Smaoui

        We consider a framework where voters’ preferences are supposed to be trichotomous and only three candidates are in contention. We compare the following three voting rules on the basis of their vulnerability to strategic manipulation by coalitions of voters: (2,1,0)-Evaluative Voting (EV), Approval Voting (AV), and Plurality Voting (PV). We first assume that the voters do not react to the deviation of some of them from their sincere preferences (naive behavior) and we show that AV dominates PV, which dominates EV. We then take into consideration the possible reactions of voters (non-naive behavior) and we show that AV still dominates PV and EV, but that EV becomes less manipulable than PV.

      • “Belief-Averaging and Relative Utilitarianism: Savage Meets Arrow”
        Florian Brandl

        We consider social welfare functions when the preferences of individual agents and society maximize subjective expected utility in the tradition of Savage. A system of axioms is introduced whose unique solution is the social welfare function that averages the agents’ beliefs and sums up their utility functions, normalized to have the same range. The first distinguishing axiom requires that an act about which beliefs agree becomes socially more preferred if it gains support among the agents. The second is a weakening of Arrow’s independence of irrelevant alternatives that only applies to non-redundant acts.
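
        A small sketch of the aggregation rule the axioms single out, as described above: society's belief is the average of the agents' beliefs, and society's utility sums the agents' utilities after each is normalized to have the same range; the states, outcomes, acts, and numbers below are made up.

        STATES = ["s1", "s2"]
        OUTCOMES = ["x", "y", "z"]

        # Made-up agents: each has a subjective belief over states and a utility
        # function over outcomes.
        agents = [
            {"belief": {"s1": 0.8, "s2": 0.2}, "utility": {"x": 0, "y": 5, "z": 10}},
            {"belief": {"s1": 0.3, "s2": 0.7}, "utility": {"x": 2, "y": 0, "z": 1}},
        ]

        def normalize(utility):                    # rescale to have range [0, 1]
            lo, hi = min(utility.values()), max(utility.values())
            return {o: (v - lo) / (hi - lo) for o, v in utility.items()}

        social_belief = {s: sum(a["belief"][s] for a in agents) / len(agents)
                         for s in STATES}
        social_utility = {o: sum(normalize(a["utility"])[o] for a in agents)
                          for o in OUTCOMES}

        def social_expected_utility(act):          # an act maps states to outcomes
            return sum(social_belief[s] * social_utility[act[s]] for s in STATES)

        f = {"s1": "y", "s2": "x"}                 # two hypothetical acts
        g = {"s1": "z", "s2": "z"}
        print("EU(f) =", social_expected_utility(f), " EU(g) =", social_expected_utility(g))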

    2. Social Welfare Functions and Social Choice Correspondences
      • “Critical-level Sufficientarianism”
        Walter Bossert, Susumu Cato, Kohei Kamaga

        This paper provides an axiological foundation of a sufficientarian theory of justice that has its roots in the utilitarian tradition of using total well-being as a goodness criterion. The class of sufficientarian principles that we advocate are the only ones that satisfy some appealing axioms, all but one of which are based on utilitarian ideas. The remaining property is an axiom we call absolute priority, which is a requirement that we consider to be at the core of sufficientarianism. It postulates that those below the threshold are to be given primary consideration when assessing the relative goodness of distributions of well-being. The class can be traced back to the critical-level generalized-utilitarian principles that are familiar from the literature on population ethics, and our proposal illustrates how utilitarianism can successfully be integrated into a sufficientarian theory. In addition, we examine well-established transfer principles in the context of sufficientarianism and we discuss the links of our contribution to some issues that arise in population ethics.

      • “How to Compare Rulers”
        Emre Ergin

        Given a preference order, how can one compare two other preference orders? We present an intuitive way, based on the von Neumann–Morgenstern expected utility framework, to derive utilities for preferences over some alternatives from utilities for those alternatives. We also cover the implications of this framework and question the use of distances when evaluating social welfare functions.

      • “Maxmin Fixed Agenda Choice Correspondence”
        Somdeb Lahiri

        In this paper we provide an axiomatic characterization of the most-likely-maxmin and the probable-maxmin extended choice correspondences for a decision maker who has state-dependent preferences (represented by a linear order) over the set of alternatives.

      • “Aggregation Without Interpersonal Comparisons?”
        Jake Nebel

        Harsanyi (1955) claimed to derive a weighted-utilitarian social welfare function from principles of individual and social rationality and respect for the unanimous preferences of rational individuals, without assuming the possibility of interpersonal comparisons of utility. This is puzzling, not only because weighted utilitarianism is generally taken to require interpersonal comparisons of some form, but also because it appears to violate a standard lesson drawn from Arrow’s theorem: namely, that “admitting cardinality of utilities without interpersonal comparisons does not change Arrow’s impossibility theorem at all” (Sen 1999, 357).
           For these reasons, the general consensus appears to be that Harsanyi was wrong to claim that his result required no interpersonal comparisons. Some (e.g., Broome 1991) argue that the possibility of interpersonal comparisons of utility is presupposed by one of Harsanyi’s premises. Others (e.g., Mongin 1994) argue that it is a surprising conclusion of the theorem (at least, when the theorem is extended to the multi-profile setting of social welfare functionals), which is not presupposed by any one of the premises. The basis for this view is that, given the Independence of Irrelevant Alternatives, Harsanyi’s premises together entail that the social welfare functional is not invariant to individual-specific positive affine transformations of utility functions.
           I will argue instead that Harsanyi was right: his aggregation theorem (including its multi-profile version) and its weighted-utilitarian conclusion do not require interpersonal comparisons of utility. I defend this conclusion by providing an alternative conception of the quantitative representation of utility, on which utilities are not represented as numbers (such as 1 and 2) but as dimensioned quantities (such as 1 util and 2 utils), where different people’s utilities may be quantities of different dimensions. I argue that, in the framework of dimensioned quantities, standard invariance axioms that are thought to be entailed by various measurability and (non)comparability assumptions do not follow—at least, not without further assumptions about the absence of dimensional constants in social welfare functionals. In particular, cardinal measurability with no interpersonal comparability of utility does not require the social welfare functional to be invariant to individual-specific positive affine transformations of utility functions. I defend this view by analogy to laws of nature relating more familiar dimensioned quantities.
           I conclude by comparing Harsanyi’s aggregation theorem with what Sen and others have thought to be more “direct” arguments for utilitarianism, such as those appealing to cardinally measurable utility with only unit-comparability. I conclude that, far from being more direct, this kind of argument is not even valid when we move to the framework of dimensioned quantities.

    3. Deliberation and Cooperation
      • “Group-Managed Real Options: Voting, Polarization, and Investment Dynamics”
        Lorenzo Garlappi, Ron Giammarino, Ali Lazrak

        We analyze a dynamic investment problem where decisions are made through voting within a group of agents with heterogeneous beliefs. We show that disagreement generates inefficient under-investment (the group rejects projects that every member deems profitable) and inertia (investment is delayed relative to the single-agent case). When facing both investment and abandonment timing decisions, the group's behavior cannot be replicated by that of a representative or median member. These coordination frictions hold in groups of any size and for general voting protocols, and are exacerbated by polarization, investment reversibility, and more stringent voting rules.

      • “Deliberation and Epistemic Democracy”
        Huihui Ding and Marcus Pivato

        We study the effects of deliberation on epistemic social choice, in two settings. In the first setting, the group faces a binary epistemic decision analogous to the Condorcet Jury Theorem. In the second setting, group members have probabilistic beliefs arising from their private information, and the group wants to aggregate these beliefs in a way that makes optimal use of this information. During deliberation, each agent discloses private information to persuade the other agents of her current views. But her views may also evolve over time, as she learns from other agents. This process will improve the performance of the group, but only under certain conditions; these involve the nature of the social decision rule, the group size, and also the presence of “neutral agents” whom the other agents try to persuade.

      • “Centralized Refugee Matching Mechanisms with Hierarchical Priority Classes”
        Dilek Sayedahmed

        In this study I investigate a country acceptance problem in the refugee relocation context. I design two new matching algorithms based on hierarchical priority classes, both of which are to be carried out as a centralized, computerized refugee matching system that matches refugee families with countries. I conduct an axiomatic and fair resource allocation analysis and concentrate on the stability and fairness properties of the matching algorithms that I design. I explicitly model and analyze countries’ preferences together with the mandated prioritization of refugee families by host countries. My analysis considers two types of ranking profiles for countries: the preference profile versus the mandated priority profile. This allows me to capture how the difference between the two profiles creates blocking pairs of countries and refugee families due to the enforced hierarchical priority classes. Since an algorithm that is stable with respect to the mandated priority profile may no longer be stable with respect to countries’ preferences and may lead to potential blocking pairs, I weaken the stability axiom. Because the mandated priorities give certain refugees a lower priority at each country, I recognize the need and importance of taking the preferences of refugees into account. I rule out certain salient blocking pairs by establishing new weak stability axioms: top stability and credible stability. I also provide new fairness axioms: top priority class (PC) fairness and credible PC fairness. I propose two new algorithms to combine countries’ preferences and enforced priority rankings: the Top Prioritization Mechanism and the Deferred Acceptance (DA)-Match Prioritization Mechanism. I show that the Top Prioritization Mechanism is top stable and top PC fair. The DA-Match Prioritization Mechanism is credibly stable and credibly PC fair. For the refugee families, the DA-Match Prioritization Mechanism weakly Pareto dominates the DA that uses countries’ preferences. Beyond the refugee relocation context, the algorithms that I design in this study have other applications and policy implications as well, such as centralized college admissions, the design of public school choice systems, and immigration.
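
        A generic refugee-proposing deferred acceptance sketch over mandated priorities, included only as the standard building block referred to above; the capacities, preference lists, and priority lists are made up, and the paper's Top Prioritization and DA-Match Prioritization Mechanisms modify how preferences and priorities are combined.

        def deferred_acceptance(refugee_prefs, country_priorities, capacities):
            """refugee_prefs: family -> list of countries, best first.
            country_priorities: country -> list of families, highest priority first.
            capacities: country -> number of slots."""
            rank = {c: {f: i for i, f in enumerate(fams)}
                    for c, fams in country_priorities.items()}
            next_choice = {f: 0 for f in refugee_prefs}
            matched = {c: [] for c in country_priorities}
            free = list(refugee_prefs)
            while free:
                f = free.pop()
                prefs = refugee_prefs[f]
                if next_choice[f] >= len(prefs):
                    continue                                  # f stays unmatched
                c = prefs[next_choice[f]]
                next_choice[f] += 1
                matched[c].append(f)
                matched[c].sort(key=lambda g: rank[c][g])     # keep highest-priority families
                if len(matched[c]) > capacities[c]:
                    free.append(matched[c].pop())             # reject the lowest-priority one
            return matched

        refugee_prefs = {"f1": ["DE", "SE"], "f2": ["DE", "SE"], "f3": ["SE", "DE"]}
        priorities = {"DE": ["f2", "f1", "f3"], "SE": ["f1", "f3", "f2"]}
        print(deferred_acceptance(refugee_prefs, priorities, {"DE": 1, "SE": 2}))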

      • “Envy free fair division of tangled cakes and envy free division of graphs up to k items”
        Ayumi Igarashi and William S. Zwicker

        A tangle is a connected topological space constructed by gluing together several copies of the unit interval [0, 1]. We explore which tangles guarantee envy-free (aka EF) allocations of connected shares for n agents, meaning that such allocations exist no matter which non-atomic and countably additive measures represent agents’ valuations. Each single tangle T corresponds in a natural way to an infinite topological class G(T) of multigraphs, infinitely many of which are graphs. This correspondence links EF fair division of tangles to EFk-outer fair division of graphs: the vertices of a graph, treated as indivisible objects, are allocated to the agents, each agent’s share of vertices is contiguous (connected as an induced subgraph), no agent envies another’s share after she pretends some selection of k or fewer vertices disappear from that share, and that disappearance does not destroy contiguity. We know from Bilò et al. that all Hamiltonian graphs guarantee EF1-outer allocations when the number of agents is 2, 3, or 4 and guarantee EF2-outer allocations for arbitrarily many agents.
           We show that exactly six tangles are stringable (roughly, a continuous analogue of Hamiltonian + Eulerian); these guarantee EF allocations of connected shares for any number of agents, and their associated topological classes contain only Hamiltonian graphs. Any nonstringable tangle T has a finite upper bound r on the number of agents for which EF allocations of connected shares are guaranteed. Most graphs in the associated nonstringable topological class G(T) are not Hamiltonian, and a negative transfer theorem shows that for each k ≥ 1 most of these graphs fail to guarantee EFk-outer allocations of vertices for r + 1 or more agents. This answers a question posed in Bilò et al. [1] and explains why a focus on Hamiltonian graphs was necessary for certain results in that paper.
           With bounds on the number of agents, however, we obtain positive results for some nonstringable classes. An elaboration of Stromquist’s moving-knife procedure shows that the nonstringable lips tangle L guarantees envy-free allocations of connected shares for three agents (or for two, but not for four or more). We then modify the discrete version of Stromquist’s procedure (in Bilò et al. [1]) to show that all graphs in the topological class G(L) (most of which are non-Hamiltonian) guarantee EF1-outer allocations for three agents (or for two).