Why do trade agreements even exist? Catching the stag (the peace and stability required to keep Afghanistan from becoming a haven for violent extremism) would bring political, economic, and social dividends for all of them. See Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, & Owain Evans, When Will AI Exceed Human Performance? The real peril of a hasty withdrawal of U.S. troops from Afghanistan, though, can best be understood in political, not military, terms. So it seems that the moral of the story is that we are selfish human beings with little patience or trust in others, even when cooperation would bring mutual benefit. [25] For more on the existential risks of Superintelligence, see Bostrom (2014) at Chapters 6 and 8. If the regime allows for multilateral development, for example, the actors might agree that whoever reaches AI first receives 60% of the benefit, while the other actor receives 40% of the benefit. Based on the values that each actor assigns to their payoff variables, we can expect different coordination models (Prisoner's Dilemma, Chicken, Deadlock, or Stag Hunt) to arise.
Table 3. These strategies are not meant to be exhaustive by any means, but they hopefully show how the outlined theory might provide practical use and motivate further research and analysis. On the other hand, Glaser[46] argues that rational actors under certain conditions might opt for cooperative policies. For example, international sanctions involve cooperation against target countries (Martin, 1992a; Drezner). This technological shock factor leads actors to increase weapons research and development and maximize their overall arms capacity to guard against uncertainty. Moreover, the AI Coordination Regime is arranged such that Actor B is more likely to gain a higher distribution of AI's benefits. [3] While (Hare, Hare) remains a Nash equilibrium, it is no longer risk dominant. Let us call a stag hunt game where this condition is met a stag hunt dilemma. In this article, we employ a class of symmetric, ordinal 2 × 2 games, including the frequently studied Prisoner's Dilemma, Chicken, and Stag Hunt, to model the stability of the social contract in the face of catastrophic changes in social relations. Meanwhile, the harm that each actor can expect to receive from an AI Coordination Regime consists of the actor's perceived likelihood that such a regime would create a harmful AI, expressed as P_{h|A}(A∩B) for Actor A and P_{h|B}(A∩B) for Actor B, times each actor's perceived harm, expressed as h_A and h_B. [52] Stefan Persson, "Deadlocks in International Negotiation," Cooperation and Conflict 29, 3 (1994): 211–244. The first technology revolution caused World War I.
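The notions invoked above — that (Hare, Hare) can remain a Nash equilibrium yet lose (or keep) risk dominance — can be made concrete for a symmetric stag hunt. A minimal sketch; the payoff numbers below are illustrative assumptions, not values from the text:

```python
# Hypothetical payoffs for a symmetric 2x2 stag hunt: payoff[(my_move,
# your_move)] is the row player's payoff. Numbers are illustrative.
STAG, HARE = 0, 1
payoff = {
    (STAG, STAG): 4.0, (STAG, HARE): 0.0,
    (HARE, STAG): 3.0, (HARE, HARE): 3.0,
}

def is_nash(move):
    """The symmetric profile (move, move) is a Nash equilibrium if a
    unilateral deviation does not pay."""
    other = 1 - move
    return payoff[(move, move)] >= payoff[(other, move)]

def risk_dominant():
    """Harsanyi-Selten risk dominance for a symmetric 2x2 game:
    compare the loss from deviating at each equilibrium."""
    stag_loss = payoff[(STAG, STAG)] - payoff[(HARE, STAG)]
    hare_loss = payoff[(HARE, HARE)] - payoff[(STAG, HARE)]
    return "stag" if stag_loss > hare_loss else "hare"

assert is_nash(STAG) and is_nash(HARE)  # both pure equilibria survive
print(risk_dominant())  # -> hare: safety wins, though (stag, stag) pays more
```

With these numbers both (Stag, Stag) and (Hare, Hare) are equilibria, but the safe hare equilibrium is risk dominant; raising the stag payoff or lowering the hare payoff flips the comparison.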
[22] Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, "Machine Bias," ProPublica, May 23, 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. The remainder of this subsection briefly examines each of these models and its relationship with the AI Coordination Problem. Not wanting to miss out on the high geopolitical drama, Moscow invited Afghanistan's former president, Hamid Karzai, and a cohort of powerful elites (among them rivals of the current president) to sit down with a Taliban delegation last week. Finally, I discuss the relevant policy and strategic implications this theory has for achieving international AI coordination, and assess the strengths and limitations of the theory in practice. the 'inherent' right to individual and collective self-defence recognized by Article 51 of the Charter and enforcement measures involving the use of force sanctioned by the Security Council under Chapter VII thereof. Therefore, an agreement to play (c,c) conveys no information about what the players will do, and cannot be considered self-enforcing." Julian E. Barnes and Josh Chin, "The New Arms Race in AI," Wall Street Journal, March 2, 2018, https://www.wsj.com/articles/the-new-arms-race-in-ai-1520009261; Cecilia Kang and Alan Rappeport, "The New U.S.-China Rivalry: A Technology Race," March 6, 2018, https://www.nytimes.com/2018/03/06/business/us-china-trade-technology-deals.html. A person's choice to bind himself to a social contract depends entirely on his beliefs about whether the other person or people will choose to do the same. Moreover, the usefulness of this model requires accurately gauging or forecasting variables that are hard to work with. Payoff matrix for simulated Prisoner's Dilemma.
Finally, if both sides defect or effectively choose not to enter an AI Coordination Regime, we can expect their payoffs to be expressed as follows: the benefit that each actor can expect to receive from this scenario is solely the probability that they achieve a beneficial AI times each actor's perceived benefit of receiving AI (without distributional considerations): P_{b|A}(A)·b_A for Actor A and P_{b|B}(B)·b_B for Actor B. Depending on which model is present, we can get a better sense of the likelihood of cooperation or defection, which can in turn inform research and policy agendas to address this. The paper proceeds as follows. It is also the case that some human interactions that seem like prisoner's dilemmas may in fact be stag hunts. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.[26] Finally, in a historical survey of international negotiations, Garcia and Herz[48] propose that international actors might take preventative, multilateral action in scenarios under the commonly perceived global dimension of future potential harm (for example, the ban on laser weapons or the dedication of Antarctica and outer space solely to peaceful purposes). Such a Coordination Regime could also exist in either a unilateral scenario, where one team consisting of representatives from multiple states develops AI together, or a multilateral scenario, where multiple teams simultaneously develop AI on their own while agreeing to set standards and regulations (and potentially distributive arrangements) in advance. The theory outlined in this paper looks at just this and will be expanded upon in the following subsection. Especially as prospects of coordinating are continuous, this can be a promising strategy to pursue with the support of further landscape research to more accurately assess payoff variables and what might cause them to change.
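The expected-payoff bookkeeping above can be sketched numerically. All probabilities and benefit/harm magnitudes below are invented for illustration (the 60/40 split echoes the example given earlier in the text):

```python
# A minimal sketch of the paper's expected-payoff comparison. Every number
# here is an illustrative assumption, not a value from the text.
def coordinate_payoff(p_benefit, benefit, share, p_harm, harm):
    """Expected payoff of joining the Coordination Regime: the chance of a
    beneficial AI times this actor's share of the benefit, minus the chance
    of a harmful AI times the perceived harm."""
    return p_benefit * share * benefit - p_harm * harm

def defect_payoff(p_benefit, benefit, p_harm, harm):
    """Expected payoff of developing alone: full benefit, no sharing, but
    (here assumed) worse odds of a beneficial outcome."""
    return p_benefit * benefit - p_harm * harm

# Actor A under an assumed 60/40 split:
coop = coordinate_payoff(p_benefit=0.8, benefit=100, share=0.6,
                         p_harm=0.1, harm=50)   # 0.8*0.6*100 - 0.1*50 = 43.0
alone = defect_payoff(p_benefit=0.5, benefit=100,
                      p_harm=0.4, harm=50)      # 0.5*100 - 0.4*50 = 30.0
print(coop, alone)  # coordination dominates under these assumed numbers
```

Which of the four game models arises then depends on how each actor's coordinate/defect payoffs rank against the rival's choices, not on the raw numbers themselves.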
In this paper, I develop a simple theory to explain whether two international actors are likely to cooperate or compete in developing AI, and analyze what variables factor into this assessment. A common example of the Prisoner's Dilemma in IR is trade agreements. This is why international trade negotiations are often tense and difficult. Image: The Intelligence, Surveillance and Reconnaissance Division at the Combined Air Operations Center at Al Udeid Air Base, Qatar. In the context of the AI Coordination Problem, a Stag Hunt is the most desirable outcome, as mutual cooperation results in the lowest risk of racing dynamics and the associated risk of developing a harmful AI. Table 13. What are some good examples of coordination games? The field of international relations has long focused on states as the most important actors in global politics. Absolute gains looks at the total effect of the decision, while relative gains only looks at the individual gains in respect to others. Therefore, if it is likely that both actors perceive themselves to be in a state of Prisoner's Dilemma when deciding whether to agree on AI, strategic resources should be especially allocated to addressing this vulnerability. Although most authors focus on the prisoner's dilemma as the game that best represents the problem of social cooperation, some authors believe that the stag hunt represents an equally (or more) interesting context in which to study cooperation and its problems (for an overview see Skyrms 2004). Payoff variables for simulated Chicken game. Hunting stags is most beneficial for society but requires a great deal of trust among hunters. To reiterate, the primary function of this theory is to lay out a structure for identifying what game models best represent the AI Coordination Problem and, as a result, what strategies should be applied to encourage coordination and stability.
These differences create four distinct models of scenarios we can expect to occur: Prisoner's Dilemma, Deadlock, Chicken, and Stag Hunt. Course blog for INFO 2040/CS 2850/Econ 2040/SOC 2090. Link: http://www.socsci.uci.edu/~bskyrms/bio/papers/StagHunt.pdf. If either hunts a stag alone, the chance of success is minimal. [10] "AI expert Andrew Ng says AI is the new electricity | Disrupt SF 2017," TechCrunch Disrupt SF 2017, TechCrunch, September 20, 2017, https://www.youtube.com/watch?v=uSCka8vXaJc. See Carl Shulman, "Arms Control and Intelligence Explosions," 7th European Conference on Computing and Philosophy, Bellaterra, Spain, July 24, 2009: 6. Any individual move to capture a rabbit will guarantee a small meal for the defector but ensure the loss of the bigger, shared bounty. Deadlock occurs when each actor's greatest preference would be to defect while their opponent cooperates. [40] Robert Jervis, "Cooperation Under the Security Dilemma," World Politics 30, 2 (1978): 167-214. Payoff matrix for simulated Deadlock. The complex machinations required to create a lasting peace may well be under way, but any viable agreement (and the eventual withdrawal of U.S. forces that it would entail) requires an Afghan government capable of holding its ground on behalf of its citizens and in the ongoing struggle against violent extremism. This table contains a representation of a payoff matrix. One example addresses two individuals who must row a boat. The corresponding payoff matrix is displayed as Table 10. The payoff matrix would need adjusting if players who defect against cooperators might be punished for their defection.
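The four models are standardly distinguished by a single actor's preference ordering over the four outcomes (mutual cooperation CC, being exploited CD, exploiting DC, mutual defection DD). A minimal sketch of that classification; the payoff numbers passed in are illustrative, not from the text:

```python
# Classify a symmetric ordinal 2x2 game by one actor's preference ordering
# over outcomes. The orderings are the standard ones for each named game.
ORDERINGS = {
    ("DC", "CC", "DD", "CD"): "Prisoner's Dilemma",
    ("DC", "CC", "CD", "DD"): "Chicken",
    ("CC", "DC", "DD", "CD"): "Stag Hunt",
    ("DC", "DD", "CC", "CD"): "Deadlock",
}

def classify(payoffs):
    """payoffs: dict mapping outcome -> this actor's payoff,
    e.g. {"CC": 3, "CD": 0, "DC": 5, "DD": 1}. Assumes strict preferences."""
    ranked = tuple(sorted(payoffs, key=payoffs.get, reverse=True))
    return ORDERINGS.get(ranked, "unclassified")

print(classify({"CC": 3, "CD": 0, "DC": 5, "DD": 1}))  # Prisoner's Dilemma
print(classify({"CC": 4, "CD": 0, "DC": 3, "DD": 1}))  # Stag Hunt
```

Note that only the ordering matters for an ordinal game: any payoff numbers producing the same ranking yield the same model.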
Both actors are more optimistic about Actor B's chances of developing a beneficial AI, but also agree that entering an AI Coordination Regime would result in the highest chances of a beneficial AI. Interestingly enough, the Stag Hunt theory can be used to describe social contracts within society, with the contract being the agreement to hunt the stag, that is, to pursue mutual benefit. It is the goal of this paper to shed some light on these, particularly how the structure of preferences that results from states' understandings of the benefits and harms of AI development leads to varying prospects for coordination. Both nations can benefit by working together and signing the agreement. Together, this is expressed as: P_{b|A}(A∩B)·d_A·b_A − P_{h|A}(A∩B)·h_A for Actor A, and P_{b|B}(A∩B)·d_B·b_B − P_{h|B}(A∩B)·h_B for Actor B, where d_A and d_B are each actor's agreed share of the distributed benefits. One last consideration to take into account is the relationship between the probabilities of developing a harmful AI for each of these scenarios. An example of the game of Stag Hunt can be illustrated by neighbours with a large hedge that forms the boundary between their properties. In this section, I outline my theory to better understand the dynamics of the AI Coordination Problem between two opposing international actors. Robert J. Aumann, "Nash Equilibria are not Self-Enforcing," in Economic Decision Making: Games, Econometrics and Optimisation (Essays in Honor of Jacques Dreze), edited by J. J. Gabszewicz, J.-F. Richard, and L. Wolsey, Elsevier Science Publishers, Amsterdam, 1990. Here, I also examine the main agenda of this paper: to better understand and begin outlining strategies to maximize coordination in AI development, despite relevant actors' varying and uncertain preferences for coordination. If an individual hunts a stag, he must have the cooperation of his partner in order to succeed. If, by contrast, each hunter patiently keeps his or her post, everyone will be rewarded with a lavish feast.
If a hunter leaps out and kills the hare, he will eat, but the trap laid for the stag will be wasted and the other hunters will starve. Economic Theory of Networks at Temple University, economic theory of networks course discussion. [23] United Nations Office for Disarmament Affairs, "Pathways to Banning Fully Autonomous Weapons," United Nations, October 23, 2017, https://www.un.org/disarmament/update/pathways-to-banning-fully-autonomous-weapons/. Scholars of civil war have argued, for example, that peacekeepers can preserve lasting cease-fires by enabling warring parties to cooperate with the knowledge that their security will be guaranteed by a third party. Gray[36] defines an arms race as "two or more parties perceiving themselves to be in an adversary relationship, who are increasing or improving their armaments at a rapid rate and structuring their respective military postures with a general attention to the past, current, and anticipated military and political behaviour of the other parties." For example, if the two international actors cooperate with one another, we can expect some reduction in individual payoffs if both sides agree to distribute benefits amongst each other. A hurried U.S. exit will incentivize Afghanistan's various competing factions more than ever before to defect in favor of short-term gains, on the assumption that one of the lead hunters in the band has given up the fight. It comes with colossal opportunities, but also threats that are difficult to predict. Here, values are measured in utility. As a result, there is no conflict between self-interest and mutual benefit, and the dominant strategy of both actors would be to defect.
Here, both actors demonstrate high uncertainty about whether they will develop a beneficial or harmful AI alone (both actors see the likelihood as a 50/50 split), but they perceive the potential benefits of AI to be slightly greater than the potential harms. In the same vein, Sorenson[39] argues that unexpected technological breakthroughs in weaponry raise instability in arms races. [46] Charles Glaser, "Realists as Optimists: Cooperation as Self-Help," International Security 19, 3 (1994): 50-90. Each model is differentiated primarily by the payoffs to cooperating or defecting for each international actor. Weiss and Agassi wrote about this argument: "This we deem somewhat incorrect since it is an oversight of the agreement that may change the mutual expectations of players that the result of the game depends on. Aumann's assertion that there is no a priori reason to expect agreement to lead to cooperation requires completion; at times, but only at times, there is a posteriori reason for that. How a given player will behave in a given game, thus, depends on the culture within which the game takes place."[8] Rabbits come in the form of different opportunities for short-term gain by way of graft, electoral fraud, and the threat or use of force. In times of stress, individual unicellular protists will aggregate to form one large body. This means that it remains in U.S. interests to stay in the hunt for now, because, if the game theorists are right, that may actually be the best path to bringing our troops home for good. [51] An analogous scenario in the context of the AI Coordination Problem could be if both international actors have developed, but not yet unleashed, an ASI, where knowledge of whether the technology will be beneficial or harmful is still uncertain.
These remain real temptations for a political elite that has survived decades of war by making deals based on short time horizons and low expectations for peace. publications[34] and host the world's most prominent tech/AI companies (US: Facebook, Amazon, Google, and Tesla; China: Tencent and Baidu). One significant limitation of this theory is that it assumes that the AI Coordination Problem will involve two key actors. Continuous coordination through negotiation in a Prisoner's Dilemma is somewhat promising, although a cooperating actor runs the risk of a rival defecting if there is not an effective way to ensure and enforce cooperation in an AI Coordination Regime. So far, the readings discussed have commented on the unique qualities of technological or qualitative arms races. The stag is the reason the United States and its NATO allies grew concerned with Afghanistan's internal political affairs in the first place, and they remain invested in preventing networks, such as al-Qaeda and the Islamic State, from employing Afghan territory as a base. In game theory, the stag hunt, sometimes referred to as the assurance game, trust dilemma, or common interest game, describes a conflict between safety and social cooperation. If both choose to leave the hedge, it will grow tall and bushy, but neither will be wasting money on the services of a gardener. The stag hunt differs from the prisoner's dilemma in that there are two pure-strategy Nash equilibria:[2] one where both players cooperate, and one where both players defect. If they both work to drain it they will be successful, but if either fails to do his part the meadow will not be drained. [50] This is visually represented in Table 3 with each actor's preference order explicitly outlined. If the United States beats a quick path to the exits, the incentives for Afghan power brokers to go it alone and engage in predatory, even cannibalistic behavior, may prove irresistible.
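The claim above that the stag hunt has exactly two pure-strategy Nash equilibria (both cooperate, both defect) can be checked mechanically by enumerating profiles. A small sketch; the payoff tables are illustrative stag-hunt numbers, not taken from the text:

```python
# Enumerate pure-strategy Nash equilibria of a two-player bimatrix game:
# a profile (i, j) is an equilibrium when neither player gains by a
# unilateral deviation.
def pure_nash(row_payoff, col_payoff):
    n, m = len(row_payoff), len(row_payoff[0])
    eq = []
    for i in range(n):
        for j in range(m):
            row_ok = all(row_payoff[i][j] >= row_payoff[k][j] for k in range(n))
            col_ok = all(col_payoff[i][j] >= col_payoff[i][k] for k in range(m))
            if row_ok and col_ok:
                eq.append((i, j))
    return eq

# 0 = stag (cooperate), 1 = hare (defect); symmetric illustrative payoffs
R = [[4, 0], [3, 3]]          # row player's payoffs
C = [[4, 3], [0, 3]]          # column player's payoffs
print(pure_nash(R, C))        # [(0, 0), (1, 1)]: (stag,stag) and (hare,hare)
```

Running the same helper on Prisoner's Dilemma payoffs yields only the mutual-defection profile, which is exactly the structural difference the text points to.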
Since the payoff of hunting stags is higher, these interactions lead to an environment in which the Stag Hunters prosper. Using this intuition, the remainder of this paper looks at strategy and policy considerations relevant to some game models in the context of the AI Coordination Problem. [58] Downs et al., "Arms Races and Cooperation," 143-144. At the same time, a growing literature has illuminated the risk that developing AI could lead to global catastrophe[4] and further pointed out the effect that racing dynamics have on exacerbating this risk. to Be Made in China by 2030, The New York Times, July 20, 2017, https://www.nytimes.com/2017/07/20/business/china-artificial-intelligence.html. [33] Kania, "Beyond CFIUS: The Strategic Challenge of China's Rise in Artificial Intelligence." [34] McKinsey Global Institute, "Artificial Intelligence: The Next Digital Frontier." We see this in the media as prominent news sources highlight, with greater frequency, new developments and social impacts of AI, with some experts heralding it as "the new electricity."[10] In the business realm, investments in AI companies are soaring.
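The claim that Stag Hunters prosper once enough of the population hunts stag can be illustrated with replicator dynamics: the stag share grows exactly when stag hunters' average payoff beats the hare hunters'. A sketch under assumed payoffs (the tipping point near 0.75 is an artifact of these illustrative numbers):

```python
# Replicator-dynamics sketch for a symmetric stag hunt. payoff[i][j] is the
# payoff of strategy i against strategy j (0 = stag, 1 = hare); numbers
# are illustrative assumptions.
def replicator(x, payoff=((4, 0), (3, 3)), dt=0.01, steps=20000):
    """Evolve the share x of stag hunters by Euler steps of the replicator
    equation dx/dt = x * (f_stag - f_bar)."""
    (a, b), (c, d) = payoff
    for _ in range(steps):
        f_stag = a * x + b * (1 - x)          # stag hunter's average payoff
        f_hare = c * x + d * (1 - x)          # hare hunter's average payoff
        f_bar = x * f_stag + (1 - x) * f_hare  # population average
        x += dt * x * (f_stag - f_bar)
    return x

print(round(replicator(0.80), 3))  # starts above the tipping point: -> 1.0
print(round(replicator(0.70), 3))  # starts below it: stag dies out -> 0.0
```

This makes the bistability of the stag hunt vivid: whether Stag Hunters prosper or vanish depends entirely on which basin of attraction the population starts in, which is why trust-building (shifting the initial share) matters so much.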