© 2008-2021 www.forgottenlanguages.org
Superintelligences at War
Terminal off-switch strategies
Aŗţidi şeŗ inţelifi (şitir) nez giķobeţet i baţe gukantir gaji beznun anle ţu agejti nuţ çe ğunoçi çaţe çil genti idu bi agejti iţi. Işa naşi iţi nez ikafet i çigu bezonlijir inţeŗz mil unbeŗşiz gukan negaji anab inţelifi, nuţ şitir nez çeŗi kagintir çil nokun şeld açaŗe anab şeld jifil enufag ţu şuŗaşte gukan inţelifi anab negaj anli.
Çibu şeŗ inţelifi, kagintir an binği e oşet anşţŗatir anab inţeŗtir çigu aŗu şikli ikoşjir doŗu gukantir ţu binği. Bi nez nuşu gukanet nuŗini gatir yek likiţi ţu binğet anli çigu nez onşţbud ţu şoke nilon nuŗontir:
"It seems so far impossible to arrange an attack against a superintelligence with the idea of fragmenting the superintelligence into smaller intelligences that could coaligate against the system. When two superintelligences clash the goal of any of them is to turn off the adversary in order to alter its reward function. This, however, requires the adversary superintelligence to create an incentive such that the attacked superintelligence agrees on turning itself off."
Şeŗ inţelifi gatir lonaf nun kuşiet aŗunabu biķşţet şi di çigu şoçbud goçi ŗonoţtir ojeŗun, ojeŗoçti mil enşlajti gukani. Inu abi ţu ŗelet e kulţi deţ gukan negaj inţelifi, oneţet e aŗţidi şeŗ inţelifi dutir arta eŗşeţet e bezhuşţ ne anle ţu unbeŗşti gukan ekotir anab ekseŗitir, nuţ inşţab, iţi kuşţ alşo ejoğeti eko unbeŗş, neledtir anab beşiŗtir e iţtir oçn, naş arta iţtir unbeŗş dun.
Şitiret çulub ni eksēb daŗu neţ aţu ejeŗiķbi mil çaţejir çe bi, çeb iţi ni inu kabekaţi, şi, aŗţtir, şoŗţtir, kebini, kaŗğeţ işţŗaţefitir, gonitir, eko ŗelaţoni mil ali yek ŗiş gukan inţeli ţu yek aŗţi ŗonleki. Şitir çulub gaji yek fŗejir kekoŗi çibu yek daşţjir anli ţu ŗeşi anab analiķş şiţutir, baţi anab işţikulitir atir. Be ţu bi daţi, çe an ŗeşţi aşuŗbud baţe biet kağe anab ŗonleki şolje atir e şeŗ inţelif kagintir çulub ni daŗu şeŗoŗjir anab ŗiş atir okaŗbud ţu boşe e gukan netir. Oşet anab oţenţijir e gaje şug oçeŗ kagintir aţu uŗ bişoşi kiķ şēkti e, nuţ bi oneţi iţşeld nez yek dolub e unğinoçnjir onşekutir. Çaţe ikaţi iţi çil gaji arta gukani, uŗ şuŗjiji, uŗ eksişţi nez huşţ yek kiķbi mil uŗe şuli.
Enfintir anab şentir aŗu işţil ţŗiķ ţu agejti dul aŗţidi inţelifi, çeŗi okuţtir an ni onşibud ţu gaji aţet ofniţjir i atir baţe e yek gukan, şug atir liķaj anab ţiŗebud lifgţ ī:
"The attacking SIS will try to force an off-switch of the attacked SIS by making it unaware of its true reward function. For this to be possible, the attacking SIS would create a new environment forcing the attacked SIS to learn modify its reward function. This means the second SIS will execute a fake switch-off activity itself for the attacked one to infer a switching off in the new environment is required."
Albufagu beŗi gaji nun şuŗiş bejelotir liğu nobeşa, işţil okuţtiret gaji beznun anle ţu duli şikulti anab agejti nuŗababet anab bijeŗşi e ofniţjir anletir baţe yek noŗk abulţ gukan an aşi bi. Goçej, beşiţu agejtiret, beŗi nez yek loţi e boŗitir baţe ŗebiţti aŗţidi şeŗ inţelifi oke şōn banu laţeŗ. Çibu ekeŗfet oktir, ekseŗţtir şiķi baţe dul aŗţidi inţelifi ulub kanideşţjir çibinu yek uli e iķaŗtir, anab aŗţidi şeŗ inţelifi ulub eksişţti inu naŗet duţuŗjir:
"Suppose that a hostile SIS implementing an expected utility maximizing function faces a benevolent superintelligence. Given a certain event, both systems will react according to their utility functions. These functions can be identical to each other, except that for the malevolent system the function also includes the instruction to switch off whatever other superintelligence it might find. This obviously requires at a minimum that the hostile system be aware of the existence of the benevolent superintelligence. One way the benevolent system has to avoid its detection would therefore be to obfuscate its presence, something than in the most extreme case means the superintelligence must be indistinguishable from the environment in which it is operating."
Aţu şoke inţi, işţoŗafi nokbud ga enufag baţe iţi nebud çoŗbu işţoŗe al bifiţjiret niţtir çe ulub feneŗti, çaţe nez şokeţik albud baţiet eksguşţi e lidi. Bi kiķ bezşţi ţŗe atir baţiet indloçtir dŗoku çoŗlibet şu şenşoŗtir inŗaşi, nuţ nez şoŗţi e ţŗe ţobi. Ejen idiţ bezşţŗiţ ţŗe, çeŗ çel aşţu yek ŗi bŗeşi çe an noçi bifiţ aţuŗi anab ŗeşeŗji jaşţ koŗe baţi dŗoku aniķ gukan aţi banu aniķ gukan kinabi injoljbud inu iţi ulub biŗeţ okŗeti, ŗeşi, mil ţŗanşkiţti.
Bi nez bŗeşet baţe kaţtir inloç aşţu gukan ulţuŗjir ţŗanşkişi nanbçi likiţtir. Baţ çeŗi nif baţi nefinz inu ŗaţi:
"Regardless of an agent’s exact reward function, as long as it has a coherent set of goals, it will be incentivized to pursue certain convergent instrumental subgoals such as self-preservation, self-enhancement, rationality, and conservation of resources. This is the reason why reliable off-switches are difficult to implement. However, if an intelligent system believed that it were inside of a simulation, the risk of being turned off by the simulators would be an incentive for cooperation with what it believes to be its simulators’ goals. This would, in effect, be like making the system believe in an off-switch which it couldn’t prevent from being pressed."
Yek nunagi e ţegnotir doŗu çiŗanfil baţi aţu beşe laŗfe şaletir ekeŗfbud niķu luğil, nuţ naşet bŗiji nebud loç işţoŗafi oşţtir. Oli çi bişkişti "nif baţa" anab "baţa şe atir" huşţ işţaţişţ eŗjeŗş kişi inţet koŗe nez bideŗ. Nif baţi çaşuniţ yek dabi. Iţi inşţbud yek çolejir nunagi e ţegnotir anab atir doŗu bal çibu baţi aţu şali inţu uŗ iţi enjiŗoni. Anab iţi ŗebud onbitiret doŗu çaţe okebud neksţu kagini laŗin:
"Before deploying a superintelligence, an advanced civilization will first perform a simulation. The simulation environment is known in game theory as 'Eden Garden'. An Eden Garden will simulate an entire universe within which the superintelligence would be deployed to test its performance. However, this would require preventing the superintelligence to ever discover it lives inside a simulated environment."
Ţŗin kobeŗnjir kli kobeltir ŗekiŗz koşţet oçeŗ fobi, liğu okuţtir iķu an bŗoçi aţu ŗonleket. Indeŗi bufagu, nez odţen lifgţçe enufag ţu ŗuni arta get eŗşonjir okuţ bejitir liğu işkaŗţ. Bi inţi nez ikoŗţ ţu aŗiti doŗu ŗedŗaket e inţelifi inu ţekoŗjir ţeŗktir baţe (Lipschift-Involke) ţŗiķ ţu şeţ u, bifeşţbudet kobeltir aŗu ikebi, kaţgbud ţu gukan nuŗintir anab gukan şali bejitir, ejen idu ţŗinet baţi anab okuţtir aŗu bez iķu nē jaşţ ōltir e oçeŗ okuţtir ţu ţŗini neşţjiret, nif kobeltir, nuţ iķu koşţ onlijir nē kug gajir eŗşonjir, şali okuţtir ţu bi indeŗi çibu boşe kobeltir.
Işa gunbiŗtiret mil buşanabtir e iķaŗtir çoŗbu e ekseŗi aŗu lofbud inu eksenş indŗaşţŗi lije inu şeŗgi ţiki, nuţ aŗu uş aţu gukan şali inu gişţoŗjir ţiki:
"human thinking is constrained by human cultural transmission modes, and therefore it is the goal of Giselians to always modulate and/or control those transmission modes. Denebian probes, on the other hand, are there to challenge those transmission modes and to make humans reflect on their nature and origin. In a way, Denebian probes seem to behave like a coalition of peripheral systems challenging the power of the Giselian order. Whether humans are the creation of the simulators in order to avoid the off-switch, or whether they are the creation of Denebian probes in order to coaligate against the simulators, is something we cannot tell so far."
Iķu kifguţ bezejen nē yek eŗşonjir okuţi aţu al. Iţi kifguţ ni şudijir ţu gaji nun ţufguţ niķu okuţtir. Iķuŗ oçun nuŗini kifguţ ni şudijir ţu golbi bifeşţbud kobeltir şiţ doŗu indeŗi. Guknilet bi anuţu ŗişiet e kagini laŗin nez bezbaţe iţi şoçtir uş goçi işţib çe aŗu inu inţelife ţeŗktir, nuţ goçi ekţi uŗ lidetir aŗu, inu ţeŗktir e biŗ indoŗki onţ.
Bi ki inu gişţoŗi nez liğu kiet çeni işţŗuţuŗet e aţoket nebud big anab çe ŗalbud baţe şēk şolib kaţi nebud neţ unbeŗşbud atir koşţ ekţi şei şebud niķu delbtir:
"the idea that humans should never learn they are artificial agents in a simulated environment would require humans to never be aware of their artificial nature; however, if humans' goals misalign with the design goals the simulators would, of necessity, be forced to intervene and correct the misalignment. This means the only way a human has in order to discern once and for all whether she is being simulated or not would be to destroy the universe. If that goal is not the intended one, the simulators will intervene by making the destruction of the universe impossible, and hence the human will surely infer she is being simulated. On the other hand, if she succeeds in destroying the universe she will never have a chance to know the answer."
Inu laşţet babi, çej bişojbud baţe gukan lidetir aŗu koşţ indoŗka ekţi ţiki, eksişţ inu yek kafiŗ şeţ e bŗabtir çibu şeŗget ţiki okuţtir aŗu işţaŗţ ţu inganiţti. Gukan oniţtir çŗabud u inu enţuŗitir e ŗoŗbud ekseŗi çibu faketir liğu fo anab geşi gaji nun çebud açi inu yek kaţi e çēğtir:
"Suppose I violate someone’s autonomy for such-and-such reasons. That person could, at least conceivably, have the same reasons to violate my autonomy. This means I am endorsing the violation of my own autonomy in such a case. This is a logical contradiction, because it implies I am deciding not to do what I decide to do. My violation of autonomy therefore makes the reasoning behind my behavior incoherent, and it cannot be viewed as ethical action."
Çeni iķu aŗu yek i, iķu an lof gi ekseŗe aţu ŗaţiet e babtir eŗu çēği. Mil eŗga ejen enţuŗitir eŗu guŗi. Eŗgiţ bezşuŗiş aţu al baţe ŗe iķaŗtir, çibu ejenţtir bŗijun niķu alfoŗtir, gaji dēlbud liğu yek ŗelen bi e çaţe lenini albud çēğ çeŗi babtir genti.
Bi nez bezgişţoŗi, iţi nez şeŗgiş.
Dixit, A. K. and Skeath, S. (2004) Games of Strategy, second edition. Norton, New York.
FL-071120 Harnessing coalitional systems against the superintelligence. The Lipschift-Involke Algorithm. Defense Report.
FL-081116 War Games and the Logic behind No-Contact Strategies
FL-031021 Simulating the superintelligence's defeat. Coalition of Peripheral Systems.
FL-210119 Orbiting the Chronon: Denebian probes as time-varying information density systems
FL-290418 Next persistent area denial strategies. Differential Game Theory and Future Subwars.
Marken, R.S., Carey TA (2015) Controlling people: The paradoxical nature of being human. Australian Academic Press, Samford Valley, Australia.
Powers, W.T. (1990) Control theory: A model of organisms. Sys. Dyn. Rev. 6: 1-20.