Artificial Superintelligence: Coordination & Strategy

Attention in the AI safety community has increasingly started to include strategic considerations of coordination between relevant actors in the field of AI and AI safety, in addition to the steadily growing work on the technical considerations of building safe AI systems. This shift has several reasons: multiplier effects, pragmatism, and urgency. Given the benefits of coordination between those working towards safe superintelligence, this book surveys promising research in this emerging field of AI safety. On a meta-level, the hope is that this book can serve as a map to inform those working in the field of AI coordination about other promising efforts. While this book focuses on AI safety coordination, coordination is important to most other known existential risks (e.g., biotechnology risks) and to future, human-made existential risks. Thus, while most coordination strategies in this book are specific to superintelligence, we hope that some insights yield "collateral benefits" for the reduction of other existential risks by creating an overall civilizational framework that increases robustness, resiliency, and antifragility.

Bibliographic Details
Main Author: Yampolskiy, Roman (auth)
Other Authors: Duettmann, Allison (auth)
Format: Book Chapter
Published: MDPI - Multidisciplinary Digital Publishing Institute 2020
Subjects:
Online Access: Get Fulltext
DOAB: description of the publication
LEADER 04282naaaa2200997uu 4500
001 doab_20_500_12854_41358
005 20210211
020 |a books978-3-03928-759-8 
020 |a 9783039218554 
020 |a 9783039218547 
024 7 |a 10.3390/books978-3-03928-759-8  |c doi 
041 0 |a English 
042 |a dc 
100 1 |a Yampolskiy, Roman  |4 auth 
700 1 |a Duettmann, Allison  |4 auth 
245 1 0 |a Artificial Superintelligence: Coordination & Strategy 
260 |b MDPI - Multidisciplinary Digital Publishing Institute  |c 2020 
300 |a 1 electronic resource (206 p.) 
506 0 |a Open Access  |2 star  |f Unrestricted online access 
520 |a Attention in the AI safety community has increasingly started to include strategic considerations of coordination between relevant actors in the field of AI and AI safety, in addition to the steadily growing work on the technical considerations of building safe AI systems. This shift has several reasons: multiplier effects, pragmatism, and urgency. Given the benefits of coordination between those working towards safe superintelligence, this book surveys promising research in this emerging field of AI safety. On a meta-level, the hope is that this book can serve as a map to inform those working in the field of AI coordination about other promising efforts. While this book focuses on AI safety coordination, coordination is important to most other known existential risks (e.g., biotechnology risks) and to future, human-made existential risks. Thus, while most coordination strategies in this book are specific to superintelligence, we hope that some insights yield "collateral benefits" for the reduction of other existential risks by creating an overall civilizational framework that increases robustness, resiliency, and antifragility. 
540 |a Creative Commons  |f https://creativecommons.org/licenses/by-nc-nd/4.0/  |2 cc  |4 https://creativecommons.org/licenses/by-nc-nd/4.0/ 
546 |a English 
653 |a strategic oversight 
653 |a multi-agent systems 
653 |a autonomous distributed system 
653 |a artificial superintelligence 
653 |a safe for design 
653 |a adaptive learning systems 
653 |a explainable AI 
653 |a ethics 
653 |a scenario mapping 
653 |a typologies of AI policy 
653 |a artificial intelligence 
653 |a design for values 
653 |a distributed goals management 
653 |a scenario analysis 
653 |a Goodhart's Law 
653 |a specification gaming 
653 |a AI Thinking 
653 |a VSD 
653 |a AI 
653 |a human-in-the-loop 
653 |a value sensitive design 
653 |a future-ready 
653 |a forecasting AI behavior 
653 |a AI arms race 
653 |a AI alignment 
653 |a blockchain 
653 |a artilects 
653 |a policy making on AI 
653 |a distributed ledger 
653 |a AI risk 
653 |a Bayesian networks 
653 |a artificial intelligence safety 
653 |a conflict 
653 |a AI welfare science 
653 |a moral and ethical behavior 
653 |a scenario network mapping 
653 |a policymaking process 
653 |a human-centric reasoning 
653 |a antispeciesism 
653 |a AI forecasting 
653 |a transformative AI 
653 |a ASILOMAR 
653 |a judgmental distillation mapping 
653 |a terraforming 
653 |a pedagogical motif 
653 |a AI welfare policies 
653 |a superintelligence 
653 |a artificial general intelligence 
653 |a supermorality 
653 |a AI value alignment 
653 |a AGI 
653 |a predictive optimization 
653 |a AI safety 
653 |a technological singularity 
653 |a machine learning 
653 |a holistic forecasting framework 
653 |a simulations 
653 |a existential risk 
653 |a technology forecasting 
653 |a AI governance 
653 |a sentiocentrism 
653 |a AI containment 
856 4 0 |a www.oapen.org  |u https://mdpi.com/books/pdfview/book/2257  |7 0  |z Get Fulltext 
856 4 0 |a www.oapen.org  |u https://directory.doabooks.org/handle/20.500.12854/41358  |7 0  |z DOAB: description of the publication