3. Exponential growth of technologies
Unlimited possibilities,
including the destruction of humanity:
X-Risks
4. Main X-Risks
Large scale nuclear war
Runaway global warming
Nanotech - grey goo
AI
Synthetic biology
Nuclear doomsday weapons
5. Contest: more than 50 ideas were added
David Pearce and Satoshi Nakamoto
contributed
Crowdsourcing
6. Plan A is complex:
International Control System (Plan A1.1)
Decentralised risk monitoring (Plan A1.2)
Friendly AI (Plan A2)
Rising Resilience (Plan A3)
Space Colonization (Plan A4)
We should pursue all of them simultaneously:
international control, AI, robustness, and space
9. Plan A1.1
International Control
System
Planning
Step 1
Research
• Long term future model
• Comprehensive list of risks, with probability
assessment and prevention roadmap
• Wiki and internet forum
• Integration of approaches, funding,
education, translation
• Additional study areas: biases, law
10. Plan A1.1
International Control
System
Step 2
Preparation
Social
support
• Peer-reviewed journal, conferences,
intergovernmental panel,
international institute
• Popularization (articles, books,
forums, media)
• Public support, street action
• Political support: lobbying, parties
11. Plan A1.1
International Control
System
Step 3
International
Cooperation
First level of
defence on
low-tech level
• … risks
• A group of supernations takes responsibility for
x-risk prevention
• International law about x-risks
• International agencies dedicated to certain risks
• A smaller catastrophe could help unite humanity
12. Plan A1.1
International Control
System
Step 3
Risk
control
First level of
defence on
low-tech level
• International ban on dangerous technologies or
voluntary relinquishment
• Freezing potentially dangerous projects for 30 years
• Concentrate all bio, nano and AI research in several
controlled centers
• Differential technological development: develop
defensive technologies ahead of dangerous ones
13. Plan A1.1
International Control
System
Step 4
Worldwide risk
prevention
authority
Second level of
defence on
high-tech level
• Center for quick response to any emerging risk; x-risks police
• Worldwide video surveillance and control based
on a system of international treaties
• Narrow AI based expert system on x-risks, Oracle AI
• Control over dissemination of knowledge of mass
destruction
14. Plan A1.1
International Control
System
Step 4
Active
shields
Second level of
defence on
high-tech level
• Geoengineering anti-asteroid shield
• Nano-shield – distributed system of
control of hazardous replicators
• Bio-shield – worldwide immune system
• Mind-shield – control of dangerous
ideation by means of brain implants
• Worldwide monitoring, security and control
15. Risks of plan A1.1
A fatal mistake of the world control system, the
worldwide risk prevention authority, or the active
shields leads to a global catastrophe
16. Result
Singleton
«A world order in which there is a single
decision-making agency at the highest level» (Bostrom)
Worldwide government system based on AI
Super AI which prevents all possible risks and provides
immortality and happiness to humanity
Colonization of the solar system, interstellar travel and
Dyson spheres
Colonization of the Galaxy
Exploring the Universe
18. Plan A1.2
Decentralised risk
monitoring
Step 1
Values
Transformation
• The value of the indestructibility of civilization
• Reduction of radical religious (ISIS) or nationalistic
values
• Popularity of transhumanism
• Movies, novels and other works of art that honestly
depict x-risks and motivate their prevention
• Memorial and awareness days: Earth Day, Petrov
Day, Asteroid Day
19. Plan A1.2
Decentralised risk
monitoring
Step 2
Improving
human
intelligence and
morality
• Higher IQ, New rationality, Fighting cognitive biases
• High empathy for new geniuses, lower proportion
of destructive beliefs
• Engineered enlightenment: use brain science
• Prevent the worst forms of capitalism
• Promote best moral qualities
20. Plan A1.2
Decentralised risk
monitoring
Step 3
Cold war
and WW3
prevention
Dramatic
social
changes
• International conflict management authority like
international court
• Large project which could unite humanity
• Antiwar and antinuclear movement
• Cooperative decision theory in international politics
• Prevent brinkmanship
• Prevent nuclear proliferation
21. Plan A1.2
Decentralised risk
monitoring
Step 4
Decentralized
risks
monitoring
• Transparent society: groups of vigilantes,
“Anonymous”-style hacker groups
• Decentralized control: local police, mutual control,
whistle-blowers
• Net based safety: ring of x-risks prevention
organizations
• Economic stimulus: prizes for any risk found and
prevented
22. Plan A2
Friendly AI
Study and Promotion → Solid Friendly AI theory →
AI practical studies → Seed AI → Superintelligent AI
23. Plan A2
Friendly AI
Step 1
Study
and
Promotion
• Study of Friendly AI theory
• Promotion of Friendly AI (Bostrom and Yudkowsky)
• Fundraising (MIRI)
• Slowing other AI projects (recruiting scientists)
• Free FAI education, starter packages in programming
25. Plan A2
Friendly AI
Step 2
Solid
Friendly
AI theory
• Human values theory and decision theory
• Full list of possible ways to create FAI,
and sublist of best ideas
• Proven safe, fail-safe, intrinsically safe AI
• Preservation of the value system during
AI self-improvement (toy sketch after this list)
• A clear theory that is practical to implement
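The value-preservation bullet above names an invariant that can be shown in miniature. A toy Python sketch (not the author's method; the names, probe outcomes and equality test are all hypothetical): a self-improving agent accepts a proposed rewrite only if the successor scores outcomes with the same utility function.

from typing import Callable

Utility = Callable[[str], float]

def utility(outcome: str) -> float:
    # Stand-in for the agent's fixed value system (hypothetical values).
    return {"humanity flourishes": 1.0, "paperclips": 0.0}.get(outcome, 0.5)

def values_preserved(old_u: Utility, new_u: Utility, tests: list[str]) -> bool:
    # Check, by testing rather than proof, that the successor ranks
    # the probe outcomes identically to the current agent.
    return all(old_u(o) == new_u(o) for o in tests)

def accept_rewrite(current_u: Utility, proposed_u: Utility) -> bool:
    probes = ["humanity flourishes", "paperclips", "status quo"]
    return values_preserved(current_u, proposed_u, probes)

assert accept_rewrite(utility, utility)            # same values: accepted
assert not accept_rewrite(utility, lambda o: 1.0)  # value drift: rejected

A real FAI theory would need a proof over all possible outcomes rather than a finite probe set; the sketch only shows where the invariant sits in the self-improvement loop.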
26. Plan A2
Friendly AI
Step 3
AI
practical
studies
• Narrow AI
• Human emulations
• Value loading
• FAI theory promotion to most AI teams; they
agree to implement it and adapt it to their systems
• Tests of FAI theory on non self-improving models
27. Plan A2
Friendly AI
Step 4
Seed
AI
Creation of a small AI capable
of recursive self-improvement and based
on Friendly AI theory
29. Plan A2
Friendly AI
Step 5
Superintelligent
AI
• Seed AI quickly improves itself and
undergoes “hard takeoff”
• It becomes the dominant force on Earth
• AI eliminates suffering, involuntary
death, and existential risks
• AI Nanny – one hypothetical variant of
super AI that only acts to prevent
existential risks (Ben Goertzel)
Possible results: Singleton or Unfriendly AI
32. Plan A3
Rising Resilience
Step 1
Improving
sustainability
of civilization
• Intrinsically safe critical systems
• Growing diversity
• Universal methods of catastrophe
prevention
• Building reserves (food stocks)
• Widely distributed civil defence
33. Plan A3
Rising Resilience
Step 2
Useful ideas
to limit
catastrophe
scale
• Limit the impact of catastrophe: quarantine,
rapid production of vaccines, growing stockpiles
• Increase the time available for preparation:
support general risk research,
connect disease surveillance systems
• Worldwide x-risk prevention exercises
• The ability to quickly adapt to new risks
34. Plan A3
Rising Resilience
Step 3
High-speed
tech dev.
needed to quickly
pass the risk window
• Investment in super-technologies (nanotech, biotech)
• High speed technical progress helps to overcome slow
process of resource depletion
• Invest more in defensive technologies than in offensive ones
36. Plan A3
Rising Resilience
Step 4
Timely
achievement
of immortality
Miniaturization
for survival
and
invincibility
• Nanotech-based immortal body
capable of living in space
• Mind uploading
• Integration with AI
• Earth crust colonization by miniaturized nanotech bodies
• Moving into a simulated world inside a small
self-sustaining computer
40. Plan A4
Space colonisation
Step 1
Temporary
asylums in
space
• Space stations as temporary asylums (ISS)
• Cheap and safe launch systems
41. Plan A4
Space colonisation
Step 2
Space
colonies on
large planets
Creation of space colonies on the Moon and
Mars (Elon Musk) with 100-1000 people
42. Plan A4
Space colonisation
Step 3
Colonization
of the Solar
system
• Self-sustaining colonies on Mars and large
asteroids
• Terraforming of planets and asteroids using
self-replicating robots and building space
colonies there
• Millions of independent colonies inside
asteroids and comet bodies in the Oort
cloud
43. Plan A4
Space colonisation
Step 4
Interstellar
travel
• “Orion”-style, nuclear-powered “generation
ships” with colonists
• Starships which operate on new physical
principles, with immortal people on board
• Von Neumann self-replicating probes with
human embryos
44. Result
Interstellar distributed humanity
Many unconnected human civilizations
New types of space risks (space wars, planetary and stellar
explosions, AI and nanoreplicators, ET civilizations)
48. Plan B
Survive the catastrophe
Step 1
Preparation
• Fundraising and promotion
• Textbook on how to rebuild civilization
(Dartnell’s book «The Knowledge»)
• Hoards with knowledge, seeds and raw
materials (Doomsday vault in Norway)
• Survivalist communities
49. Plan B
Survive the catastrophe
Step 2
Building
• Underground bunkers, space colonies
• Nuclear submarines
• Seasteading
Natural
refuges
• Uncontacted tribes
• Remote villages
• Remote islands
• Oceanic ships
• Research stations in Antarctica
50. Plan B
Survive the catastrophe
Step 3
Readiness
• Crew training
• Crews in bunkers
• Crew rotation
• Different types of asylums
• Frozen embryos
52. Plan B
Survive the catastrophe
Step 4
Miniaturization
for survival
and invincibility
• Earth crust colonization by miniaturized
nanotech bodies
• Moving into a simulated world inside a
small self-sustaining computer
• Adaptive bunkers based on nanotech
53. Plan B
Survive the catastrophe
Step 5
Rebuilding
civilisation after
catastrophe
• Rebuilding population
• Rebuilding science and technology
• Prevention of future catastrophes
56. Plan C
Leave backups
Step 1
Time
capsules with
information
• Underground storage with information
and DNA for future non-human
civilizations
• Eternal disks from the Long Now
Foundation (or M-DISCs)
58. Plan C
Leave backups
Step 2
Messages to ET
civilizations
• Interstellar radio messages with encoded
human DNA (toy encoding sketch below)
• Hoards on the Moon, frozen brains
• Voyager-style spacecraft with information
about humanity
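A toy illustration of the kind of encoding the DNA bullet suggests, written in Python (an assumption for illustration; no real METI project is claimed to use this scheme): each DNA base fits in 2 bits, so a message packs 4 bases per byte.

BASE_TO_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def encode_dna(seq: str) -> bytes:
    # Pack a DNA string into bytes, 4 bases per byte, padding with 'A'.
    seq = seq.upper()
    seq += "A" * (-len(seq) % 4)
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASE_TO_BITS[base]
        out.append(byte)
    return bytes(out)

print(encode_dna("GATTACA").hex())  # 7 bases -> 2 bytes: '8f10'

At 2 bits per base, a 3-billion-base human genome is roughly 750 MB before compression, which sets the scale of such a message.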
60. Plan C
Leave backups
Step 3
Preservation of
earthly life
• Create conditions for the re-emergence of
new intelligent life on Earth
• Directed panspermia (Mars, Europa, space
dust)
• Preservation of biodiversity and highly
developed animals (apes, habitats)
62. Plan C
Leave backups
Step 4
Robot-replicators
in space
• Mechanical life
• Preservation of information about
humanity for billions of years
• Safe narrow AI
63. Result
Resurrection by another civilization
Resurrection of specific people
Creation of a civilization which has many values
and traits in common with humans
64. Plan D
Improbable ideas
Idea 1: Saved by non-human intelligence
Idea 2: Quantum immortality
Idea 3: Strange strategy to escape Fermi paradox
Idea 4: Technological precognition
Idea 5: Manipulation of the extinction probability using Doomsday argument
Idea 6: Control of the simulation (if we are in it)
65. Plan D
Improbable ideas
Idea 1
Saved by
non-human
intelligence
• Maybe extraterrestrials are looking out for us
and will save us
• Send radio messages into space asking for help
if a catastrophe is inevitable
• Maybe we live in a simulation and simulators
will save us
• The Second Coming, a miracle, or life after
death
66. Plan D
Improbable ideas
Idea 2
Quantum
immortality
• If the many-worlds interpretation of QM is true,
an observer will survive any death, including any
global catastrophe (Moravec, Tegmark); one
formalization follows this list
• It may be possible to create an almost one-to-one
correspondence between observer survival and
survival of a group of people (e.g. if all of them
are in a submarine)
• Other human civilizations must exist in the…
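One way to state the argument formally (a sketch, assuming many-worlds and that some branch in which the observer survives always has nonzero amplitude):

\[
P_{\mathrm{subjective}}(\text{alive at } t)
  = P(\text{alive at } t \mid \text{an observation is made at } t) = 1,
\quad\text{even as}\quad
P_{\mathrm{objective}}(\text{alive at } t) = |c_{\mathrm{alive}}(t)|^{2} \to 0 .
\]

The force of the claim is anthropic: observations only ever occur in surviving branches, however small their measure.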
68. Plan D
Improbable ideas
Idea 3
Strange strategy
to escape Fermi
paradox
A random strategy may help us to escape some
dangers that killed all previous civilizations in
space
70. Plan D
Improbable ideas
Idea 4
Technological
precognition
• Prediction of the future based on advanced
quantum technology and avoiding dangerous
world-lines
• Search for potential terrorists using new
scanning technologies
• Special AI to predict and prevent new x-risks
71. Plan D
Improbable ideas
Idea 5
Manipulation
of the extinction
probability
using Doomsday
argument
• Decision to create more observers in case an
unfavourable event X starts to happen, thus
lowering its probability (Bostrom’s UN++ method);
see the sketch after this list
• Lowering the birth rate to gain more time for
the civilization
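A hedged sketch of the Gott/Carter-Leslie estimate that the UN++ idea manipulates: treat your birth rank n as a uniform random draw among all N humans who will ever live; then

\[
P\!\left(\frac{n}{N} > \alpha\right) = 1 - \alpha
\quad\Longrightarrow\quad
N < \frac{n}{\alpha}\ \text{with confidence}\ 1 - \alpha ,
\]

so with n ≈ 10^11 humans born so far and α = 0.05, N < 2 × 10^12 at 95% confidence. UN++ (Bostrom's thought experiment) commits to creating many more observers if event X begins; conditional on X, N would be huge, so our early birth rank would count as evidence against X under this (controversial) self-sampling reasoning.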
73. Plan D
Improbable ideas
Idea 6
Control of the
simulation
(if we are in it)
• Live an interesting life so our simulation isn’t
switched off
• Don’t let the simulators know that we know we
live in a simulation
• Hack the simulation and control it
• Negotiate with the simulators, or pray for help
74. Bad plans
Idea 1: Prevent x-risk research because it only increases risk
Idea 2: Controlled regression
Idea 3: Depopulation
Idea 4: Unfriendly AI may be better than nothing
Idea 5: Attracting good outcome by positive thinking
75. Bad plans
Idea 1
Prevent x-risk
research
because it only
increases risk
• Do not advertise the idea of man-made global
catastrophe
• Don’t try to control risks, as it would only give rise
to them
• As we can’t measure the probability of global
catastrophe, it may be unreasonable to try to
change it
• Do nothing
76. Bad plans
Idea 2
Controlled
regression
• Use small catastrophe to prevent large one
(Willard Wells)
• Luddism (Kaczynski): relinquishment of dangerous
science
• Creation of an ecological civilization without
technology (“World Made by Hand”,
anarcho-primitivism)
• Limitation of personal and collective intelligence to
prevent dangerous science
• … world
78. Bad plans
Idea 3
Depopulation
• Could provide resource preservation and make
control simpler
• Natural causes: pandemics, war, hunger (Malthus)
• Extreme birth control
• Deliberate small catastrophe (bio-weapons)
80. Bad plans
Idea 4
Unfriendly AI
may be better
than nothing
• Any super AI will have some memory about
humanity
• It will use simulations of human civilization to
study the probability of its own existence
• It may share some human values and distribute
them through the Universe
82. Bad plans
Idea 5
Attracting good
outcome by
positive thinking
• Preventing negative thoughts about the end of the world
and about violence
• Maximum positive attitude «to attract» a positive outcome
• … terrorists and superpowers to stop them
• Start partying now
83. The next stage of the research will be the creation
of collectively editable wiki-style roadmaps
They will cover all existing topics of
transhumanism and future studies
An AI system could be created based on the roadmaps,
or could work on their improvement
Dynamic roadmaps
84. You can read all the
roadmaps at:
www.immortality-roadmap.com
www.existrisks.org