What do you believe is the upper threshold of harm that AI/machine systems could cause this century (w/ 1-99% threat)?
9% — Cascading effects (y-risk)* that could threaten millions of lives but can be reeled in** by dynamics/buffers that keep x-risk territory out of reach

29% — X-risk that could wipe out the human species through cascading effects without any central plan or conscious intent (potential for resistance, but insufficient)

12% — A global war-scale event that the human species survives, but with losses in the millions or at most billions

49% — X-risk that could wipe out the human species in a short timespan due to a conscious, planning machine intelligence

*Note: The above risk question is not limited to x-risk; it also considers non-x-risks (you might call them y-risks).

**"Reeled in" meaning that a scenario unfolds which does not reach x-risk because of various dynamics and/or buffers against total collapse (e.g., machines and AI fail to develop the planning or survival skills needed to navigate the vast non-linear dynamics of a non-superintelligence scenario, and humans in crisis tend to find novel solutions; that is just one possibility).

My question is predicated on the possibility that true AGI and superintelligence may not be within reach this century, and that specialisation and excess trust in systems could enable cascading scenarios that nonetheless remain fundamentally, or at least very likely, buffered from wiping out humanity.


The idea that AGI is not within reach this century has the same vibes as claiming that AI will never beat humans at chess or Go, or that an AI could exist that does one but not both.

The reality is that once you already have a general-purpose AI system that is human-level or superhuman in most ways, and you then throw tens of billions of dollars and thousands of talented researchers at the problem of getting it to beat humanity at routing around hard problems on the global gameboard, you will very likely succeed.
