*Note: The above risk question is not limited to x-risk; it also considers non-x-risks (which you might call y-risks).
**"Reeled in" means that a scenario unfolds that does not reach x-risk due to various dynamics and/or buffers against total collapse (e.g., machines and AI fail to develop the planning or survival skills needed to navigate the vast non-linear dynamics of a non-superintelligence scenario, and humans in crisis tend to find novel solutions - just one possibility).
My question is predicated on the possibilities that true AGI and superintelligence may not be within reach this century, and that specialisation and excess trust in systems can enable cascading scenarios that nevertheless remain fundamentally, or at least very likely, buffered from wiping out humanity.
The idea that AGI is not within reach this century has the same vibes as claiming that AI will never beat humans at chess or Go, or that an AI could exist that does one but not both.
The reality is that once you have a general-purpose AI system that is human-level or superhuman in most ways, and you then throw tens of billions of dollars and thousands of talented researchers at the problem of getting it to beat humanity at routing around hard problems on the global gameboard, you will very likely succeed.