Will "Training AGI in Secret would be Unsafe and Un..." make the top fifty posts in LessWrong's 2025 Annual Review?
11% chance
As part of LessWrong's Annual Review, the community nominates, writes reviews, and votes on the most valuable posts. Posts are reviewable once they have been up for at least 12 months, and the 2025 Review resolves in February 2027.
This market will resolve to 100% if the post Training AGI in Secret would be Unsafe and Unethical is one of the top fifty posts of the 2025 Review, and 0% otherwise. The market was initialized to 14%.
This question is managed and resolved by Manifold.
Related questions
Will "My AGI safety research—2024 review, ’25 plans" make the top fifty posts in LessWrong's 2024 Annual Review?
14% chance
Will "AGI Safety and Alignment at Google DeepMind: ..." make the top fifty posts in LessWrong's 2024 Annual Review?
26% chance
Will "A short course on AGI safety from the GDM Ali..." make the top fifty posts in LessWrong's 2025 Annual Review?
14% chance
Will "Current safety training techniques do not ful..." make the top fifty posts in LessWrong's 2024 Annual Review?
10% chance
Will "Safety consultations for AI lab employees" make the top fifty posts in LessWrong's 2024 Annual Review?
7% chance
Will "The case for training frontier AIs on Sumeria..." make the top fifty posts in LessWrong's 2024 Annual Review?
14% chance
Will "AGI Safety & Alignment @ Google DeepMind is h..." make the top fifty posts in LessWrong's 2025 Annual Review?
7% chance
Will "Shallow review of technical AI safety, 2024" make the top fifty posts in LessWrong's 2024 Annual Review?
27% chance
Will "Access to powerful AI might make computer sec..." make the top fifty posts in LessWrong's 2024 Annual Review?
14% chance
Will "This might be the last AI Safety Camp" make the top fifty posts in LessWrong's 2024 Annual Review?
7% chance