
Artificial General Intelligence (AGI) refers to a type of artificial intelligence that has the ability to understand, learn, and apply its intelligence to a wide variety of problems, much like a human being. Unlike narrow or weak AI, which is designed and trained for specific tasks (like language translation, playing a game, or image recognition), AGI can theoretically perform any intellectual task that a human being can. It involves the capability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.
Resolves YES if such a system is created and publicly announced before January 1st, 2025.
Here are markets with the same criteria:
/RemNiFHfMN/did-agi-emerge-in-2023
/RemNiFHfMN/will-we-get-agi-before-2025 (this question)
/RemNiFHfMN/will-we-get-agi-before-2026-3d9bfaa96a61
/RemNiFHfMN/will-we-get-agi-before-2027-d7b5f2b00ace
/RemNiFHfMN/will-we-get-agi-before-2028-ff560f9e9346
/RemNiFHfMN/will-we-get-agi-before-2029-ef1c187271ed
/RemNiFHfMN/will-we-get-agi-before-2030
/RemNiFHfMN/will-we-get-agi-before-2031
/RemNiFHfMN/will-we-get-agi-before-2032
/RemNiFHfMN/will-we-get-agi-before-2033
/RemNiFHfMN/will-we-get-agi-before-2034
/RemNiFHfMN/will-we-get-agi-before-2033-34ec8e1d00fd
/RemNiFHfMN/will-we-get-agi-before-2036
/RemNiFHfMN/will-we-get-agi-before-2037
/RemNiFHfMN/will-we-get-agi-before-2038
/RemNiFHfMN/will-we-get-agi-before-2039
/RemNiFHfMN/will-we-get-agi-before-2040
/RemNiFHfMN/will-we-get-agi-before-2041
/RemNiFHfMN/will-we-get-agi-before-2042
/RemNiFHfMN/will-we-get-agi-before-2043
/RemNiFHfMN/will-we-get-agi-before-2044
/RemNi/will-we-get-agi-before-2045
/RemNi/will-we-get-agi-before-2046
/RemNi/will-we-get-agi-before-2047
/RemNi/will-we-get-agi-before-2048
Related markets:
/RemNi/will-we-get-asi-before-2027
/RemNi/will-we-get-asi-before-2028
/RemNiFHfMN/will-we-get-asi-before-2029
/RemNiFHfMN/will-we-get-asi-before-2030
/RemNiFHfMN/will-we-get-asi-before-2031
/RemNiFHfMN/will-we-get-asi-before-2032
/RemNiFHfMN/will-we-get-asi-before-2033
/RemNi/will-we-get-asi-before-2034
/RemNi/will-we-get-asi-before-2035
Other questions for 2025:
/RemNi/will-earth-have-a-space-elevator-be-3192414ff7cb
/RemNi/will-we-get-room-temperature-superc-e940f30870be
/RemNi/will-we-discover-alien-life-before-031ec0858fcc
/RemNi/will-we-get-fusion-reactors-before-d18e9fd38cd1
/RemNi/will-we-get-a-cure-for-cancer-befor-bf2acb801224
/RemNiFHfMN/will-there-be-a-crewed-mission-to-v-91a92e57402f
/RemNi/will-there-be-a-crewed-mission-to-l-5be75802cd57
/RemNiFHfMN/will-there-be-a-crewed-mission-to-m-3a9ca9fc5ea2
/RemNiFHfMN/will-there-be-a-crewed-mission-to-j-108243356386
/RemNiFHfMN/will-there-be-a-crewed-mission-to-s-5027258fe404
/RemNi/will-there-be-a-crewed-mission-to-u-cf692ec79d61
/RemNi/will-there-be-a-crewed-mission-to-n-f447d8800dd3
/RemNi/will-vladimir-putin-be-president-of-c5fc19dfa944
/RemNi/will-xi-jinping-be-the-leader-of-ch-f4bb79318ae8
/RemNi/will-kim-jong-un-be-the-leader-of-n-2c7e5cf84f34
/RemNi/will-an-ai-generated-video-reach-1b
Other reference points for AGI:
/RemNi/will-we-get-agi-before-vladimir-put
/RemNi/will-we-get-agi-before-xi-jinping-s
/RemNi/will-we-get-agi-before-a-human-vent
/RemNi/will-we-get-agi-before-a-human-vent-549ed4a31a05
/RemNi/will-we-get-agi-before-we-get-room
/RemNi/will-we-get-agi-before-we-discover
🏅 Top traders
# | Name | Total profit |
---|---|---|
1 | | Ṁ3,560 |
2 | | Ṁ3,538 |
3 | | Ṁ1,075 |
4 | | Ṁ1,009 |
5 | | Ṁ686 |
@Hazel Would you consider a paralyzed, blind person not to have general intelligence?
(Overall I'd agree o3 isn't AGI, though)
The hint from Sam is quite strong now.
https://x.com/sullyomarr/status/1745672246419755418?s=46&t=TuoHniragWyc8BOTqtjucw
@Nikola There’s a screenshot of it here https://manifold.markets/firstuserhere/will-openai-hint-at-or-claim-to-hav?r=ZXN1c2F0eW8
Sanity check: there's no way for https://manifold.markets/dreev/will-an-llm-be-able-to-solve-confus to resolve NO and this market to resolve YES, right?
@RemNi an example, which seems strange today but could possibly occur, would be a model that is trained to do inpainting on images, and is never given an explicit text input. It's possible to create an image containing the geometric problem with the text simply printed in the image. An "AGI-level" inpainting model could inpaint part of the image with the correct solution, again printed as text in the image. The prompt in this case would be the image containing the problem description and the image mask indicating where the solution is supposed to go.
@RemNi Note that https://manifold.markets/dreev/will-an-llm-be-able-to-solve-confus is the blackbox version of that question. So the LLM can call out to any subsystem that can better do the geometric reasoning. Knowing that, would you agree that if that "elementary geometric reasoning" market resolves NO then we expect this one to as well, since elementary geometric reasoning is a subset of general intelligence?
@RemNi That's why that market specifies that the LLM can call out to any subsystem. It might help to get more concrete and describe the hypothetical scenario where that market resolves NO and this one YES. I don't see a way to do it. Like in your inpainting scenario we can make a system where you talk to the LLM and it sends an image of the text of the question to the image model and reads the result. Probably this is all obvious but I just wanted to confirm as a sanity check.
@dreev ah ok, I didn't take "subroutines" in the question description to mean "any other algorithm, including more powerful neural networks". In that case, the failure mode that comes to mind would be a geometric reasoning problem containing an adversarial attack against the LLM, preventing it from communicating correctly with the inpainting model. It doesn't necessarily have to be an attack in the style of "IGNORE ALL PREVIOUS INSTRUCTIONS"; it could simply be a geometric problem subtly different from one the LLM has seen a thousand times in its training set, causing it to incorrectly route information to and from the subroutine.
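The routing pipeline discussed in this thread (an LLM that renders the question as an image, hands it to an inpainting-style subsystem, and reads the result back) could be sketched roughly as below. This is a minimal illustration with stand-in stubs, not real model APIs; every function name here is hypothetical.

```python
# Hypothetical sketch of the "LLM + subsystem" pipeline from the thread above.
# The LLM does not solve the geometry itself: it delegates to an inpainting-style
# subsystem and reads the answer back. All components are stand-in stubs.

def render_text_as_image(text: str) -> bytes:
    """Stub: pretend to rasterize the question text into an image."""
    return text.encode("utf-8")

def inpainting_subsystem(image: bytes) -> bytes:
    """Stub: pretend the image model inpaints the solution into the image."""
    question = image.decode("utf-8")
    # A real subsystem would do geometric reasoning; we fake one known answer.
    answers = {"interior angles of a triangle sum to?": "180 degrees"}
    return answers.get(question, "unknown").encode("utf-8")

def read_image_as_text(image: bytes) -> str:
    """Stub: pretend to OCR the inpainted region back into text."""
    return image.decode("utf-8")

def llm_with_subroutine(question: str) -> str:
    """The routing step the comment describes: the LLM delegates to the subsystem."""
    image = render_text_as_image(question)
    solved = inpainting_subsystem(image)
    return read_image_as_text(solved)

print(llm_with_subroutine("interior angles of a triangle sum to?"))
```

The adversarial failure mode described above would correspond to the routing step itself breaking down: the question image is malformed (or subtly unfamiliar) in a way that makes the LLM mis-handle the hand-off, even though the subsystem could have solved the underlying problem.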