Will an LLM better than gpt3.5 run on my rtx 3090 before 2025?
Jan 1 · 96% chance

Inspired by these questions:

/sylv/an-llm-as-capable-as-gpt4-will-run-f290970e1a03

/sylv/an-llm-as-capable-as-gpt4-runs-on-o

Resolution criteria (provisional):

Same as /singer/will-an-llm-better-than-gpt4-run-on, but replace "gpt4" with "gpt3.5".


Note (I'm adding this comment to a few of my markets): I was hoping to do regular early tests of this but it's too far back on my backlog right now. I'm still committing to resolving this properly at the end of the year, however.

Mixtral has just pulled ahead of gpt3.5. When I have time, I'll see if I can find a quantized version I can run that scores no more than 2% worse on the Winograd Schema Challenge.

Mixtral is currently tied with gpt3.5. Can you run it (at least in quantized form)?

@ProjectVictory I noticed that tie, yeah. I'm not sure how to deal with the case of quantized models. EDIT: see below

@ProjectVictory
This is what I'm thinking of doing:

For a quantized model to be eligible, it cannot differ more than 2% from the original model's score on the Winograd Schema Challenge.
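The eligibility rule above can be sketched as a simple check. This is a hypothetical illustration, not the market creator's actual procedure; it assumes "2%" means an absolute gap in accuracy (percentage points) on the Winograd Schema Challenge, which the comment does not specify.

```python
def quantized_model_eligible(original_score: float,
                             quantized_score: float,
                             max_gap: float = 2.0) -> bool:
    """Check the proposed eligibility rule for quantized models.

    Scores are Winograd Schema Challenge accuracies in percent
    (e.g. 85.3). The quantized model qualifies if it scores no
    more than `max_gap` percentage points below the original.
    """
    return (original_score - quantized_score) <= max_gap


# Example: quantized model at 83.5% vs. original at 85.0% (gap 1.5) qualifies;
# quantized model at 82.0% (gap 3.0) does not.
print(quantized_model_eligible(85.0, 83.5))  # True
print(quantized_model_eligible(85.0, 82.0))  # False
```

Note the rule as stated leaves open whether a relative 2% drop was intended instead; the absolute-gap reading is stricter for high-scoring models.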
