Will an LLM-based AI be used for a law enforcement decision before 2025?

This prediction is specifically about law enforcement, not law as a whole. That is, LLM output is used for something like determining whether someone should be investigated or arrested. Think police, not lawyers.

This prediction will be resolved as "yes" iff reliable media sources or official court documents in any country show that an LLM was used in any capacity by law enforcement to make a decision as described above.

Please keep the following conditions in mind:

  • It does not include LLM output being used to argue a case, or being used as evidence in a case.

  • It does not include the use of an LLM in some criminal capacity leading to someone's arrest (as the AI was used by the guilty party, not law enforcement).

  • It does not include the use of some other kind of AI or machine learning, either known today or developed in the future.

  • It does not include law enforcement using an LLM in a non-decision-making capacity.

  • It does not matter whether the usage was decisive (i.e., whether the output was the ultimate cause of the decision), successful, or substantial, only that an LLM was used as part of the decision-making process.

  • It does not include usage by intelligence services like the CIA/NSA.


@CharlesFoster According to the resolution criteria:

"It does not include law enforcement using an LLM in a non-decision-making capacity."

I'm not OP, but I'd say that's a no; writing reports is not decision-making.

Does the legality of using an LLM matter? If a court ruled that the use of an LLM as an interrogation tool by law enforcement in a particular case was illegal, would that ruling include facts sufficient to resolve yes, or would it make the facts in that situation not resolve yes?

@DanPowell Good question; the legality does not matter here. In fact, the existence of a legal dispute over whether prior LLM usage was acceptable would be solid evidence for a yes resolution all by itself.

As an interrogation tool… that's a bit more of a special case. My gut says yes, since I don't think I can meaningfully distinguish an interrogator's work from some kind of decision-making.

What if an LLM was used to direct police officers to certain areas, or to teach them about crime rates, or something like that? So not a decision to arrest or not arrest, but to direct police work?

@drcat That would count as yes for our purposes.

https://web.archive.org/web/20231201115230/https://www.nytimes.com/2023/03/30/technology/police-surveillance-tech-dubai.html

Keep an eye on this conference. This article is for 2023's con, but I suspect 2024's is when firms will unveil their LLM bullshit. Honestly, people who work on this tech are slimy worms.


Does the NSA count? Also, this applies globally, yes? I.e., Chinese law enforcement is included?


@troops_h8r NSA no, FBI yes. Intelligence versus law enforcement.

Yes, it applies globally.

LLMs are already widely used to interpret text and audio surveillance material, flagging people for further investigation. This would meet your definition: it is determining if someone should be investigated further. As you've said, it does not need to be decisive or substantial.


@WXTJ Do we have this in reliable media sources or court documents? To my knowledge, everything today is in development and is not actually informing law enforcement activities.

LLMs do not interpret audio. That is another type of model.


@Karu You're right about audio, thanks.

I can't find a reliable source. I might have misunderstood something I read. Thanks for questioning it.

Here is something from CETaS suggesting it's not being used. But who knows.
