Traders currently assign Anthropic a 71.5% implied probability of fielding the second-best large language model by end of June, driven by Claude Opus 4.6 and 4.7 variants posting leading scores on GPQA Diamond, SWE-Bench Verified, and long-context reasoning benchmarks. These results reflect targeted advances in agentic coding and precise output quality that have narrowed the gap with OpenAI’s GPT-5.5 family, while Google’s Gemini 3.1 Pro trails at 14.5% amid slower momentum on developer-preferred tasks. Recent competitive releases from all three labs within weeks of one another have kept the ranking fluid, yet Anthropic’s consistent edge in enterprise coding workflows and lower hallucination rates on complex prompts continue to anchor market sentiment. Upcoming benchmark updates and any mid-June model refinements remain the key near-term catalysts that could shift these probabilities.
Experimental AI-generated summary based on Polymarket data. This is not trading advice and plays no role in the resolution of this market. · Updated
Which company will have the second-best AI model at the end of June?
Anthropic 72%
Google 15%
OpenAI 9.1%
xAI 2.8%
$400,986 Vol.

Anthropic
72%

Google
15%

OpenAI
9%

xAI
3%

DeepSeek
1%

Microsoft
1%

Meta
1%

Alibaba
1%

Moonshot
<1%

Z.ai
<1%

Meituan
<1%

Baidu
<1%

Mistral
<1%

Amazon
<1%

ByteDance
<1%
Results from the "Rank" column under the "Text Arena | Overall" Leaderboard tab at https://lmarena.ai/leaderboard/text with style control off will be used to resolve this market.
Models will be ordered primarily by their leaderboard rank at the market’s check time. If two or more models are tied on rank, they will be ordered by their Arena score, including any underlying, unrounded, granular values reflected in the data below the leaderboard. If a tie remains, alphabetical order of company names as listed in this market group will be used as a final tiebreaker (e.g., if the two models are tied by exact arena score, “Google” would be ranked ahead of “xAI”). This market will resolve based on the company that occupies second place under this ranking system.
The resolution source for this market is the Chatbot Arena LLM Leaderboard found at https://lmarena.ai/. If this resolution source is unavailable at check time, this market will remain open until the leaderboard comes back online and resolve based on the first check after it becomes available. If it becomes permanently unavailable, this market will resolve based on another resolution source.
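The ranking and tiebreak procedure described above can be sketched in code. The snapshot data below is purely illustrative (invented companies-by-rank values, not real leaderboard numbers), and the function name is an assumption; it only demonstrates the three-level ordering: leaderboard rank ascending, then Arena score descending, then company name alphabetically.

```python
# Sketch of the market's ranking procedure with illustrative data.
# Ordering: (1) leaderboard rank, ascending; (2) Arena score, descending,
# including unrounded values; (3) company name, alphabetically.

def order_models(models):
    """models: list of dicts with 'company', 'rank', and 'score' keys."""
    return sorted(models, key=lambda m: (m["rank"], -m["score"], m["company"]))

# Hypothetical leaderboard snapshot (values are made up for illustration).
snapshot = [
    {"company": "OpenAI",    "rank": 1, "score": 1440.2},
    {"company": "Google",    "rank": 2, "score": 1433.7},  # exact tie with Anthropic
    {"company": "Anthropic", "rank": 2, "score": 1433.7},  # alphabetical tiebreak applies
    {"company": "xAI",       "rank": 4, "score": 1421.0},
]

ranked = order_models(snapshot)
second_place = ranked[1]["company"]  # the company this market would resolve to
```

With this sample data, Anthropic and Google are tied on both rank and exact score, so the alphabetical tiebreaker places Anthropic ahead, making it the second-place company in the sketch.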
Market opened: Oct 10, 2025, 5:27 PM ET
Resolver
0x2F5e3684c...
Be careful with external links.
Frequently asked questions