Anthropic maintains overwhelming trader consensus, at over 90 percent implied probability, of fielding the top large language model by month-end. That confidence is driven by the February releases of Claude Opus 4.6 and Sonnet 4.6, which posted leading scores on SWE-bench Verified (coding), GPQA Diamond (reasoning), and ARC-AGI-2 (general intelligence) while introducing a 1-million-token context window and native agent teams. These capabilities have widened the gap over OpenAI's GPT-5.5 family and Google's Gemini 3.1 Pro on enterprise-relevant benchmarks, with no comparable frontier updates reported since April. A surprise release or benchmark reversal from Google or OpenAI before May 31 remains the only realistic scenario that could narrow the lead, though current development timelines make such a shift unlikely.
Experimental AI-generated summary referencing Polymarket data. This is not trading advice and does not affect this market's resolution. · Updated

Which company has the #1 AI model end of May? (Style Control On)
Anthropic 91%
Google 10%
OpenAI 1.3%
Alibaba <1%
Meta <1%
xAI <1%
Mistral <1%
Meituan <1%
ByteDance <1%
Baidu <1%
DeepSeek <1%
Microsoft <1%
Amazon <1%
Moonshot <1%
Z.ai <1%
$593,854 Vol.
Results from the "Rank" column under the "Text Arena | Overall" Leaderboard tab at https://lmarena.ai/leaderboard/text with style control on will be used to resolve this market.
Models will be ordered primarily by their leaderboard rank at the market’s check time. If two or more models are tied on rank, they will be ordered by their Arena score, including any underlying, unrounded, granular values reflected in the data below the leaderboard. If a tie remains, alphabetical order of company names as listed in this market group will be used as a final tiebreaker (e.g., if the two models are tied by exact arena score, “Google” would be ranked ahead of “xAI”). This market will resolve based on the company that occupies first place under this ranking system.
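The three-stage ordering above (leaderboard rank, then unrounded Arena score, then alphabetical company name) can be sketched as a single sort key. This is a minimal illustration with hypothetical entries and made-up scores, not data from the live leaderboard:

```python
# Hypothetical leaderboard entries: (company, rank, unrounded Arena score).
# Values are illustrative only and do not reflect the actual leaderboard.
entries = [
    ("Google", 1, 1402.37),
    ("xAI", 1, 1402.37),       # exact tie with Google on rank AND score
    ("Anthropic", 1, 1405.91),
    ("OpenAI", 2, 1398.10),
]

# Order: rank ascending, then Arena score descending (negated),
# then alphabetical company name as the final tiebreaker.
ordered = sorted(entries, key=lambda e: (e[1], -e[2], e[0]))

winner = ordered[0][0]
print(winner)  # Anthropic: shares rank 1 but has the highest Arena score
```

Note how the tied Google/xAI pair falls through to the alphabetical tiebreaker, placing "Google" ahead of "xAI" exactly as the example in the rules describes.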
The resolution source for this market is the Chatbot Arena LLM Leaderboard found at https://lmarena.ai/. If this resolution source is unavailable at check time, this market will remain open until the leaderboard comes back online and will resolve based on the first check after it becomes available. If it becomes permanently unavailable, this market will resolve based on another resolution source.
Market opened: Apr 14, 2026, 5:18 PM ET
Resolver
0x69c47De9D...