Recent benchmark evaluations and April 2026 model releases have positioned Anthropic’s Claude Opus 4.7 as the clear market favorite to rank third among frontier large language models by month-end. Traders cite Claude’s sustained leadership on coding-specific metrics such as SWE-bench Verified, where it edges out competitors, while Gemini 3.1 Pro and GPT-5.5 variants hold stronger aggregate scores on reasoning and multimodal tasks. This separation of strengths creates a narrow but stable gap that favors Anthropic for the third slot. With no major capability jumps expected before June, the current ordering is likely to hold, though any surprise improvement in Google’s latest Gemini preview could shift probabilities toward a Google third-place finish.
Experimental AI-generated summary based on Polymarket data. This is not trading advice and does not affect how this market resolves. · Updated

Which company has the third best AI model end of May?
Anthropic 69%
Google 29%
OpenAI 1.3%
xAI <1%
Baidu <1%
Meta <1%
Z.ai <1%
ByteDance <1%
Alibaba <1%
Moonshot <1%
Meituan <1%
DeepSeek <1%
Microsoft <1%
Amazon <1%
Mistral <1%
$91,800 Volume
Results from the "Rank" column under the "Text Arena | Overall" Leaderboard tab at https://lmarena.ai/leaderboard/text with style control off will be used to resolve this market.
Models will be ordered primarily by their leaderboard rank at the market’s check time. If two or more models are tied on rank, they will be ordered by their Arena score, including any underlying, unrounded, granular values reflected in the data below the leaderboard. If a tie remains, alphabetical order of company names as listed in this market group will be used as a final tiebreaker (e.g., if the two models are tied by exact arena score, “Google” would be ranked ahead of “xAI”). This market will resolve based on the company that occupies third place under this ranking system.
The resolution source for this market is the Chatbot Arena LLM Leaderboard found at https://lmarena.ai/. If this resolution source is unavailable at check time, this market will remain open until the leaderboard comes back online and will resolve based on the first check after it becomes available. If it becomes permanently unavailable, this market will resolve based on another resolution source.
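The three-level tiebreak described above (leaderboard rank, then unrounded Arena score, then alphabetical company name) amounts to a single sort key. The sketch below illustrates that ordering; the function name and all leaderboard entries are hypothetical placeholders, not real Arena data.

```python
# Sketch of the market's tiebreak ordering: leaderboard rank first
# (lower is better), then unrounded Arena score (higher is better),
# then company name alphabetically. All data below is made up.

def third_place(entries):
    """Return the company in third place under the market's ranking rules.

    Each entry is (company, rank, arena_score). Negating the score makes
    a single ascending sort apply all three tiebreaks in order.
    """
    ordered = sorted(entries, key=lambda e: (e[1], -e[2], e[0]))
    return ordered[2][0]

# Hypothetical leaderboard snapshot (not real scores):
snapshot = [
    ("OpenAI", 1, 1445.2),
    ("Google", 1, 1445.2),   # tied with OpenAI on rank AND exact score
    ("Anthropic", 2, 1432.7),
    ("xAI", 3, 1429.9),
]
print(third_place(snapshot))  # → Anthropic
```

With the exact-score tie at the top, "Google" sorts ahead of "OpenAI" alphabetically (mirroring the rules' own Google/xAI example), pushing Anthropic into the third slot.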
Market opened: Apr 14, 2026, 5:18 PM ET
Resolver
0x69c47De9D...