
What deal will Anthropic strike with the Pentagon?

$131,591 volume

2026.05.31
Polymarket


May 31: 15% ($13,073 volume)

June 30: 24% ($23,899 volume)

Each outcome in this market carries identical resolution rules, differing only in its deadline (April 30, May 31, or June 30, 2026, at 11:59 PM ET); the full rules are reproduced below.

The Pentagon's May 1 announcement of agreements with seven leading AI firms, including OpenAI, Google, and SpaceX, to deploy models on classified networks explicitly excluded Anthropic, intensifying a dispute that began in February when the Department of Defense labeled the company a supply chain risk over its refusal to grant unrestricted access to Claude and Mythos for military applications such as autonomous weapons and surveillance. Anthropic's safety-first stance, prioritizing AI safeguards, contrasts with its competitors' willingness to engage, shifting trader sentiment toward lower implied probabilities of a near-term deal amid stalled litigation (a federal judge's March injunction was overturned on appeal in April) and Pentagon CTO Emil Michael's May 7 statement indicating no quick resolution. Watch for federal court rulings or high-level negotiations as key catalysts ahead of the June deadline.

In February 2026, the Pentagon announced it would designate Anthropic as a national security supply chain risk after Anthropic refused to remove AI safety restrictions from its acceptable use policy. Donald Trump subsequently directed all federal agencies to cease using Anthropic's technologies, with a six-month phase-out period for agencies such as the Department of Defense which are actively using Anthropic's products.

This market will resolve to “Yes” if Anthropic and the United States Department of Defense (DOD/Department of War) reach any commercial agreement to allow for the use of Claude or other Anthropic artificial intelligence models by DOD employees by May 31, 2026, 11:59 PM ET. Otherwise, this market will resolve to “No”.

A commercial agreement between Anthropic and a broader set of the US government that grants usage of Anthropic AI models to DOD employees will count. However, agreements or designations that allow Anthropic to offer its services to the DOD but do not constitute an effective agreement for it to do so will not count (e.g., the inclusion of Anthropic on a Master Service Agreement or an Indefinite Delivery/Indefinite Quantity contract would not count).

An official announcement of a qualifying agreement, made within this market’s timeframe, will count, regardless of whether or when the agreement actually goes into effect.

Official announcements that the previously agreed contract between Anthropic and the DOD will be fully or partially reinstated, or otherwise will continue without impediment, will count, so long as this includes extended use of Anthropic AI models by DOD employees beyond any designated phase-out period.

Continued use of Anthropic technologies by DOD employees without a qualifying agreement (e.g., during a six-month phase-out period) will not count. A court ruling that the designation of Anthropic as a supply chain risk is unlawful will not qualify for a “Yes” resolution unless it is accompanied by a reinstatement of Anthropic's DOD contract or a new qualifying Anthropic-DOD agreement.

The primary resolution sources for this market will be official information from Anthropic and the United States federal government; however, a consensus of credible reporting will also be used.
Volume
$131,591
End date
2026.06.30
Market opened
Apr 27, 2026, 11:41 AM ET

Be careful with external links.

Frequently Asked Questions

"What deal will Anthropic strike with the Pentagon?" is a Polymarket prediction market with 3 possible outcomes, where traders buy and sell shares based on which outcome they expect to occur. The current leading outcome is "June 30" at 24%, followed by "May 31" at 14%. Prices reflect real-time crowdsourced probabilities: for example, a share trading at 24¢ means the market assigns that outcome a 24% probability. These probabilities shift continuously as traders react to new developments and information. Shares of the correct outcome can be redeemed for $1 each when the market settles.
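The share arithmetic described above can be sketched in a few lines. This is a minimal illustration, not Polymarket's actual pricing engine, and the function names and example prices are chosen for this sketch:

```python
# Minimal sketch of prediction-market share arithmetic (illustrative only;
# numbers below are the page's example prices, fees ignored).

def implied_probability(price_cents: int) -> float:
    """A share trading at `price_cents` implies a probability of price_cents %."""
    return price_cents / 100.0

def profit_per_share_cents(price_cents: int, outcome_correct: bool) -> int:
    """Profit in cents if held to settlement: a correct share redeems for $1
    (100 cents), an incorrect one for $0."""
    return 100 - price_cents if outcome_correct else -price_cents

# A "June 30" share bought at 24 cents:
print(implied_probability(24))            # 0.24 (the market's 24% estimate)
print(profit_per_share_cents(24, True))   # 76  -> gains $0.76 if it resolves Yes
print(profit_per_share_cents(24, False))  # -24 -> loses the 24-cent stake otherwise
```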

As of today, "What deal will Anthropic strike with the Pentagon?" has generated a total of $131.6K in trading volume since the market launched on Mar 6, 2026. This level of trading activity reflects strong engagement from the Polymarket community and helps ensure the current odds are informed by a deep pool of participants. You can track real-time price movements on this page and trade any outcome directly.

To trade on "What deal will Anthropic strike with the Pentagon?", browse the 3 available outcomes listed on this page. Each outcome displays a current price representing the market's implied probability. To take a position, choose the outcome you consider most likely, select "Yes" if you agree or "No" if you disagree, enter an amount, and click "Trade". If your chosen outcome is correct when the market settles, "Yes" shares pay $1 each; if it is incorrect, they pay $0. You can sell your shares at any time before settlement to lock in profits or cut losses.
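The choice between selling early and holding to settlement can be made concrete with a small sketch (hypothetical prices and position sizes, fees ignored):

```python
# Illustrative P&L for selling before settlement vs. holding to settlement.

def pnl_sell_early_dollars(buy_cents: int, sell_cents: int, shares: int) -> float:
    """Dollar profit from exiting a position before the market settles."""
    return (sell_cents - buy_cents) * shares / 100.0

def pnl_hold_dollars(buy_cents: int, shares: int, outcome_correct: bool) -> float:
    """Dollar profit from holding to settlement: $1 per correct share, else $0."""
    payout = shares if outcome_correct else 0
    return payout - buy_cents * shares / 100.0

# Buy 100 shares at 14 cents; the price later rises to 24 cents.
print(pnl_sell_early_dollars(14, 24, 100))  # 10.0  -> $10 locked in by selling early
print(pnl_hold_dollars(14, 100, True))      # 86.0  -> $86 if held and correct
print(pnl_hold_dollars(14, 100, False))     # -14.0 -> full stake lost if incorrect
```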

The current favorite in "What deal will Anthropic strike with the Pentagon?" is "June 30" at 24%, meaning the market assigns that outcome a 24% probability. The next-closest outcome is "May 31" at 14%. These probabilities update in real time as traders buy and sell shares, reflecting the crowd's latest collective view of the most likely result. Check back often, or bookmark this page, to see how the odds shift as new information emerges.

The settlement rules for "What deal will Anthropic strike with the Pentagon?" define exactly what must happen for each outcome to be declared the winner, including the official data sources used to determine the result. You can review the complete settlement criteria in the "Rules" section above the comments on this page. We recommend reading the rules carefully before trading; they specify the exact conditions, exceptions, and sources governing how this market settles.