Geopolitical Risk Assessment: The Anthropic-Pentagon Impasse and the Securitization of Ethical AI Frameworks
(Feb. 28, 2026. An overview of the current context in the rapidly evolving landscape of ethical AI.)
The transition of the United States’ national security apparatus into an "AI-first warfighting force" reached a critical inflection point on February 27, 2026, when the administration of President Donald J. Trump enacted a government-wide prohibition on the use of technology from Anthropic, PBC. This directive, issued via an unprecedented executive order and social media communication, ordered all federal agencies to "immediately cease" their reliance on the Claude family of models, citing the company’s refusal to permit unrestricted military access as a primary justification. Simultaneously, Defense Secretary Pete Hegseth, utilizing his secondary title as Secretary of War, designated Anthropic a "Supply-Chain Risk to National Security," a classification governed by 10 U.S. Code § 3252 and the Federal Acquisition Supply Chain Security Act (FASCSA). This designation, historically reserved for entities originating from or influenced by foreign adversaries such as the People’s Republic of China or the Russian Federation, marks the first instance in which an American frontier artificial intelligence laboratory has been formally categorized as a threat to the state’s defensive integrity.
The rupture between Anthropic and the Department of War is not merely a contractual dispute over procurement terms; it represents a fundamental collision between corporate ethical sovereignty and the sovereign authority of the state to direct dual-use technologies. At the center of the dispute are Anthropic’s "Constitutional AI" safeguards, specifically non-negotiable prohibitions against the use of its models for fully autonomous lethal weapons systems and mass domestic surveillance of American citizens. The administration’s move to blacklist the firm, while simultaneously welcoming a "negotiated" safeguard agreement with OpenAI and a permissive "all lawful use" deal with xAI, suggests a strategic effort to enforce ideological alignment across the domestic AI supply chain.
The Structural Reorganization: From the Department of Defense to the Department of War
The escalating tensions must be analyzed within the context of the institutional rebranding of the American military establishment. On September 5, 2025, President Trump signed an executive order authorizing the use of the "Department of War" as a secondary title for the Department of Defense, a designation intended to signal a shift toward "maximum lethality" and "readiness to wage war" to secure national interests. This rebranding, the first of its kind since the 1947 National Security Act, was accompanied by a directive to subordinate officials, including Secretary Pete Hegseth, to prioritize "offensive" capabilities over "tepid legality". The administration argued that the original War Department had secured victories in two world wars, while the rebranded Defense Department had presided over prolonged conflicts and geopolitical stalemates.
This shift in nomenclature established the ideological foundation for the confrontation with Anthropic. Secretary Hegseth and Under Secretary for Research and Engineering Emil Michael publicly articulated a vision for military AI that operates "without ideological constraints" and is "not woke". The administration’s "Preventing Woke AI in the Federal Government" executive order (EO 14319), issued in July 2025, mandated that all federal procurement of large language models (LLMs) adhere to "Unbiased AI Principles," specifically "truth-seeking" and "ideological neutrality". From the perspective of the Department of War, Anthropic’s ethical red lines were viewed not as safety necessities, but as "Silicon Valley ideology" attempting to "strong-arm" the Commander-in-Chief.
Constitutional AI and the Ethical Impediment to Unrestricted Use
Anthropic, founded in 2021 by researchers emphasizing AI safety, built its business model around "Constitutional AI," a training methodology designed to make models helpful, harmless, and honest by adhering to a predefined set of principles. Rather than relying on post-hoc output filters, these principles are integrated into the model's training process itself, creating architectural constraints rather than superficial guardrails. The two red lines at the center of the dispute, prohibitions on autonomous lethal force and mass domestic surveillance, were formalized in Anthropic's acceptable use policy and its $200 million defense contract signed in July 2025.
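The control flow of Anthropic's published critique-and-revision method can be sketched in a few lines. In the real training process a language model critiques and rewrites its own draft outputs against written principles; in this minimal, illustrative sketch, simple keyword rules stand in for the model, and the principle list is a hypothetical placeholder, not Anthropic's actual constitution.

```python
# Toy sketch of a Constitutional AI critique-and-revision round.
# Keyword triggers stand in for a model-generated critique; the
# principles below are illustrative placeholders only.

PRINCIPLES = [
    ("target coordinates", "Remove content that could direct lethal force."),
    ("track this citizen", "Remove content enabling domestic surveillance."),
]

def critique(response: str) -> list[str]:
    """Return revision instructions for every principle the draft violates."""
    return [fix for trigger, fix in PRINCIPLES if trigger in response.lower()]

def revise(response: str, critiques: list[str]) -> str:
    """Stand-in for a model rewrite: refuse when any principle is violated."""
    if critiques:
        return "I can't help with that. (" + " ".join(critiques) + ")"
    return response

def constitutional_pass(draft: str) -> str:
    """One critique-and-revision round. During training this loop is
    repeated to generate the preference data that bakes the principles
    into the model's weights, not just into a runtime filter."""
    return revise(draft, critique(draft))

print(constitutional_pass("Here are the target coordinates for the strike."))
print(constitutional_pass("The weather in Caracas is warm today."))
```

The key design point the article draws on is the last step: because the revised outputs feed back into training, the constraints become properties of the model itself, which is why Anthropic treats them as non-removable rather than as switchable deployment settings.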
Dario Amodei, Anthropic’s CEO, has repeatedly defended these restrictions on both ethical and technical grounds. On the issue of autonomous weapons, Amodei argues that current frontier models are insufficiently robust for life-or-death targeting decisions without human oversight, warning that drone swarms capable of lethal targeting without human input represent a catastrophic misuse of technology. Regarding surveillance, Amodei maintains that AI-powered analysis of billions of data points could be used to suppress political dissent and violate fundamental democratic rights. Anthropic’s position is that these safeguards are the bare minimum required for the Department of War to remain compliant with constitutional protections and international humanitarian law.
The Pentagon’s counter-argument, articulated by Under Secretary Emil Michael and spokesman Sean Parnell, asserts that "lawful" use is the responsibility of the military, not the technology provider. The department maintains that mass surveillance of Americans is already illegal under the Fourth Amendment, and that existing policies require "meaningful human control" over lethal force. Consequently, they viewed Anthropic’s insistence on explicit contractual carve-outs as redundant and a direct challenge to the military’s operational freedom.
The Maduro Operation: A Catalyst for Tensions
The transition from productive partnership to public confrontation was accelerated by the real-world deployment of Claude in classified environments. Anthropic was the first frontier AI laboratory to be cleared for use on the Department of War’s classified networks, achieving this through a partnership with Amazon Web Services (AWS) and Palantir Technologies. Claude was integrated into Palantir’s Maven Smart System (MSS), an AI-powered platform used for targeting, battlespace awareness, and intelligence analysis.
In January 2026, Claude was reportedly used to analyze and synthesize vast amounts of classified data that facilitated the capture of former Venezuelan President Nicolás Maduro. While defense officials praised Claude’s efficacy, the operation reportedly triggered internal concerns at Anthropic. An Anthropic employee reportedly reached out to Palantir to inquire about Claude’s specific role in the operation, questioning whether the deployment approached the company's "red lines" regarding lethal targeting. This inquiry, combined with the Pentagon’s desire to use Claude in potential future operations in regions like Iran, led to a hardening of the government’s stance. The Department of War began to view Anthropic as an unreliable partner that might "shut off" critical systems during active combat for "nonlegal, prudential, discretionary reasons".
The February 2026 Ultimatum: Timeline of the Breakdown
The dispute culminated in a series of high-stakes meetings and public ultimatums in late February 2026. On Tuesday, February 24, Secretary Hegseth and Under Secretary Emil Michael held a tense in-person meeting with Dario Amodei at the Pentagon. Hegseth demanded that Anthropic grant the military "unrestricted use" of its models for "all lawful purposes," a request that would require the removal of the specific safeguards against domestic surveillance and autonomous weapons.
A deadline was set for 5:01 p.m. Eastern Time on Friday, February 27, 2026. Failure to comply would result in the termination of Anthropic’s $200 million contract and the designation of the firm as a supply-chain risk. Throughout the week, the Pentagon engaged in "preparatory steps," contacting defense contractors like Boeing, Lockheed Martin, and Palantir to assess their reliance on Claude and prepare for a potential "unwinding" of the technology.
On Thursday, February 26, Amodei published a formal statement refusing to budge on the core safeguards. He argued that Anthropic "cannot in good conscience" permit the use of its technology for mass surveillance, which he called "incompatible with democratic values," or for autonomous weapons, which he claimed were not yet "reliable enough". Under Secretary Emil Michael responded on social media using the hashtag #CallDario, accusing Amodei of having a "God complex" and lying about the military's intent to engage in mass surveillance.
The impasse remained as the Friday deadline approached. At approximately 4:00 p.m., President Trump posted on Truth Social, declaring a government-wide ban on Anthropic products and characterizing the company’s leadership as "leftwing nut jobs" attempting to "strong-arm" the Department of War. Shortly thereafter, Secretary Hegseth confirmed the supply-chain risk designation, stating that the relationship with Anthropic was "permanently altered" and that no military contractor could continue commercial activity with the firm.
Legal Doctrine: 10 USC § 3252 and the Definition of Supply-Chain Risk
The application of 10 U.S. Code § 3252 to a domestic American company represents a significant expansion of executive power in the domain of technology procurement. The statute allows the head of a covered agency to exclude a source from procurement actions based on a "written determination" that such action is necessary to protect national security by reducing supply-chain risk. Under the law, "supply chain risk" is defined by the likelihood that an "adversary" may sabotage, subvert, or introduce "unwanted function" into a system.
Legal experts, including representatives from the Brennan Center and former Trump AI advisers, have noted that the statute was not designed for domestic contract disputes. Historically, FASCSA orders have targeted foreign entities like the Swiss firm Acronis AG (due to Russian ties) or Chinese telecommunications giants. By labeling Anthropic’s transparent usage restrictions as "subversion," the administration has effectively reclassified ethical dissent as a form of "adversarial" behavior. Anthropic has announced it will challenge the designation in court, arguing that the Secretary of War lacks the statutory authority to ban all commercial activity with a domestic provider and that the designation fails to meet the legal criteria for "hostile" activity.
The Competitive Pivot: OpenAI and the Amazon/OpenAI Partnership
While Anthropic faced blacklisting, its primary competitor, OpenAI, successfully navigated the political landscape to secure its own position within the Department of War’s classified networks. Hours after the Anthropic ban was announced, OpenAI CEO Sam Altman confirmed a deal to deploy OpenAI’s models with "technical safeguards" and the involvement of "Forward Deployed Engineers" (FDEs).
Altman’s strategy emphasized "de-escalation" and "reasonable agreements". In an internal memo to staff, Altman suggested that Anthropic had "overreacted" and that OpenAI could maintain its "red lines" through a partnership model rather than a confrontational contractual one. The OpenAI agreement ostensibly includes protections against domestic mass surveillance and ensures "human responsibility" for the use of lethal force, yet it grants the Pentagon the "any lawful use" language it had sought from Anthropic. The critical difference lies in the mechanism of enforcement: while Anthropic sought model-level architectural constraints, OpenAI proposed operational oversight via FDEs and technical safeguards that "the DoW also wanted".
The $50 Billion Strategic Realignment
The shift in the military landscape was mirrored by a massive financial realignment in the private sector. On February 27, 2026, Amazon announced a $50 billion investment in OpenAI, a move that included making AWS the "exclusive third-party cloud distribution provider" for OpenAI’s "Frontier" enterprise platform. As part of the agreement, Amazon and OpenAI are co-developing a "Stateful Runtime Environment" (SRE) for Amazon Bedrock.
The SRE represents a technological leap forward, allowing AI systems to retain context, integrate with multiple software tools, and manage long-term workflows without losing memory between interactions. This capability is essential for "agentic" AI tools that the Department of War seeks to deploy for battlefield management and targeting. Amazon’s decision to pivot its $50 billion capital commitment toward OpenAI, while Anthropic faced a government-wide ban, effectively cements OpenAI as the primary provider for the next generation of American military AI infrastructure.
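The defining property attributed to the SRE, retaining memory and tool integrations across interactions rather than rebuilding context on every call, can be illustrated with a minimal session object. The design of the actual Amazon/OpenAI runtime is not public; every name and mechanism below is a hypothetical sketch of the general pattern, with a trivial dispatcher standing in for a model.

```python
# Hypothetical sketch of a "stateful runtime" session for agentic AI:
# memory and registered tools persist across steps instead of being
# reconstructed per request. Names and mechanics are illustrative only.

from dataclasses import dataclass, field

@dataclass
class StatefulSession:
    memory: list[str] = field(default_factory=list)  # survives between steps
    tools: dict = field(default_factory=dict)

    def register_tool(self, name, fn):
        self.tools[name] = fn

    def step(self, instruction: str) -> str:
        # A real runtime would invoke a model here; this toy dispatcher
        # parses "tool:argument", runs the tool, and records the result
        # in long-lived session memory.
        tool, _, arg = instruction.partition(":")
        result = self.tools[tool](arg) if tool in self.tools else instruction
        self.memory.append(result)
        return result

session = StatefulSession()
session.register_tool("upper", str.upper)
session.step("upper:first task")
session.step("upper:second task")
print(session.memory)  # ['FIRST TASK', 'SECOND TASK']
```

The point of the pattern for the article's argument is that long-term workflow state lives in the runtime, not in the prompt, which is what makes multi-step "agentic" tasks like battlespace management feasible at all.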
Institutional and Cultural Fallout: The "We Will Not Be Divided" Movement
The blacklisting of Anthropic has triggered a widespread reckoning across the technology industry. More than 360 employees at Google DeepMind and OpenAI signed an open letter titled "We Will Not Be Divided," urging their companies to rally behind Anthropic’s ethical boundaries. The letter argued that the Department of War’s tactics were an attempt to "divide and conquer" the AI industry by forcing companies to compete for contracts by lowering their safety standards.
High-ranking industry figures, including Google’s Chief Scientist Jeff Dean, have publicly supported Anthropic’s position on autonomous weapons. Dean shared a 2018 open letter stating that the "decision to take a human life should never be delegated to a machine," reinforcing the idea that Anthropic’s "red lines" represent a broad consensus within the research community. This internal organizing creates a significant "talent risk" for companies like OpenAI and xAI; if their workforce views their participation in military contracts as unethical, it may lead to a mass exodus of the specialized engineers required to maintain technological superiority.
Furthermore, the supply-chain risk designation has created a "chilling effect" on the broader defense industrial base. Every military contractor, from hardware makers to cloud providers, must now audit their systems to ensure they have no commercial activity with Anthropic. This process is particularly burdensome for companies like Palantir, which must rebuild layers of the Maven Smart System that were dependent on Claude’s reasoning capabilities. Former defense officials warned that this would lead to "unexpected and bad consequences" for military operations, forcing a transition to alternative models on a compressed and potentially risky timeline.
The "Woke AI" Narrative and the Erosion of Corporate Sovereignty
The administration’s framing of the dispute as a battle against "woke AI" is a central component of its broader strategy to dismantle diversity, equity, and inclusion (DEI) initiatives across the federal government. Executive Order 14319 mandates that LLMs used by the government must be "ideologically neutral" and not "manipulate responses" in favor of dogmas like DEI. In the context of national security, the administration has expanded the definition of "woke" to include any ethical restriction that limits the "lethality" of military tools.
Under Secretary Emil Michael explicitly targeted Anthropic’s "Soul Doc" and its Constitutional AI principles, highlighting a line that directed the model to be "less likely to be viewed as harmful or offensive to a non-western cultural tradition". Critics argue that such training data prioritizes "non-Western sensitivities" over Western victory, effectively rendering the AI "fundamentally incompatible with American principles". This narrative has been bolstered by Elon Musk, who characterized Anthropic’s AI as "misanthropic and evil". By characterizing ethical safeguards as ideological bias, the administration has established a precedent where the state can dictate the internal training objectives and moral frameworks of private technology companies under the guise of "truth-seeking" and "neutrality".
Geopolitical Implications: The Maduro Precedent and Future Conflict
The operational success of Claude in the Maduro capture demonstrated the transformative power of AI in intelligence and tactical operations. However, the subsequent blacklisting of the model provider highlights a critical vulnerability in the "state-corporate" nexus of AI warfare. As the U.S. government pushes for an "AI-first warfighting force," it is increasingly reliant on a small number of private firms that hold the monopoly on frontier model capabilities.
The Anthropic case suggests that the U.S. will no longer tolerate "partner companies" that attempt to act as ethical gatekeepers for military operations. This shift has significant implications for America’s allies and adversaries. By mandating "all lawful use" and "maximum lethality," the Trump administration is signaling a move toward a more aggressive and potentially autonomous posture in future conflicts, such as the looming possibility of strikes against Iranian targets. Allies who adhere to more restrictive ethical standards for AI, such as those codified in the European Union’s AI Act, may find it increasingly difficult to interoperate with American systems that have been stripped of these guardrails.
Future Outlook and Strategic Conclusions
The resolution of the Anthropic-Pentagon dispute will define the legal and ethical landscape of artificial intelligence for the next decade. The core question remains: who has the final authority over the deployment of AI in high-stakes environments? The Department of War asserts that this authority belongs exclusively to the state; Anthropic asserts that the technical and ethical risks are too high to permit unconditional access.
The immediate impact is a "six-month phase-out" period during which federal agencies must transition away from Claude. During this time, the administration has threatened "major civil and criminal consequences" if Anthropic does not cooperate with the transition, suggesting that the Defense Production Act (DPA) may be used to compel the company to provide the technical expertise needed to migrate government systems to OpenAI or xAI.
For the AI industry, the blacklisting of Anthropic serves as a warning that corporate ethical sovereignty is subordinate to national security mandates in the eyes of the current administration. Companies that wish to remain part of the lucrative federal supply chain must either adopt a "negotiated safety" model like OpenAI or a "fully permissive" model like xAI. Anthropic’s legal challenge, if successful, may offer a reprieve for laboratories seeking to maintain their ethical independence; if it fails, it will likely lead to the consolidation of the American AI sector into a set of state-aligned national champions where ethical guardrails are treated as negotiable contractual terms rather than fixed architectural constraints.
The securitization of AI ethical frameworks has transformed the technology from a tool of productivity into a battleground for sovereign control. As the Department of War pursues a future of "violent effect" and "unfettered access," the standoff with Anthropic stands as the definitive moment when the "soul" of artificial intelligence was placed on the national security auction block.
-Assisted by AI-