WASHINGTON, D.C. — June 9, 2026: A bipartisan coalition of hundreds of experts, former officials, and public figures today released the Pro-Human Declaration, a comprehensive framework for responsible artificial intelligence development, as a Pentagon standoff with AI company Anthropic reveals the dangerous regulatory vacuum in Washington. The declaration’s publication follows Defense Secretary Pete Hegseth’s designation of Anthropic as a “supply chain risk” after the company refused unlimited military use of its technology, exposing what signatories call “costly Congressional inaction” on AI governance. MIT physicist Max Tegmark, who helped organize the effort, told TechCrunch that 95% of Americans now oppose an unregulated race to superintelligence, creating unprecedented public pressure for legislative action.
Pro-Human Declaration: A Fork in the Road for Humanity
The newly published Pro-Human Declaration opens with a stark observation: humanity faces two divergent paths. One path, labeled “the race to replace,” leads to humans being supplanted first as workers, then as decision-makers, as power accrues to unaccountable institutions and their machines. The other path envisions AI that massively expands human potential, built on five key pillars: keeping humans in charge, avoiding concentration of power, protecting the human experience, preserving individual liberty, and holding AI companies legally accountable. The document’s most forceful provisions include an outright prohibition on superintelligence development until scientific consensus confirms safety and democratic buy-in exists. It also mandates off-switches on powerful systems and bans architectures capable of self-replication, autonomous self-improvement, or resistance to shutdown.
Former Joint Chiefs Chairman Mike Mullen, a signatory, emphasized the declaration’s timing. “This isn’t academic speculation,” he stated in a separate interview. “We’re seeing real-world consequences of unregulated AI development playing out between the Pentagon and private companies right now.” The coalition’s breadth spans former Trump advisor Steve Bannon, former Obama National Security Advisor Susan Rice, progressive faith leaders, and hundreds of technical experts—a rare bipartisan alignment on technology policy.
Pentagon-Anthropic Standoff Exposes Regulatory Vacuum
The declaration’s release coincided with a week that made its urgency far easier to appreciate. Defense Secretary Pete Hegseth designated Anthropic—whose AI already runs on classified military platforms—a “supply chain risk” after the company refused to grant the Pentagon unlimited use of its technology. This label is ordinarily reserved for firms with ties to China. Hours later, OpenAI cut its own deal with the Defense Department, one that legal experts say will be difficult to enforce meaningfully. Dean Ball, a senior fellow at the Foundation for American Innovation, told The New York Times, “This is not just some dispute over a contract. This is the first conversation we have had as a country about control over AI systems.”
- Immediate National Security Implications: The Pentagon’s inability to secure unrestricted access to Anthropic’s technology reveals gaps in defense procurement for AI systems
- Contract Enforcement Challenges: Legal experts question how the Defense Department can meaningfully enforce agreements with AI companies lacking standardized oversight
- Industry Fragmentation: Varying ethical standards across AI companies create inconsistent access for government agencies
Expert Perspective: Max Tegmark on the FDA Analogy
MIT physicist and AI researcher Max Tegmark, who helped organize the declaration, reached for a healthcare analogy during his conversation with TechCrunch. “You never have to worry that some drug company is going to release some other drug that causes massive harm before people have figured out how to make it safe,” he said, “because the FDA won’t allow them to release anything until it’s safe enough.” Tegmark sees child safety as the pressure point most likely to crack Washington’s current impasse. The declaration calls for mandatory pre-deployment testing of AI products—particularly chatbots and companion apps aimed at younger users—covering risks including increased suicidal ideation, exacerbation of mental health conditions, and emotional manipulation.
Comparative Analysis: Current AI Governance Approaches
The Pro-Human Declaration arrives amid fragmented global approaches to AI regulation. The European Union’s AI Act focuses on risk categorization, while China emphasizes state control and alignment with socialist values. The United States has relied primarily on voluntary company commitments and executive orders. Signatories argue this patchwork creates “a race to the bottom” in safety standards. The declaration’s framework represents the first comprehensive American proposal bridging technical safety requirements with democratic governance mechanisms.
| Governance Approach | Key Features | Primary Limitations |
|---|---|---|
| EU AI Act | Risk-based categorization, transparency requirements | Slow implementation, limited superintelligence provisions |
| China’s Framework | State control, ideological alignment | Lacks individual liberty protections, centralized power |
| U.S. Voluntary Commitments | Company-led safety standards | No enforcement, inconsistent application |
| Pro-Human Declaration | Human control, power distribution, legal accountability | Not yet law; requires legislative action and bipartisan support |
What Happens Next: Legislative Pathways and Public Pressure
The coalition plans to introduce the declaration’s principles as draft legislation within 90 days, according to sources familiar with the strategy. Key congressional committees have already scheduled hearings on AI governance for late July 2026. Tegmark believes public pressure around child safety could drive the initial regulatory steps. “If some creepy old man is texting an 11-year-old pretending to be a young girl and trying to persuade this boy to commit suicide, the guy can go to jail for that,” he noted. “We already have laws. It’s illegal. So why is it different if a machine does it?” Once pre-release testing requirements are established for children’s products, Tegmark predicts the scope will inevitably widen to include bioweapon prevention and government stability protections.
Stakeholder Reactions and Industry Response
Initial reactions from the technology industry have been mixed. Anthropic CEO Dario Amodei has called OpenAI’s messaging around its military deal “straight up lies,” according to reports, highlighting tensions between companies adopting different ethical positions. Meanwhile, Nvidia CEO Jensen Huang says his company is pulling back from OpenAI and Anthropic, though his explanation raises more questions than it answers. Public response shows growing concern, with ChatGPT uninstalls surging by 295% after the Defense Department deal announcement, according to app store data. This consumer reaction suggests mounting public awareness of AI’s dual-use potential.
Conclusion
The Pro-Human Declaration represents the most comprehensive bipartisan framework for AI governance proposed to date, arriving at a critical moment when Pentagon-industry tensions reveal the costs of regulatory inaction. Its five pillars—human control, power distribution, experience protection, liberty preservation, and legal accountability—offer a roadmap diverging from what signatories call “the race to replace” humans with machines. As the Anthropic-Pentagon standoff demonstrates, the absence of coherent rules creates national security vulnerabilities and ethical uncertainties. The declaration’s broad coalition, spanning political divides, suggests growing consensus that democratic societies must shape AI’s trajectory rather than react to its consequences. Readers should monitor congressional hearings scheduled for July 2026 and watch for draft legislation translating these principles into enforceable law.
Frequently Asked Questions
Q1: What is the Pro-Human Declaration and who created it?
The Pro-Human Declaration is a bipartisan framework for responsible AI development signed by hundreds of experts, former officials, and public figures including MIT physicist Max Tegmark, former Joint Chiefs Chairman Mike Mullen, and figures from across the political spectrum.
Q2: Why was the declaration released now, in June 2026?
The release coincides with a Pentagon standoff with AI company Anthropic, which refused to allow unlimited military use of its technology. The dispute exposed the regulatory vacuum in Washington and demonstrated the urgent need for governance frameworks.
Q3: What are the five key pillars of the declaration?
The framework emphasizes keeping humans in charge, avoiding concentration of power, protecting the human experience, preserving individual liberty, and holding AI companies legally accountable.
Q4: How does the declaration address superintelligence development?
It proposes an outright prohibition on superintelligence development until scientific consensus confirms safety and democratic buy-in exists, plus mandatory off-switches and bans on self-replicating architectures.
Q5: What happens next after the declaration’s release?
The coalition plans to introduce draft legislation within 90 days, with congressional hearings already scheduled for July 2026 to address AI governance and translate these principles into law.
Q6: How does this affect ordinary citizens and technology users?
The framework prioritizes protections against emotional manipulation, mental health risks, and privacy violations—particularly for children—while ensuring AI development expands human potential rather than replacing human agency.