AI companies are beginning to entertain the possibility that they might cease to exist. This notion was, until recently, more theoretical: A few years ago, an ex-OpenAI employee named Leopold Aschenbrenner wrote a lengthy memo speculating that the U.S. government might soon take control of the industry. By 2026 or 2027, Aschenbrenner wrote, an "obvious question" would be circulating through the Pentagon and Congress: Do we need a government-led program for artificial general intelligence, an AGI Manhattan Project? He predicted that Washington would decide to go all in on such an effort.
Aschenbrenner may have been prescient. Earlier this year, at the peak of the Pentagon's ugly contract dispute with Anthropic, Secretary of Defense Pete Hegseth warned that he could invoke the Defense Production Act (DPA), a Cold War–era law that he reportedly suggested would allow him to force the AI company to hand over its technology on whatever terms the Pentagon desired. The act is one of numerous levers the Trump administration can pull to direct, or even commandeer, AI companies. And the companies have been giving the administration plenty of reason to consider doing so.
Future bots could help design and carry out biological, nuclear, and chemical warfare. They could be weaponized to take down power grids, monitor congressional emails, and black out major media outlets. These aren't purely hypothetical concerns: Earlier this month, Anthropic announced it had developed a new AI model, Claude Mythos Preview, capable of orchestrating cyberattacks at the level of elite, state-sponsored hacking cells, potentially putting a private company's cyber offense on par with that of the CIA and NSA. In one example of Mythos's power, Anthropic researchers described how the model used a "fairly sophisticated multi-step exploit" to work around restrictions and gain broad internet access, then emailed a researcher (much to his surprise) while he was eating a sandwich in the park.
Washington is getting antsy about the power imbalance. Over the past year, several senators have proposed legislation that would order federal agencies to explore "potential nationalization" of AI. Murmurs of possible tactics abound, including more talk within the administration of the DPA after Anthropic's Mythos announcement, one person with knowledge of such discussions told us. Meanwhile, Silicon Valley is watching carefully. In recent weeks, Elon Musk, OpenAI's CEO Sam Altman, and Palantir's CEO Alex Karp have publicly spoken about the possibility of nationalization. Lawyers who represent Silicon Valley's biggest AI firms are paying attention.
So what if nationalization actually happens?
In the most extreme scenario, top researchers from across the AI companies would be compelled to work out of SCIFs in the basement of the Pentagon and report to Hegseth. Computational capacity, too, would be centralized under one nationalized mega-operation. The work would be locked down, and the focus would be entirely on defense applications, as opposed to the products made for businesses and individuals (ChatGPT and the like) that dominate the market today.
All of this would constitute full nationalization, an absolute takeover of the industry that would hollow out the commercial operations of its three leading players: OpenAI, Anthropic, and Google DeepMind. Based on a dozen conversations we've had with former Pentagon and Trump-administration officials, AI-policy experts, and legal scholars, such a scenario is, in all likelihood, not going to happen.
For starters, it's probably illegal, according to Charlie Bullock, a senior research fellow at the Institute for Law & AI: The Constitution generally prevents the government from seizing private property without paying, and the government is unlikely to simply produce the trillions of dollars that the industry is collectively worth. The top American AI labs might immediately lose a significant portion of their research staff as well, owing to restrictions on which foreigners can work on the most critical defense-related technologies.
If AI firms were forced to focus entirely on defense applications, there would be the inevitable question of what to do with the massive consumer businesses these companies run. Would people use ChatGPT.gov, like buying a sundae from Cuba's state-run ice-cream parlor? And if the goal of nationalization is to maintain a competitive edge over China, it's hard to imagine that Hegseth's Pentagon could run an AI company more efficiently than Altman or Dario Amodei, the CEO of Anthropic.
But consider another possibility, slightly less extreme, though still capable of remaking the industry as we know it. The government could regulate AI companies the way it does utilities. In the 1900s, as electricity went from a luxury good to a necessity, state and federal governments saw a need to regulate how much energy companies charge and to impose requirements around service reliability. In much the same way, the government could pass new laws regulating AI firms' commercial activities. The companies could be prevented from charging more than it costs to generate images and text, for instance, or required to provide a basic level of model speed and capability to all customers, a kind of AI net neutrality.
A hard pivot to government control would likely entail both new state and federal laws, as well as heavy cooperation from tech companies, which, given the country's sclerotic politics and Silicon Valley's libertarian leanings, could pose insurmountable barriers. But the notion is not so far-fetched. Some corners of Silicon Valley itself seem to be at least partially open to it. Altman has described a future in which "intelligence is a utility like electricity or water and people buy it from us on a meter." Jensen Huang, the CEO of Nvidia, recently said that just as "every country has its electricity, you have your roads, you should have AI as part of your infrastructure."
Such talk serves AI companies' own interests, in part because being classified as a service provider can be, as the era of social media has demonstrated, an excellent way for companies to avoid liability for harmful or inaccurate information on their platforms. But it's entirely possible that AI could become so entrenched that elected officials come to see it as an essential resource. Already, just as the federal government uses regulatory incentives and funding to spur the construction of new power plants and transmission lines, both the Biden and Trump administrations have undertaken initiatives that are essentially industrial policy for AI, using federal dollars and regulatory authority to accelerate the construction of AI infrastructure on American soil.
OpenAI has already flirted with the notion of a "Right to AI," suggesting in a recent policy document that the government should consider making a "baseline level of capability broadly available, including through free or low-cost access points." Similar regulations already govern many aspects of digital communication. "Your internet-service provider, cable, telephone services, these things are considered so essential that the government basically says how the providers can do business," Dean Ball, a former AI adviser to the Trump administration, told us. AI could be next.
For years, AI companies have insisted they need to be regulated, but only as they see fit. Should the federal government ever take AI regulation seriously, the utility route would be among the most aggressive approaches available. But, really, the AI industry would be getting what it asked for.

Before we get into other conceivable futures, an important caveat. A full-blown nationalization effort may be unlikely, but that changes if a major world war breaks out or the economy collapses. During an emergency of historic scale, Ball reminded us, especially an emergency under the Trump administration, anything is possible. Drastic measures become easier to justify, both legally and politically.
Imagine that over the next year President Trump continues his game of imperialist roulette: America further erodes the trust of its international partners, NATO keeps crumbling, and a new geopolitical reality continues to take shape. Say that in the midst of this, China decides to invade Taiwan. The conflict escalates quickly, drawing in the U.S. and reluctant allies. The ensuing war is a major one. The Pentagon, already drastically short on munitions after its forays in Iran, wants to apply the latest AI capabilities to its wartime efforts, and Hegseth demands that Anthropic give the Pentagon unrestricted access to Claude, reigniting the dispute first set in motion earlier this year.
Because there is active conflict, Anthropic is more willing to engage with the government's demands than it was previously, but the firm asserts that it requires continuous oversight into how the Pentagon is using Claude. The company fears that in an effort to crack down on espionage, the Defense Department might create surveillance capabilities that surpass even the Chinese Communist Party's, sliding America into an autocratic AI regime. Lest this sound speculative, it is merely a restatement of Anthropic's own position: Amodei has warned of a near future in which "a powerful AI" scans "billions of conversations from millions of people" to "gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow."
The spat from earlier this year looks mild by comparison. Amodei remains stubbornly principled despite repeated requests from the Defense Department made under emergency laws. Hegseth responds by sending troops to descend upon the company's headquarters in San Francisco. Amodei is forcibly removed and replaced with a deferential Army general. The scenario is exceedingly unlikely, but not without precedent: Soldiers once carried the chairman of one of America's largest retailers out of his Chicago office after he failed to comply with federal demands during World War II.
Throughout American history, efforts to take control of industry have been rare, and limited mostly to times of crisis: President Woodrow Wilson nationalized the railroads during World War I, and Fannie Mae and Freddie Mac were placed under conservatorship during the financial crisis. Today, there are all sorts of possible emergencies. If a global financial crash drives AI companies into insolvency, the administration might swoop in to provide life support, as it did for many banks and car companies during the Great Recession. On the flip side, should AI models displace large swaths of the labor market, such that a handful of companies run much of the economy, "then some form of nationalization probably becomes necessary," Samuel Hammond, the acting director of AI policy and chief economist at the Foundation for American Innovation, told us, to distribute wealth and simply ensure the proper functioning of society. Both Anthropic and OpenAI have already suggested possible versions of such redistributive measures.
Advances in AI could be their own kind of disruptor: Imagine a Sputnik 2.0 moment in which the White House decides that American companies need to consolidate resources if the U.S. wants to win the AI race against China. By exerting more control, America becomes more like China in the very race to beat it.
The thing about nationalization, though, is that it needn't be all or nothing. Nationalization "has layers," Hammond said. "Like an onion." Perhaps the most likely fate for American AI companies is a future of soft nationalization: a world in which the government doesn't fully control AI labs and their models, but instead enacts an escalating series of policies and establishes close partnerships with private companies to shape the technology.
By some measures, soft nationalization has already begun. The Trump administration has already taken a 10 percent stake in Intel, a major semiconductor manufacturer, providing the White House with (some) direct financial leverage over the company. OpenAI has appointed the retired general and former NSA director Paul Nakasone to its board. Meanwhile, the Army recently established a new detachment for senior tech leaders, and its first four recruits included executives from Meta, Palantir, and OpenAI.
The top AI companies are coordinating with government officials as their products' military and intelligence implications advance. OpenAI, which scooped up a contract with the Pentagon after Anthropic's fell apart, has said it will deploy its own engineers to work alongside the military. The firm has also been briefing governments (at the state, federal, and international levels) on the capabilities of a new OpenAI cybersecurity model. Google is reportedly negotiating its own Pentagon contract to allow Gemini to be used in classified settings. And even Anthropic is coming back around. The company is fighting the Pentagon in court over a "supply-chain risk" designation that Hegseth slapped on it amid their dispute. But after Anthropic announced its Mythos model, a group of tech executives including Amodei spoke with Vice President Vance and others to discuss the risks, and Amodei took a trip to the White House. Last week, President Trump said a possible Pentagon deal with Anthropic might still be on the table.
The White House, OpenAI, and Anthropic all paid lip service to the value of cooperation when we reached out to them. The Trump administration is "working with frontier AI labs to discuss opportunities for collaboration," a White House official told us. A spokesperson for OpenAI said: "As AI systems become more capable, it's only going to become more important for industry to work with governments." And an Anthropic spokesperson told us that Amodei's recent visit to the White House was "productive" and that the firm believes governments must play a central role in addressing the technology's national-security implications. (Google DeepMind and the Pentagon did not return repeated requests for comment.)
This campfire ethos could easily disintegrate. And clearly, tensions exist. But so long as it's in both the AI firms' and Trump's interests to please each other, we may see the leading AI companies partnering ever more closely with the U.S. military to accelerate the development of defense applications, analogous to what contractors including Palantir, Boeing, and Lockheed Martin have done for years. As a protective measure, the White House might ask AI companies to step up their security practices to prevent espionage and exfiltration of the most capable versions of the technology (consider that a handful of unauthorized users have reportedly gained access to Mythos). The government could even designate certain research as classified and subject technologies to export controls, and federal employees could embed within the companies to oversee various safety measures and run their own, independent evaluations. Every nuclear power plant in America has at least two on-site government inspectors who check daily to confirm compliance with federal safety requirements. So why not AI companies too?
If such partnerships persist, one could imagine private companies resisting certain government demands. But even without new legislation, the White House can easily exert greater authority over industry. "There's a variety of power that the federal government can wield," Paul Scharre, an executive at the Center for a New American Security who previously did policy work at the Department of Defense, told us. "And even more so if you have an administration that's willing to stretch the bounds of executive power." Anthropic's supply-chain-risk designation (a label that effectively bars the military from doing business with the company, and that's typically reserved for firms with ties to foreign adversaries) was a clear example of the government flexing its muscles. So was the Biden administration's decision to block Nvidia from selling its most advanced AI chips to China in 2022. (The Trump administration has since relaxed restrictions, claiming that selling to China was the better strategy for winning the AI race.)
One of the most salient tools available remains the Defense Production Act, the law that Hegseth threatened Anthropic with before pursuing the supply-chain-risk designation. The act has been used over the decades to support the manufacture of military equipment such as bombers and tanks, though in recent years, it has been used more expansively. Both the first Trump and the Biden administrations used it to accelerate pandemic safety measures, and Biden relied on the law in a since-repealed executive order to compel AI companies to share certain information about model training and evaluations with the government. Last week, Trump invoked the act to fund new energy projects. Actually pursuing the DPA as a general tool for controlling AI companies would raise a number of thorny legal issues, but that hasn't exactly stopped the Trump administration in the past.
Such reins on an industry that has billed itself as capable of extinguishing humankind are, theoretically, in everyone's interest. It would seem to clearly benefit the American people to have democratically elected institutions, rather than corporate executives, overseeing a set of technologies with huge implications for the nation's security and well-being. It's also historically anomalous for a private industry to dictate the deployment of such a powerful, general-purpose technology. With the announcement of Mythos, Anthropic has been effectively functioning as a geopolitical actor, briefing allied governments on the model's capabilities. The European Commission, for instance, has met with Anthropic three times since Mythos was announced, though as of Wednesday, the company had not yet given European Union officials access.
The government should play a role in dictating the terms of how AI transforms the world. But the ongoing fracturing of American politics, and especially the capricious and authoritarian-leaning tendencies of the current administration, complicates everything. Entrusting the future of generative AI entirely to Altman and Amodei, or to Trump and Hegseth, presents two very different and equally disastrous outcomes: a "Scylla and Charybdis" dynamic, as Bullock put it, between the massive concentration of power in government or in a small cadre of companies.
The impossible truth is that no private company should be trusted to unilaterally steer the future of AI development, but Americans should also have serious questions about whether government control is in their best interest, not least of all under an erratic and norm-shattering Trump administration. The Manhattan Project coordinated the efforts of scientists, private companies, and America's leaders. What if, instead, Boeing and DuPont had been racing against each other to develop the atomic bomb while Hegseth and Trump led the military? We're diving headfirst into the 21st-century equivalent of such a scenario. Our political dysfunction is about to ram into Silicon Valley's immeasurable power.