The Real Battle Over AI Isn’t About Technology — It’s About Power
When artificial intelligence becomes infrastructure, the line between corporate power and state authority begins to blur.
The Pentagon has declared Anthropic a “supply-chain risk,” barring the company from government contracts with the Department of War. The move raises a pressing question about who truly sets the rules for artificial intelligence: the companies that create it or the governments that deploy it.
The dispute between the Pentagon and Anthropic developed over a clause that limited the government’s use of the company’s AI for mass surveillance and autonomous weapons. That limitation runs directly against the government’s position that the military should be able to use technology for any lawful purpose.
Anthropic’s CEO, Dario Amodei, has said plainly that the technology, as it currently stands, is not ready for government use. Only once the correct safety protocols are in place, and the technology is deemed safe for military applications, would the company consider it ready.
Amodei has also argued that Congress should develop laws and regulations that address the new technology and the complexity it presents to society and governments alike. Absent such regulations, there are no checks on power and no legal means of ensuring accountability as AI increasingly shapes decisions at the military level, as Anthropic’s AI reportedly did in both the strikes on Iran and the capture of former President Maduro.
The Trump administration has been unwilling to accept a private company dictating how the government may use a technology central to national defense. The argument has precedent across a multitude of inventions, from the atomic bomb to the internet: the United States government has always claimed the right to decide how innovation shapes society as a whole.
However, this moment feels less about technological advancement and integration and more about politics. Anthropic’s defense contracts were signed in 2024 under the Biden administration and extended once the Trump administration took office. As the threat of war in the Middle East ramped up, the administration needed partners who would further its interests in an unsanctioned war. Anthropic refused to participate.
Interestingly, OpenAI’s CEO, Sam Altman, was initially more than prepared to aid the Trump administration and fill the gap in the market, to be the company that would sanction mass surveillance and autonomous weapons. Public pressure, however, forced him to reconsider the partnership, and he later admitted the move was perceived as “opportunistic and sloppy.” The business decision became a public relations failure and created fresh friction with the Trump administration.
It is a striking turn for AI companies that have long cultivated close ties with the Trump administration. They were present when Trump committed $500 billion to AI investment, and Altman’s presence was especially visible. Over a year later, those same relationships continue to be tested.
As for the designation itself, labeling Anthropic a “supply-chain risk” is unprecedented: it is the first American company to be assigned that status. The designation, however, applies only to the Department of War, formerly the Department of Defense; Anthropic’s commercial business and relationships remain valid. A deep legal battle will likely develop in the coming months.
It has caused some to speculate that when a government no longer agrees with the political leanings of an integrated AI system, or with the ethical standards and guidelines of the company that owns it, the government may attempt to unilaterally remove that system and install a more politically aligned provider.
This may be the first iteration of an issue governments around the world will soon face. Anthropic’s framing of itself as the more ethical, principled, and well-intentioned AI company has not held strong against a government that prioritizes its agenda above all else. The contest has no clear winner.
The greatest threat, as AI companies integrate into government departments and gain access to more information, is that they will grow more powerful than governments themselves, holding both insight into and understanding of the processes and biases that shape how the machinery of state operates.
More threatening still, these companies will hold substantial amounts of private and public data concerning government operations, with the capacity to overhaul or override government systems if they see fit. To some degree, Anthropic’s contract with the government is believed to grant exactly that access.
On a more hypothetical note, if the CEO of an AI company were directly opposed to a newly elected government, conflict could erupt after an election. Should the guiding principles of the president and the CEO prove irreconcilable overnight, havoc could ensue, or key systems and infrastructure could be shut down.
AI is becoming not only the infrastructure governments depend on to optimize their functioning, enabling processes once deemed impossible for lack of human capital or impractical for lack of resources, but the very system they depend on to run the state, protect their borders, and, sadly, wage war on other countries. It is no longer a technology that will merely advance society but one that will define it.
This moment suggests that regulations and solutions may not arrive in time, as the technology’s influence grows more visible by the day. Countries will come to understand the impact of AI on elections, public support, media, and democracy long after AI has embedded itself within humanity.
The world currently sits in a strange paradox in which world leaders and corporate leaders appear at once trustworthy and self-interested. They are neither heroes nor villains, but individuals trying to make sense of a technology that continues to challenge our assumptions about the rules governing society. They, too, seem to be searching without a clear answer.
The coming months will be defined by whichever story the majority comes to believe when all is said and done. Amodei may be seen as the moral voice and leader of the AI world, or the government, in a time of war, may convince the world that it is best positioned to ensure care and safety. Or it may end with no hero at all, and society holding the failure collectively.
History seems to be repeating itself. Social media now faces growing regulation around the world: technology CEOs are forced to account for themselves in public hearings, and policy continues to be developed to curb the platforms’ influence on children. But it comes well after the damage has been done, after loneliness, anxiety, and depression have taken root among a generation of adolescents. The world responded far too late.
Now, faced with AI, unaware of its consequences, its influence on society, or the symptoms that will unite and bind the next generation, the world cannot afford to let politics blind it to the true opportunity, and the true threat, that AI represents.
The truth of the matter is that AI must be regulated by everyone: consumers, creators, and governments alike. It affects everyone, and it will determine whether world peace remains an aspiration or becomes a reality.
It is we, not our prompts, who make us whole.


