Government inches closer to controlling AI superintelligence

Silicon Valley is poised to develop an AI superintelligence that is likely to be commandeered by the federal government.

Artificial general intelligence (AGI) refers to AI that could theoretically match or outperform humans at any task; superintelligence goes a step further, surpassing human capabilities altogether.

On Sunday, OpenAI CEO Sam Altman said his company has figured out how to build AGI and is now setting its sights on superintelligence.

“We are now confident we know how to build AGI as we have traditionally understood it,” Altman wrote in a personal blog post. “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.”

“We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word,” he continued. “We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.”

Altman added that OpenAI, the maker of ChatGPT, cannot be a “normal company.”

“This sounds like science fiction right now, and somewhat crazy to even talk about it. That’s alright—we’ve been there before and we’re OK with being there again. We’re pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important. Given the possibilities of our work, OpenAI cannot be a normal company.”

Altman begged the government to regulate AI

Since the rollout of ChatGPT, Altman has expressed a willingness to work closely with the federal government on AI development. He has even urged lawmakers to implement a licensing system in which the government decides which companies may develop AI products, subjecting those companies to federal regulation, scrutiny, and control.

“It is vital that AI companies — especially those working on the most powerful models — adhere to an appropriate set of safety requirements, including internal and external testing prior to release and publication of evaluation results,” Altman told the Senate Judiciary Committee. “To ensure this, the U.S. government should consider a combination of licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities, alongside incentives for full compliance with these requirements.”

A licensing system would be a boon for Altman but a barrier to competitors, entrenching a handful of government-approved firms as a powerful technocracy allied with the federal government. Altman signaled he knew this when he acknowledged that while there will be many machine-learning models, “there will be a relatively small number of providers that can make models at the true edge.”

Lawmakers were pleased with Altman’s proposal.

“We need to empower an agency that issues a license and can take it away,” agreed Senator Lindsey Graham (R-SC). “Wouldn’t that be some incentive to do it right if you could actually be taken out of business?”

“Clearly that should be part of what an agency can do,” Altman responded.

If the federal government controls who may build AGI and under what conditions, it will wield significant control over AI superintelligence.

Government seeks total control of AI

The federal government has moved aggressively to assert control over AI. In October 2023, Joe Biden declared that “we need to govern this technology and there’s no other way around it” before signing an executive order requiring companies developing powerful AI models to report their work to the federal government and comply with government standards.

Billionaire tech investor Marc Andreessen says the Biden-Harris administration has been blocking AI startups because it seeks total control of AI technology.

Andreessen told journalist Bari Weiss of The Free Press last month that he had “horrifying” meetings with Biden officials in May that led to his decision to endorse Donald Trump for president.

“[They told me] AI is a technology basically that the government is going to completely control,” he recalled. “This is not going to be a startup thing. They actually said flat out to us, ‘Don’t . . . do AI startups. Don’t fund AI startups. It’s not something that we’re going to allow to happen. They’re not going to be allowed to exist. There’s no point.’”

According to Andreessen, the officials said the White House intended to consolidate the AI industry into just two or three megacorporations, all under government control. When Andreessen asked how that was possible given that the science behind AI is publicly available, the officials replied that the government would control the science.

“They literally said, ‘During the Cold War, we classified entire areas of physics and took them out of the research community and like entire branches of physics basically went dark and didn’t proceed. And . . . if we decide we need to, we’re going to do the same thing to the math underneath AI,’” the investor said.