Government ramps up efforts to commandeer AI
A bipartisan agenda unveiled in Congress Friday aims to increase the federal government’s control over AI technology development.
Under the framework, not all companies would be permitted to develop AI technologies. A licensing scheme would ensure that only companies approved by an oversight body receive the requisite licenses to develop certain AI programs. This would apply to tech developers who employ facial recognition technology or any AI application considered “high risk.”
Obtaining a license would require companies to allow third-party audits of their technology and to meet certain testing requirements. They would also be required to disclose the training data used for AI products and be liable for harm caused.
Other parts of the legislative framework call for the government to safeguard “national security” interests in AI development, along with those of consumers and children.
The framework was introduced by Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), the chair and ranking member, respectively, of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law.
Senate Majority Leader Chuck Schumer (D-NY) has been holding closed-door AI briefings for lawmakers. On Wednesday, Schumer will host an “AI Insight Forum” attended by tech industry leaders including Tesla CEO Elon Musk, Meta CEO Mark Zuckerberg, Microsoft CEO Satya Nadella, and former Google CEO Eric Schmidt, among others.
These corporations have also pledged their “voluntary commitment” to a set of White House AI rules. One of the White House’s chief demands is that AI systems avoid “harmful bias and discrimination,” raising concerns that the government will continue censoring Americans to prevent “hate speech.” Another is that AI be used for certain purposes, such as “fighting climate change.”
Other rules put forth by the administration include “technical collaboration” between tech giants and government officials, as well as reporting on AI development. The corporations should also publish public reports on their AI technology’s “capabilities, limitations, and areas of appropriate and inappropriate use.” And when content is AI-generated, they should ensure it contains a watermark notifying users that it was created by AI.
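For illustration only, here is a minimal sketch of what such a provenance watermark could look like in practice, assuming generated images and the Pillow library; the tag names below are hypothetical, and real deployments (for example, C2PA content credentials, or statistical watermarks embedded in generated text) are considerably more involved.

    # Minimal sketch: attach an AI-provenance label to a generated PNG.
    # Requires Pillow (pip install Pillow); the tag names are hypothetical
    # illustrations, not part of any standard or mandated scheme.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def save_with_ai_watermark(image: Image.Image, path: str, model_name: str) -> None:
        """Write the image with PNG text chunks declaring AI provenance."""
        meta = PngInfo()
        meta.add_text("ai_generated", "true")   # hypothetical tag
        meta.add_text("generator", model_name)  # hypothetical tag
        image.save(path, pnginfo=meta)

    # Usage: a blank image stands in for real model output, then the
    # label is read back from the saved file's text chunks.
    generated = Image.new("RGB", (64, 64), "gray")
    save_with_ai_watermark(generated, "output.png", "example-model-v1")
    print(Image.open("output.png").text)  # {'ai_generated': 'true', 'generator': 'example-model-v1'}

A metadata label like this is trivial to strip, which is why much of the policy debate centers on watermarks that are harder to remove.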
In July, the White House secured commitments to its AI rules from Microsoft, Google, Meta, Amazon, OpenAI, Inflection AI, and Anthropic. They were joined by eight more corporations on Tuesday: Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI.
Legislation introduced in the Senate in May also aims to bring artificial intelligence under government control.
The Digital Platform Commission Act of 2023, sponsored by Senators Michael Bennet (D-CO) and Peter Welch (D-VT), would create a federal agency of “experts” with the power to govern artificial intelligence platforms down to their algorithms.
Without such regulation, says the bill, digital platforms produce “demonstrable harm” such as “abetting the collapse of trusted local journalism,” “disseminating disinformation and hate speech,” “radicalizing individuals to violence,” “perpetuating discriminatory treatment of communities of color and underserved populations,” “enabling addiction” and other maladies.
“We need to empower an agency that issues a license [to develop AI] and can take it away,” said Senator Lindsey Graham (R-SC), who has said more than once that Ukraine’s victory against Russia is “the most important thing in the world.” Graham concluded, “Wouldn’t that be some incentive to do it right if you could actually be taken out of business?”