As the AI race continues at breakneck speed, the Biden Administration is officially seeking public comment on how apps like ChatGPT can be held accountable moving forward.
The US Commerce Department has announced it will spend the next 60 days exploring options to mitigate the technology's risks, including AI audits and risk assessments.
These developments come just two weeks after major tech figures like Elon Musk and Steve Wozniak called for an immediate pause on AI development. But with ChatGPT already attracting 25 million daily visitors, is the US government moving fast enough?
US Government Prepares to Create Rules for AI Technology
The Biden Administration is looking to create stricter measures for vetting AI tools like ChatGPT, as a growing number of tech companies strengthen their investments in machine learning technology.
Before rules are created, the National Telecommunications and Information Administration (NTIA) is reaching out to researchers, industry groups, and digital rights organizations for feedback on which “accountability mechanism” to apply to the technology.
There are many reasons why AI regulation has piqued the interest of US regulators. According to Alan Davidson, Administrator of the NTIA, safeguards around AI development would help the agency determine whether these tools are safe and effective, whether they produce “unacceptable levels of bias”, whether they spread disinformation, and whether they respect user privacy.
“We have to move fast because these AI technologies are moving very fast in some ways. We’ve had the luxury of time with some of those other technologies, this feels much more urgent.” – Alan Davidson, Administrator of NTIA
The agency doesn't consider these looming regulations contrary to innovation, either. “Good guardrails implemented carefully can actually promote innovation,” Davidson comments. “They let people know what good innovation looks like, they provide safe spaces to innovate while addressing the very real concerns that we have about harmful consequences.”
As Concerns Around AI Mount, Are US Regulators Moving Fast Enough?
This isn't the first attempt by the US government to place limits on AI development. Lawmakers rolled out more than 100 AI-related bills in 2021, covering a range of salient issues from data security to algorithmic governance.
Last October, President Biden introduced a “Blueprint for an AI Bill of Rights”. The blueprint outlined five principles that companies should consider when working with the technology: data privacy, the safety and effectiveness of systems, protections against algorithmic discrimination, the availability of human alternatives, and transparency around AI use.
However, while US lawmakers aren't standing idly by, they're moving more slowly than many European nations, which have been cracking down on GPT technology with far greater urgency.
The European Union is among the few jurisdictions already developing rules around the development and use of AI, and on April 2, Italy became the first Western nation to ban ChatGPT outright amid concerns over the platform's data security.
With ChatGPT and similar platforms still in their infancy, the full consequences of the AI explosion have yet to be realized. However, with a number of AI ethics groups already claiming that the technology harbors a “risk to public safety”, US regulation of its development can't come soon enough.