As investment and use of AI-powered platforms like ChatGPT continue to skyrocket, another open letter has been released begging the tech industry to consider the risks before further unleashing this technology on the world.
A few short months ago, tech professionals like Elon Musk and Steve Wozniak penned an open letter asking for a six-month pause on the development of generative AI platforms like ChatGPT.
Now, even more tech professionals have gotten on board to pen another letter that takes a more serious tone about the threat AI poses to life itself.
Statement on AI Risk
The open letter, titled Statement on AI Risk, is as short as it is powerful, with its brevity intended to make a difficult topic easier to discuss. Here is the entirety of the statement:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Considering the world has just gone through a devastating pandemic and continues to live under the threat of nuclear war, this statement is nothing if not a forceful call for greater regulation of AI technology. It's even more concerning when you see who signed it.
Who Signed the Open Letter?
While the statement in the open letter has a gravitas all its own, the real story is who signed it. It's a veritable who's who of the tech industry, and more specifically, of the people involved in developing the AI technology in question. Here's a list of some of the major signatories of this open letter.
- Demis Hassabis (CEO of Google DeepMind)
- Sam Altman (CEO of OpenAI, makers of ChatGPT)
- Bill Gates (Former CEO of Microsoft)
- Ilya Sutskever (Co-Founder and Chief Scientist at OpenAI)
- Shane Legg (Co-Founder of Google DeepMind)
Suffice it to say, the industry is taking this threat seriously, but clearly more needs to be done.
The Risks of Generative AI
There's a lot of talk out there about the risks of generative AI platforms like ChatGPT, but is it actually that bad? The problem with this technology is that it's so powerful and so new that its potential is virtually unlimited. As a result, a wrong turn in its development could cause some serious problems. After all, it's not like we knew in 2008 that Facebook was going to become a source of misinformation on a massive scale.
As for the specifics, AI carries a number of potential risks that need to be considered before further development. For one, AI could destabilize the global economy, with some estimates suggesting that more than 80% of current jobs could be affected, many of them disproportionately held by women. On top of that, generative AI has already fueled a wide range of new AI scams that seek to steal money and information from users.
All that to say, the risks of AI are very real, and if we have any hope of making it work for us rather than against us, it's important to take the threats and recommendations seriously before it's too late.