AI has developed rapidly, and largely unchecked, over the last five years. AI researchers and developers do not have a complete and reliable understanding of how these systems work; in fact, AI has a tendency to learn odd behaviors from its training data. Since the public release of ChatGPT by OpenAI, there has been a flood of generative AI tools.
Generative AI could put us on the path to mass misinformation, sophisticated deepfakes, and a flood of toxic content. While it is established that AI is not self-aware, it can be extremely dangerous in the wrong hands.
The who's who of the AI industry have issued a one-sentence statement confirming the fears around AI voiced earlier by the likes of Elon Musk and Steve Wozniak. The statement reads,
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Some of the key signatories are Sam Altman, the chief executive of OpenAI; Demis Hassabis, the head of Google DeepMind; and Geoffrey Hinton and Yoshua Bengio, two of the researchers known as the "Godfathers of AI."
Interestingly, Geoffrey Hinton, who played a pivotal role in the advancement of AI by helping create a breakthrough neural network for object recognition back in 2012, recently resigned from his position as a VP at Google to openly voice his concerns about AI. He thinks we will "not be able to know what's true anymore," and he is certain that AI will not just eliminate drudge work but render thousands of jobs across industries irrelevant.
Amid the generative AI race between companies like OpenAI, Google, and Microsoft, US President Joe Biden said, "tech companies have a responsibility, in my view, to make sure their products are safe before making them public… AI can help deal with some very difficult challenges like disease and climate change, but it also has to address the potential risks to our society, to our economy, to our national security."