Experts in tech sector, scientists and professors issue new warning on AI

01.06.2023

Experts in the tech sector, scientists, and professors issued a fresh warning on the risks associated with artificial intelligence.

Mitigating the risk of extinction from AI should be a worldwide priority alongside other risks such as pandemics and nuclear war, the Center for AI Safety said.

While experts in the field, policymakers, journalists and the public are increasingly discussing risks from the technology, it can be difficult to voice concerns about some of advanced AI's most severe risks. The succinct statement seeks to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who take those risks seriously.

Among the signatories are Google DeepMind CEO Demis Hassabis, OpenAI CEO Sam Altman, Geoffrey Hinton, often called a godfather of AI, and environmentalist Bill McKibben.

The hundreds who signed the statement are professors, researchers and people in positions of leadership, as well as the singer Grimes.

More than 1,000 researchers and tech leaders, including billionaire Elon Musk, signed a letter earlier this year that called for a six-month pause on training systems more advanced than OpenAI's GPT-4, saying such systems pose profound risks to society and humanity. Altman has said the letter calling for a moratorium was not the optimal way to address the issue. In a blog post last week, he and company leaders said AI needs an international watchdog to regulate future superintelligence.

In that post, the authors predicted that AI systems will surpass experts' skill levels in most domains within the next decade.

Hinton has said artificial general intelligence may be just a few years away, warning in an interview with NPR that it is not possible to stop the research.

If the research doesn't happen here, it will happen in China, he said.