Elon Musk’s AI letter sparks firestorm, some fake signatures

A letter signed by Elon Musk and thousands of others demanding a pause in artificial intelligence research has created a firestorm after the researchers cited in the letter condemned its use of their work, some signatories were revealed to be fake, and others backed out of their support.

On 22 March, more than 1,800 signatories, including Musk, the cognitive scientist Gary Marcus and Apple co-founder Steve Wozniak, called for a six-month pause on the development of systems more powerful than GPT-4. Engineers from Amazon, DeepMind, Google, Meta and Microsoft also lent their support.

GPT-4, developed by OpenAI, a company co-founded by Musk and now backed by Microsoft, can hold human-like conversations, compose songs and summarise lengthy documents. Such AI systems with "human-competitive intelligence" pose serious risks to humanity, according to the letter.

The letter said that AI labs and independent experts should use the pause to develop and implement a set of shared safety protocols for advanced AI design and development, "rigorously audited and overseen by independent outside experts".

The Future of Life Institute, the thinktank that coordinated the effort, cited 12 pieces of research from experts including university academics as well as current and former employees of OpenAI, Google and its subsidiary DeepMind. Four of the experts cited in the letter, however, have condemned the use of their research to support such claims.

When initially launched, the letter lacked verification protocols for signing and racked up signatures from people who did not actually sign it, including Xi Jinping and Meta's chief AI scientist Yann LeCun, who clarified on Twitter that he did not support it.

Critics accused the Future of Life Institute (FLI), which is primarily funded by the Musk Foundation, of prioritising imagined apocalyptic scenarios over more immediate concerns about AI, such as racist or sexist biases being programmed into the machines.

Among the research cited was "On the Dangers of Stochastic Parrots", a well-known paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google. Mitchell, now chief ethical scientist at the AI firm Hugging Face, criticised the letter, saying it was not clear what would count as "more powerful than GPT-4". "By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI," she said. "Some of us don't have the privilege of not doing active harms right now."

Her co-authors Timnit Gebru and Emily M Bender criticised the letter on Twitter, with the latter branding some of its claims as "unhinged".

Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, also took issue with her work being mentioned in the letter. She co-authored a paper last year arguing that the widespread use of AI already posed serious risks.

Her research found that the use of AI systems could influence decision-making in relation to climate change, nuclear war, and other existential threats.

She told Reuters: "AI does not need to reach human-level intelligence to exacerbate those risks. There are non-existential risks that are really important, but don't get the same Hollywood attention."

Asked to comment on the criticism, FLI president Max Tegmark said that both the short-term and long-term risks of AI should be taken seriously. "If we cite someone, it means we claim that they are endorsing that sentence. It doesn't mean they endorse the letter, or that we endorse everything they think," he told Reuters.