As of this writing, more than 1,100 people, including Elon Musk, Steve Wozniak, and Tristan Harris of the Center for Humane Technology, have signed an open letter that calls on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” Says the letter:
Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.
It goes on to argue that there is a “level of planning and management” that is “not happening” and that instead, in recent months, unnamed “AI labs” have been “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
The letter’s signers say the pause they are asking for should be “public and verifiable, and include all key actors.” If such a pause “cannot be enacted quickly, governments should step in and institute a moratorium,” the letter continues.
We’re still digesting this one (while others are already tearing it to shreds).
Certainly, it’s as interesting for the people who have signed — a group that includes some engineers from Meta and Google; Stability AI founder and CEO Emad Mostaque; and people not in tech, including a self-described electrician and an esthetician — as for those who have not. No one from OpenAI, the outfit behind the large language model GPT-4, signed this letter, for example. No one from Anthropic, whose team spun out of OpenAI to build a “safer” AI chatbot, did either.
In the meantime, you can read it in full here.