Some of the world’s leading tech figures are calling for artificial intelligence labs to halt training of powerful AI systems for at least six months, due to “profound risks to society and humanity.”
Elon Musk was among the dozens of tech titans, professors, and researchers who signed a letter published by the Future of Life Institute, a nonprofit backed by Musk.
Two weeks prior, OpenAI had unveiled GPT-4, an even more potent version of its popular AI chatbot tool, ChatGPT. In early tests and a company demo, the technology was shown drafting lawsuits, passing standardized exams, and creating a functioning website from nothing more than a hand-drawn sketch.
The letter stipulated that the pause should extend to AI systems “more powerful than GPT-4,” and independent experts should use the proposed break to collaborate on developing and implementing a set of shared protocols for AI tools that are secure “beyond any reasonable doubt.”
Artificial intelligence experts have become increasingly concerned about AI tools’ potential for biased responses, spreading misinformation and impacting consumer privacy. These technologies also raise questions around how AI could upend professions, allow students to cheat, and alter our relationship with technology.
The letter illustrates the growing unease, both inside and outside the industry, with the rapid pace of AI advancement. Some governments, including those of China, the European Union, and Singapore, have already introduced early versions of AI governance frameworks.