OpenAI, the company behind ChatGPT, has called for the regulation of powerful artificial intelligence systems, arguing that oversight is needed to protect humanity from the risk of accidentally creating something with the power to destroy it.
In a brief note posted on the company's website, co-founders Greg Brockman and Ilya Sutskever and chief executive Sam Altman called for an international regulator to start "checking systems, asking for audits, testing for compliance with standards of security" in order to reduce the "existential risk" that such systems could present.
"It is conceivable that within the next 10 years, artificial intelligence systems will surpass the skill level of experts in most fields and perform as much productive activity as one of the largest corporations today," they write.
Researchers have been warning about the potential dangers of superintelligence for decades. They describe the risk as "catastrophic" and "existential" in the sense that humanity could lose the ability to govern itself and become wholly dependent on machines.