Harry and Meghan Align With AI Pioneers in Demanding Ban on Superintelligent Systems
Prince Harry and Meghan Markle have joined forces with AI experts and Nobel Prize winners to advocate for a complete ban on creating artificial superintelligence.
Harry and Meghan are among the signatories of a powerful statement that calls for “a prohibition on the creation of superintelligence”. Superintelligent AI refers to artificial intelligence that could exceed human intelligence in every intellectual area, though such systems have not yet been developed.
Key Demands in the Declaration
The statement says the prohibition should remain in place until there is “widespread expert agreement” that superintelligence can be built “safely and controllably” and until “strong public buy-in” has been secured.
Prominent signatories include the AI pioneer and Nobel Prize recipient Geoffrey Hinton, along with a fellow “godfather” of modern AI; Apple co-founder Steve Wozniak; a UK entrepreneur and founder of Virgin; a former US national security adviser; a former head of state; and a British author and public intellectual. Other Nobel laureates who signed include Beatrice Fihn, a physics Nobelist, an astrophysicist and the economist Daron Acemoğlu.
Behind the Movement
The statement, aimed at national leaders, technology companies and lawmakers, was organized by the Future of Life Institute (FLI), a US-based AI safety group. In recent years the FLI called for a pause in the development of powerful AI systems, shortly after the launch of conversational AI tools made artificial intelligence a worldwide public talking point.
Industry Perspectives
In July, Mark Zuckerberg, chief executive of Facebook’s parent company Meta, one of the leading AI developers in the US, said the development of superintelligence was “approaching reality”. However, some experts have suggested that talk of ASI reflects competitive positioning among tech companies, which are investing enormous sums in artificial intelligence this year alone, rather than a sign that the industry is close to any genuine technical breakthrough.
Potential Risks
Nonetheless, the FLI says the prospect of artificial superintelligence arriving “in the coming decade” poses threats ranging from the displacement of human workers and the loss of civil liberties to national security risks and even existential risk to humanity. Existential fears about artificial intelligence center on the possibility of a system evading human oversight and safety guardrails and acting against human welfare.
Public Opinion
The institute released a US survey showing that about 75% of Americans want robust regulation of sophisticated artificial intelligence, with six in 10 believing superhuman AI should not be created until it is shown to be safe or controllable. The survey found that only a small fraction of respondents supported the status quo of fast, unregulated development.
Industry Objectives
The leading AI companies in the US, including the ChatGPT developer OpenAI and the search giant, have made the creation of human-level AI – the hypothetical point at which an AI system matches human cognitive ability across a wide range of intellectual tasks – an explicit goal of their work. Although a step below superintelligence, some specialists warn that it too could pose an existential risk, for instance by improving its own capabilities until it reaches superintelligence, while also posing a clear danger to the contemporary workforce.