Prince Harry and tech pioneers call for superintelligent AI ban

On 22 October 2025, a coalition of Nobel laureates, cultural figures and conservative media voices published a statement urging a halt to the development of AI “superintelligence”.
Leading figures in science and culture argue that advanced AI systems cannot be developed safely without broad scientific agreement on safety standards and clear public consent. The signatories include Nobel laureate and neural-network pioneer Geoffrey Hinton, Université de Montréal professor Yoshua Bengio, Apple co-founder Steve Wozniak, Virgin founder Richard Branson, former Irish president Mary Robinson and actor Stephen Fry. Even Prince Harry and Meghan Markle added their names.

Such an eclectic alliance reflects growing skepticism in the United States about Silicon Valley’s ambitions, and it gives regulators a public mandate to push for tighter oversight of the field.

The campaign, coordinated by the Future of Life Institute (FLI), renews calls for a global pause on the creation of “superintelligent” AI systems until their risks are properly understood. In the statement, supporters warn of large‑scale labor‑market displacement, erosion of civil liberties, national‑security threats and, in the worst case, existential risks if development slips beyond human control.
The criticism targets not everyday users of AI apps, but the largest tech companies driving what they describe as a competitive race towards systems that could potentially surpass humans.

The current initiative builds on an earlier, less categorical call to pause uncontrolled AI experiments, voiced in spring 2023 and endorsed by tens of thousands of signatories, including industry leaders and researchers. The proposal was widely debated at scientific conferences and in the media, but did little to slow progress. Leading labs continued scaling compute and releasing more general‑purpose models.

Today’s statement is unlikely to produce an immediate moratorium, but it may open a window of opportunity for regulators, who can now press for transparent licensing, independent audits, stress tests and public reporting by AI developers.

Parliamentary hearings and new bills are also likely, including proposals to restrict training‑data sources for neural networks and to cap concentrations of computing resources.

Some critics warn that heavy-handed regulation could stifle innovation. Yet the breadth of this coalition, from Harry and Meghan to conservative media figures Steve Bannon and Glenn Beck, makes it one of the most visible attempts in recent years to shape the agenda around AI’s future.