The Door That Cannot Be Closed: Warnings from AI Godfather Geoffrey Hinton

Geoffrey Hinton, the revolutionary British-born Canadian computer scientist and Professor Emeritus at the University of Toronto, warned an audience of thousands about the dangers of AI at Collision 2024. The panel, titled “Can We Control AI?”, was moderated by Canadian author and essayist Stephen Marche.

National-Level AI Utilization as a Top Priority

Hinton started his panel by going through the major AI-related threats to humanity, including cybercrime; surveillance; autonomous weapons; misinformation and falsified photos, audio, and video; the application of AI in biotech and bioterrorism; job losses; and “the existential threat that it will go rogue and take over.”

Of these various threats, his growing concern is the application of AI by governments, namely in surveillance and the military.

“The existential threat is the one I’ve talked about the most, but that’s not the most urgent part,” Hinton admitted, emphasizing that the application of generative AI by governments should be the most pressing concern.

Hinton further emphasized his fears by pointing out how even countries that do have some regulations on AI research will have exemptions for the development of military technology.

“The EU legislation has a clause for none of the restrictions to apply to the military application of AI,” he explained. 

Although Hinton is worried about a dystopian surveillance state, he is hopeful that even dictatorships and oligarchies will be able to unite around restricting the development of lethal autonomous weapons, which he says are coming “very soon.” He believes that the global threat posed by such weapons will compel diverse governments to collaborate on establishing stringent regulations, citing how governments rallied around the restriction of chemical weapons.

“It will be like [the development of] chemical weapons. The Geneva Conventions mostly held after the Great War,” he argued.

Ideas for Combating Fake Videos and Images Created by Generative AI

Related to Hinton’s concerns about governments is how bad-faith actors can use AI to propagate misinformation. He warns that AI can be harnessed to create convincing yet false narratives, which can spread rapidly and undermine the truth. Hinton does, however, have an interesting solution to this particular problem.

“Pay for a lot of advertisements where you have a very convincing fake video, and then right at the end of it, it says, ‘This was a fake video,’” he said.

“This way you can inoculate the public.”

Will We Ever See a World Dominated by AI, Like in Sci-Fi?

Hinton devoted the latter part of his talk to the existential threat of AI, a subject he has spent much of his recent career discussing. He envisions futures in which humanity is pushed to the point of extinction by an AI that turns on its creators, or in which a single AI “runs everything and keeps [humans] like pets.”

Both scenarios sound like post-apocalyptic science fiction, but Hinton believes that the dangers of AI are real and imminent. His plausible scenario begins with the creation of an AI capable of autonomously formulating its own subgoals.

“By giving AI agents the ability to create subgoals, they will quickly realize the best subgoal is giving themselves more control,” he explained.

“[AI agents] can achieve what they want much more efficiently without humans.”

Hinton’s solution to this problem is for the world’s governments to regulate tech companies, ensuring strict oversight of the development of AI technologies.

“I think governments should be involved in forcing the big AI companies to do lots of safety experiments,” he stated.

Hinton ended the talk by discussing some of the good things that can come out of the development of generative AI, such as how it can support healthcare workers.

“For all the negatives there are an equal number of good things,” he admitted.

However, Hinton’s overall pessimistic sentiment can be summarized with his final statement.

“The problem is us.”

Editor’s Opinion

The editor would like to point out some of the logical fallacies in Hinton’s thinking. He says that his primary concerns are government overreach in using artificial intelligence to enforce a surveillance state, the creation of unaccountable weapons, and the spread of misinformation.

At the same time, his solution is that a large government should be allowed to implement safeguards that heavily restrict the development of AI. Holding these two opinions is inherently contradictory, because entrusting a large government with the power to regulate AI may inadvertently lead to the very overreach Hinton fears.

Hinton himself acknowledged that even governments that heavily restrict AI do not impose the same restrictions on domestic organizations that study and develop military technology. His solution assumes that governments are inherently good-faith, rational actors working in the public interest. One need only look at the many active dictatorships to see the fallacy.

So, is the solution to deregulate the AI industry entirely? Not exactly. Deregulation would likely lead to an unchecked acceleration of AI development, which would in turn exacerbate all of Hinton’s worries. Technology corporations have a rational interest in pursuing the most powerful AI technologies, as doing so gives them an advantage over their competitors.

The question therefore becomes where to put safety measures in place to effectively manage the development of AI. This is not easily solved; balancing the public interest with innovation will require a nuanced approach.

It is the opinion of the editor that policymakers in Japan should concern themselves with developing laws and frameworks that promote ethical standards and preserve the public good, without overreaching into authoritarian territory.

At the same time, Japanese companies in the technology sector should look for ways to self-regulate. This may require an AI regulatory body that is semi-accountable to the government but made up of industry professionals and experts. Companies could also sign on to a code of ethics committing them to responsible AI practices, fostering a culture of accountability and ethical responsibility.

Additionally, Japanese diplomats and Japanese companies operating globally should make an effort to foster these sentiments around AI abroad. As one of the world’s foremost leaders in the technology sector, Japan can lead by example and begin a global movement to ensure the responsible development of AI and to avoid the future that Hinton envisions.
