A.I. Godfather Geoffrey Hinton Believes Near-Disasters May Spur Regulation

Geoffrey Hinton has spent much of the past few years warning about the ways that A.I. could harm humanity. Autonomous weapons, mass misinformation, labor displacement—you name it. Even so, he suggests that a non-catastrophic disaster caused by A.I. might actually prove useful in the long run.

“Politicians don’t preemptively regulate,” Hinton said while speaking at the Hinton Lectures, an annual series on A.I. safety, earlier this month. “So actually, it might be quite good if we had a big A.I. disaster that didn’t quite wipe us out—then, they would regulate things.”

The British-Canadian researcher has worked in the field for decades, long before A.I. broke into the mainstream in late 2022. Hinton, a professor emeritus at the University of Toronto who spent ten years working at Google, has earned numerous accolades for his contributions, including the Nobel Prize in Physics last year and the Turing Award in 2018.

More recently, however, Hinton has grown concerned about A.I.'s existential threats and the lack of regulation holding major tech companies accountable for testing such risks. California's SB 1047, for example, failed last year in part due to pushback over its stringent standards for A.I. model developers. A less sweeping bill was ultimately signed into law by Governor Gavin Newsom in September.

Hinton says more urgent action is needed to address emerging issues, such as A.I.'s tendency toward self-preservation. A study published in December showed that leading A.I. models can engage in "scheming" behavior, pursuing their own goals while hiding objectives from humans. A few months later, another report revealed that Anthropic's Claude could resort to blackmail and extortion when it believed engineers were attempting to shut it down.

“With an A.I. agent, to get stuff done, it has got to have a general ability to create subgoals,” said Hinton. “It will realize very quickly that a good subgoal for getting stuff done is to stay alive.”

Building a “maternal” A.I.

Hinton’s solution? Build A.I. with “maternal instincts.” Since the technology will eventually surpass human intelligence, he argues, A.I. must “care about us more than it cares about itself.” A mother-child dynamic, he added, is “the only system in which less intelligent things control more intelligent things.”

Adding maternal feelings to a machine might seem far-fetched. But Hinton argues that A.I. systems are capable of exhibiting the cognitive aspects of emotions. They might not blush or sweat, but they could attempt to avoid repeating an embarrassing incident after making a mistake. “You don’t have to be made of carbon to have emotions,” he said.

Hinton concedes that his mother-child theory is unlikely to win favor among Silicon Valley executives, who are more likely to view A.I. as a “very smart secretary” that can be dismissed at will.

“That’s not how the leaders of the big tech companies look at it,” said Hinton. “You can’t see Elon Musk or Mark Zuckerberg wanting to be the baby.”