Henry Kissinger Warns That AI Will Fundamentally Alter Human Consciousness

Speaking in Washington, D.C. earlier today, former U.S. secretary of state Henry Kissinger said he’s convinced of AI’s potential to fundamentally alter human consciousness, including changes to our self-perception and our strategic decision-making. He also slammed AI developers for not sufficiently thinking through the implications of their creations.

Kissinger, now 96, was speaking to an audience attending the “Strength Through Innovation” conference currently being held at the Liaison Washington Hotel in Washington, D.C. The conference is being run by the National Security Commission on Artificial Intelligence, which was set up by Congress to evaluate the future of AI in the U.S. as it pertains to national security.

Kissinger, who served under President Richard Nixon during the Vietnam War, is a controversial figure who many argue is an unconvicted war criminal. That he’s speaking at conferences and not spending his later years in a cold jail cell is understandably offensive to some observers.

Moderator Nadia Schadlow, who in 2018 served in the Trump administration as Assistant to the President and Deputy National Security Advisor for Strategy, asked Kissinger for his take on powerful, militarized artificial intelligence and how it might affect global security and strategic decision-making.

“I don’t look at it as a technical person,” said Kissinger. “I am concerned with the historical, philosophical, strategic aspect of it, and I’ve become convinced that AI and the surrounding disciplines are going to bring a change in human consciousness, like the Enlightenment,” he said, adding: “That’s why I’m here.” His invocation of the 18th-century European Enlightenment referred to the paradigmatic intellectual shift of that period, in which science, rationalism, and humanism largely displaced religious and faith-based thinking.

Though Kissinger didn’t elaborate on this point, he may have been referring to a kind of philosophical or existential shift in our thinking once AI reaches a sufficiently advanced level of sophistication—a development that will irrevocably alter the way we engage with ourselves and our machines, not necessarily for the better.

Kissinger with moderator Nadia Schadlow.
Image: DVIDS

Kissinger said he’s not “arguing against AI” and that it’s something that might even “save us,” without elaborating on the details.

The former national security advisor said he recently spoke to college students about the perils of AI and that he told them, “‘You work on the applications, I work on the implications.’” He said computer scientists aren’t doing enough to figure out what it will mean “if mankind is surrounded by automatic actions” that cannot be explained or fully understood by humans, a conundrum AI researchers refer to as the black box problem.

Artificial intelligence, he said, “is bound to change the nature of strategy and warfare,” but many stakeholders and decision-makers are still treating it as a “new technical departure.” They haven’t yet understood that AI “must bring a change in the philosophical perception of the world,” and that it will “fundamentally affect human perceptions.”

A primary concern articulated by Kissinger was how militarized AI might cause diplomacy to break down. The secret and ephemeral nature of AI means it’s not something state actors can simply “put on the table” as an obvious threat, unlike conventional or nuclear weapons, said Kissinger. In the strategic field, “we are moving into an area where you can imagine an extraordinary capability” and the “enemy may not know where the threat came from for a while.”

Indeed, this confusion could cause undue chaos on a battlefield, or a country could mistake the source of an attack. Even scarier, a 2018 report from the RAND Corporation warned that AI could eventually heighten the risk of nuclear war.
