The New Political Matrix

Civil war in Syria. China’s authoritarian ambition. Russian chicanery. Iranian proxies interfering with the international fuel supply.

Most global threats facing America are relatively straightforward: they’re country-based, tangible dangers to U.S.-enforced global stability. The bad actors are clear enough: Putin, Erdogan, Xi Jinping, Khamenei. The weapons are conventional: guns, bombs, drones, shackles, concentration camps, Facebook memes. The values at stake are classical: human rights, commercial freedom, the integral dignity of all mankind.

That current threats are easy to identify doesn’t make them simple to solve. But, for students of the history of warfare, they aren’t de novo. The one exception is the digital didoes of Muscovite netizens -- election interference is an old practice, but manipulating social media with Jesus-versus-Hillary graphics is new.

The press likes to make outrage hay out of Russia’s attempted manipulation of U.S. political contests, but the effect is frequently overstated. Chiliastic memes meant to sway elections may be small-bore for now, but they presage something more diabolical down the road. The age of digital warfare is about to get worse.

The New Yorker recently covered the rise of artificial intelligence in basic composition. Back in the spring of 2018, Google introduced Smart Compose, a “predictive text” feature that suggests phrases to complete sentences for Gmail users. The words it suggests are basic at first (“Have a great weekend!”), but the tool learns from users over time, adjusting the syntax, coming ever closer to mimicking the way real people speak to one another.

The technology isn’t perfect. Yet. There is still a staleness to artificially generated text. But that staleness is also present in everyday discourse, particularly in business, where canned phrases, sanitized daily by H.R., are ubiquitous.

Still, AI tech is fast catching up, breezing past Moore’s Law, with the computing power devoted to training increasing roughly tenfold each year. Last February, the nonprofit company OpenAI refused to make public its own AI text software known as GPT-2. The reason? It’s straight out of a James Cameron-directed dystopia: the machine was too good at replicating human communication. The doyens of OpenAI -- public-minded Silicon Valley grandees like Sam Altman and Elon Musk -- want to ensure artificial intelligence isn’t just used responsibly but that its benefits are egalitarian, not hoarded by the very few.

That’s a nice sentiment, but the road to autocracy is paved with high-minded intentions. GPT-2 was released in attenuated form to prevent it from generating too-easy-to-believe “fake news.” We’re supposed to believe this preventative measure will keep malefactors from flooding the web with shedloads of false narratives. Do the farsighted operators of OpenAI think nobody’s heard of nuclear fission’s humble beginnings?

We’re already too willing to believe news that boosts our priors. As Warren Buffett once said, “What the human being is best at doing is interpreting all new information so that their prior conclusions remain intact.” Learned elites are by no means immune to the bias -- in some cases, they are more susceptible, given their insular living. It’s why, despite a multi-million-dollar federal investigation putting paid to the notion, it’s still a common belief that Donald Trump and Vladimir Putin, clad in all-black raiment, went around rigging ballot machines in Kenosha County on the night of November 8th, 2016.

Unchecked confirmation bias, encouraged by social-media use, makes us more vulnerable to mass-produced fake news. Combined with the surge of “deepfake” videos, which purport to show real people, including the President of the United States, engaged in unseemly behavior, the rise of AI-generated content on the internet threatens our very conception of reality.

It’s doubtful that our laws can begin to meet this challenge. Some states have already banned the circulation of deepfake videos before elections. How legal code, written on reams of physical paper and stored in file cabinets in concrete basements, will compete with the trillions of data bits flitting around the internet is an open question. Innumerable videos and text files are uploaded and deleted online within minutes every day. All that’s needed to sow discord is to make a brief impression on a few thousand eyeballs.

America’s representative democracy only works with a base of shared values. But if what’s real and what’s a simulacrum of reality become too mixed up to tell apart, our bonds of commonality become more tenuous, almost to the point of breakage.

One more reason to fear the rise of communication verisimilitude: our political leaders are a bunch of schlemiels. They’re easily fooled by telephonic pranksters, imparting sensitive information to basement-dwelling funsters. Just imagine if Adam Schiff were actually contacted by SVR agents masquerading as friendly CSIS desk jockeys. Are we supposed to sleep soundly with such a thought?

Conservatives rightly fear being shut out of the internet’s most trafficked forums. Soon that feeling will dissipate into something less than fear but just as disconcerting: bewilderment at the effaced line between what’s real and unreal. 

Social-media censorship won’t matter when nothing on the internet can be trusted.