Is It Time to Break Up Facebook?

New Zealand, a country of nearly five million people whose indigenous people are the Maori, has rarely been the source of important political proposals or taken the lead in global action.  In May 2019, however, its prime minister, 38-year-old Jacinda Ardern, the country's youngest prime minister in more than 150 years, played a central role in international diplomacy by initiating a plan to eliminate terrorist, hateful, and violently extremist content online and to stop the internet from being used to spread it.  She has thereby entered the modern debate over how, and to what extent, governments should exercise control over large tech companies that can disseminate, and have disseminated, hateful content and misinformation, while still preserving civil liberties.

In its broader form, the issue of control has long been controversial, qualified, and open to judgment.  The Universal Declaration of Human Rights (1948), Article 19, states that "[e]veryone has the right to freedom of opinion and expression."  Yet this right was never treated as absolute. The International Covenant on Civil and Political Rights (1966), Article 19, provides that the exercise of these rights (the freedom to seek, receive, and impart information and ideas of all kinds) carries with it "special duties and responsibilities."  Similarly, the 1789 French Declaration of the Rights of Man and of the Citizen, Article 11, says, "Every citizen may, accordingly, speak, write, and print with freedom, but shall be responsible for such abuses of this freedom as shall be determined by law."

Ardern, New Zealand's prime minister since October 2017, joined French president Emmanuel Macron in Paris on May 15, 2019 to sign the "Christchurch Call."  It was named after the terrorist attacks on two mosques in Christchurch, New Zealand, that killed 51 Muslim worshippers on March 15, 2019.  The gunman livestreamed 17 minutes of the massacre on Facebook, an unprecedented use of social media as a weapon.  Ardern promptly announced a national ban on military-style semiautomatic weapons, assault rifles, and high-capacity magazines.  She criticized the availability of such weapons in the U.S. and, separately, condemned white nationalist ideology.

The central concern of the Call, a version of a historic issue, was the need for government policing of the internet and social media to curb extremist content and terrorist propaganda, but it was inevitably bound up with possible conflict over the government's role in controlling free speech and press freedom.  The issue is one more illustration of the difficult line between freedom of speech and control of hate speech, fake news, and the abuse of social media.  Is free speech in today's democratic society unlimited, or is it to be balanced against considerations of safety and privacy?

In the U.S., the First Amendment to the Constitution, "Congress shall make no law ... abridging the freedom of speech, or of the press," remains contested, as shown in Supreme Court cases from Schenck v. United States (1919) to recent ones such as McCullen v. Coakley (2014).  A newer problem, in the U.S. and elsewhere, is the regulation and oversight of social media companies.  It is especially difficult because these companies are widely used yet also divisive and the object of consumer and citizen dissatisfaction.

What is clear in the U.S. is that no government regulations require the large tech companies, such as Twitter and Facebook, to ensure that any particular point of view, political or economic, is fairly expressed.  The companies may allow the expression of different, even conflicting, positions, but it has become increasingly clear that they should not allow their platforms to be used by individuals, such as Louis Farrakhan, or by groups that promote or engage in violence and hate, regardless of ideology.

The problem is made more difficult by two factors: giants like Facebook and Twitter may encourage sensationalist material because it captures people's attention, and some outlets, such as Google's Top Stories, by a considerable majority feature stories drawn from left-wing sources.

The non-binding Christchurch Call was adopted on May 15, 2019 by 18 countries, including the U.K., France, Germany, Italy, and Canada, and by eight tech companies, though not by the U.S.  It proposes voluntary rules to keep internet services from disseminating extremist content without undermining the principle of free expression.  The Call is intended to be the start of a stronger effort to deal with the use of the internet to spread violent and extremist ideologies.

Some countries were already prepared to act.  The U.K., troubled by the case of 14-year-old Molly Russell, who committed suicide allegedly after viewing images glorifying self-harm and suicide on Instagram, and France have both proposed laws requiring companies to remove harmful content.  Russia in March 2019 passed bills making it a crime to "disrespect" the state or spread "fake news" online.

Governments and technology companies alike agree that the internet and social media have been misused, transformed to some extent into a propaganda machine fostering division and hostility.  Media companies have recognized the need to identify and remove extremist content.  They have begun to share databases of extremist posts and images so that such material does not spread across multiple platforms.  Perhaps the most surprising statement came on May 10, 2019, from Mark Zuckerberg, CEO of Facebook, who said internet companies should be accountable for enforcing standards on harmful content.

It is gratifying that the major internet companies have said they will monitor more closely material that facilitates violence.  All of them will condemn advocacy of terrorism.  The problem is that they may differ on issues such as hate speech, anti-Semitism, and misinformation.

The division is clear between governments and citizens urging more state control over social media on the one hand and those, like the Trump administration, who argue on the other that the "best tool to defeat terrorist speech is productive speech."  Thus President Donald Trump emphasizes promoting credible alternative narratives as the primary means of defeating terrorist messaging.  Nevertheless, the Trump administration stated that it supports the "overall goals" of the Christchurch Call and would "continue to engage governments, industry, and civil society, to counter terrorist content on the internet."

Yet the Call is, in effect, modest.  Many countries have already strengthened legislation imposing penalties on companies that fail to remove offensive content once notified of it.  Germany has passed a law fining companies that do not remove hate speech.  Government regulation alone will not solve the social media problem, but pressing tech companies to use their creative powers to find solutions, while maintaining internet freedom and protecting the internet as a force for good, is a significant international and cooperative objective.

The U.S. should join this effort wholeheartedly.  All Americans, whether believers or skeptics of the Mueller Report, know that Russia exploited U.S. social media to spread disinformation in 2016.  The Christchurch Call is a sound compromise, reconciling government pressure on tech companies to eliminate extremist and harmful content with the preservation of free speech and expression.  The U.S. has indeed pressed tech companies to do this, in effect a qualified implementation of the Call's objective, but it remains to be seen what the "productive speech" of which President Trump speaks will amount to.
