Social media regulation – a proposition for a social media arbitration mechanism
Dr. Bissera Zankova, Media 21 Foundation
Dr. Valeriy Dimitrov, professor at the Legal Faculty of the University of National and World Economy, Sofia
The issue of how to regulate social media platforms, including social networks, is gaining momentum among stakeholders. It is not an exaggeration to state that sometimes the arguments in favour of the regulatory option turn into regulatory obsession based on the claim that social platforms have a dramatic impact upon our lives and the lives of future generations. In these efforts, some specialists discern attempts to impose “overregulation” on social media without solid guarantees for freedom of expression and freedom of enterprise.
No doubt the impact of social networks is paramount today, but a similar claim was made about the impact of broadcasting during the last century, provoking comparable discussions. However, one cannot be sure how the media landscape will evolve in the upcoming years, or how, or whether at all, social media giants will maintain their powerful positions. Our purpose here is not to review the opinions concerning Internet intermediaries’ regulation but to build on some ideas and suggest a practical solution for good social media regulation that does not affect freedom of expression and freedom of private undertaking.
The OECD Observer emphasizes that “it is one thing to have regulation, it is quite another to have good regulation.” Smart regulation efforts in the EU, for instance, aim at reducing regulatory burdens in EU legislation. The objective is to make European business activities easier and to contribute to growth and strengthened competitiveness on the EU’s Single Market. In the same vein, the new OECD report “Better regulation practices across the EU” (https://www.oecd-ilibrary.org/sites/9789264311732-n/index.html?itemId=/content/publication/9789264311732-en) says that “regulatory policy is one of the main government policy levers for improving societal welfare. It must not only be responsive to a changing environment, but also proactively shape this environment. It is also important to engage citizens and all stakeholders in the development of laws.” The ten-point plan for EU smart regulation suggested by the UK back in 2012 and supported by twelve other member states drew attention specifically to alternatives to EU regulation (https://www.gov.uk/government/publications/10-point-plan-for-eu-smart-regulation).
By and large, good regulation means in our view a well-thought-out and effective model of regulation, non-intrusive and unbiased, which can reconcile different conditions and requirements. Ideally, better regulation practices enhance the lives of both citizens and businesses.
- Social media regulation – a brief overview of the most recent sources
Recently, various ideas regarding Internet intermediaries’ regulation have been thrown into the public space, expanding the debate between more liberal and more conservative minds.
Most experts claim Internet intermediaries cannot properly self-regulate their platforms. Such are, for instance, the inferences in the report on intermediary liability (“Intermediary liability 2.0. A shifting paradigm”, https://sflc.in/intermediary-liability-20-shifting-paradigm). The report discusses the complexity of contemporary online communication by analyzing a variety of legal and journalistic sources. Some of the conclusions in the report agree that “as these platforms grew, it became increasingly difficult for them to self-regulate the large volume of content flowing through their pipelines. The misuse of data available on platforms, coupled with the growing menace of disinformation and misinformation online, increases calls for imposition of greater liability on intermediaries for third party copyright infringement. Access assistance to law enforcement agencies and the rampant harassment and abuse of women and other vulnerable groups have highlighted the failures of these tech companies in regulating their channels.” The report deals with intermediary liability practices in India in particular, which are rooted in a law showcasing a comprehensive and broad definition of intermediary, the intermediaries’ liability rules and the abundant case law of the Supreme Court of India. In 2018, the Draft Information Technology [Intermediaries Guidelines (Amendment) Rules] (“Draft Rules”) was proposed by the government to fight ‘fake news’, terrorist content and obscene content, among others. These new rules placed more stringent obligations on intermediaries to proactively monitor content uploaded on their platforms and to enable traceability to determine the originator of information. This serves as an example that governments are striving to implement regulations that can effectively combat the new challenges on platforms. However, these attempts raise hard questions concerning predominantly the acceptable limits on freedom of speech on the Internet.
In 2017, in a ‘Joint declaration on freedom of expression and ‘Fake News’, disinformation and propaganda’, United Nations Special Rapporteur on Freedom of Opinion and Expression, David Kaye, stated that “general prohibitions on the dissemination of information based on vague and ambiguous ideas, including “false news” or “non-objective information”, are incompatible with international standards for restrictions on freedom of expression, and should be abolished.”(https://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=21287&LangID=E)
In its final report on disinformation and fake news, alongside human rights protection, the UK House of Commons Digital, Culture, Media and Sport Committee recommended the expansion of digital literacy and greater transparency of social media companies (https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/179102.htm).
In a recently published book (The Social Media Upheaval, 2019, Kindle edition), G. H. Reynolds shares his concern that “to police content of social media speech beyond a very basic level of blocking viruses and the like is a bad idea”, the idea being that the more involved and granular the policing becomes, the more it will look like censorship, “which is what it will inevitably become”. Better, according to Reynolds, is to police collusion among platforms, i.e., to apply antitrust scrutiny. As the pressure for regulation will inevitably grow, it is better to regulate in a way that preserves free speech and does not additionally empower tech oligarchs.
Interesting proposals about concrete legal actions are furnished by another acclaimed report that tackles the implementation of national laws online and cross-border legal challenges (Internet and jurisdiction. Global status report 2019. Key findings, https://www.internetjurisdiction.net/publications/paper/internet-jurisdiction-global-status-report-key-findings). The authors reach the conclusion that “the regulatory environment online is characterized by potentially competing or conflicting policies and court decisions in the absence of clear-cut standards” and that the resulting complexity may be detrimental on numerous levels, creating “high levels of legal uncertainty in cyberspace”.
Regulation on the net and especially social media regulation represents one of the many intertwined problems brought forth in digital reality. Apparently, efficient solutions to Internet governance issues and working jurisdictional decisions can create the necessary safe and free environment that will allow regulation to produce tangible results.
As a conceptual basis of our paper we shall use the libertarian theory of economic freedom because, in our understanding, it permits the creation of future-oriented, just, human-rights-based regulation that encourages innovation. That is why we turn to the publications of the renowned Cato Institute, which has published a series of articles discussing intermediaries’ liability from a libertarian perspective. What is important about such an approach is that it enables policymakers to elaborate frameworks that protect freedom of enterprise online without touching on freedom of expression. Further in our discussion we shall outline some of the points in the article “Why the government should not regulate content moderation of social media” by John Samples (https://www.cato.org/publications/policy-analysis/why-government-should-not-regulate-content-moderation-social-media#full); though it explicitly states that its focus is primarily on potential policies for the USA, some of the insights discussed are of a more universal nature.
- The libertarian approach to social media – what is the Cato Institute’s opinion of social media regulation?
Tom Standage, deputy editor of The Economist, thinks two features of social media stand out – the shared social environment established on social media and, in contrast to publishing, the sense of membership in a distributed community. In addition, the Cato article underlines the fact that social media represent an economic institution that has “to generate revenue beyond the costs of providing the service.” Each group of people involved – users, consumers, advertisers and managers – is related to speech, and their relationships create “the forum in which speech happens”; that is why concerns about speech on social media are central to any regulatory effort. Similarity to publishers may prompt policymakers to hold social media companies liable for defamation, but that is not the case in the US due to section 230 of the Communications Decency Act (CDA), which explicitly exempts social media platforms from liability by stating that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. The aim of Congress was to encourage unfettered expression online, to further economic interests on the Internet and to promote the protection of minors by having interactive computer services and their users self-police the Internet for obscenity and other offensive materials.
It is interesting to clarify the stand the US Supreme Court has taken towards private forums of speech over the years. In this direction, John Samples points out that “the history of public values and social media suggests a strong presumption against government regulation. The federal government must refrain from abridging the freedom of speech, a constraint that strongly protects a virtual space comprising speech.” The government has also generally refrained from forcing owners of private property to abide by the First Amendment. The conclusion is that “those who seek more public control over social media should offer strong arguments to overcome this presumption of private governance.”
Other arguments supporting the principle of free, private initiative can also be found in the article. One of the more important questions is whether big tech companies enjoy a monopoly position. Although a few tech companies dominate some markets, that does not mean these firms are leaders for good and can never be displaced. Samples cites David S. Evans, Matchmakers: The New Economics of Multisided Platforms (Cambridge: Harvard Business Review Press, 2016, Kindle edition) who explains that due to the contemporary network effect, previously dominant firms are declining rather than continuing as monopolies:
“Victory is likely to be more transient than economists and pundits once thought. We have reasons to doubt that these firms will continue to dominate their markets.”
In addition, it is not certain that governmental regulation will produce more competition in the online marketplace of ideas. It may simply protect both social media owners and government officials from competition. According to economist Thomas Hazlett, when the FCC carefully planned the structure and form of television service in the last century, it also severely limited the number of competing stations, which resulted in the soaring value of the licenses. Hazlett also quotes an expert who claims that “the effect of this policy has been to create a system of powerful vested interests, which continue to stand in the path of reform and changes.” In our opinion, nobody wishes this system to be perpetuated on social media today.
Terrorism, disinformation and hateful speech can be seen as strong grounds for governmental regulation of social media. However, John Samples stresses the fact that American courts have consistently refused to hold social media platforms liable for terrorist acts. In Fields v. Twitter (Fields v. Twitter, Inc., 2018 WL 626800 (9th Cir. Jan. 31, 2018)) and similar cases, plaintiffs failed to demonstrate that ISIS’s use of Twitter played an instrumental role in the attacks against them. Though social media services cannot be seen as uniquely instrumental in the realization of terrorist plans, any standard of liability that might implicate Twitter in terrorist attacks could prove overbroad (and inconsistent with the First Amendment or with any legal standard of certainty) and also encompass other services that are frequently used by terrorists. On the other hand, public social media provides opportunities for counterspeech and intelligence gathering. Samples recalls that sometimes state security services have asked social media platforms to refrain from removing terrorist accounts, as they provide valuable information concerning the aims, priorities, and sometimes the locations of terrorist actors.
There can be two other potentially compelling reasons for government action: preventing the harms caused by “fake news” and “hate speech.” The terms may prove vague, and their use may lead to legal confusion. The term “fake news” has come onto the public agenda relatively recently, and different definitions have been put forward, including variations such as mis-, dis- and malinformation with their respective consequences. In United States v. Alvarez, the court refused to recognize a general exception to the First Amendment for false speech: “The Court has never endorsed the categorical rule the Government advances: that false statements receive no First Amendment protection.” United States v. Alvarez, 567 U.S. 709 (2012).
In conclusion, Samples considers social media moderation to be more effective than increases in government power in such cases. These companies, among the most successful American enterprises, are technically equipped and far more capable of dealing with instances of dangerous speech. Samples’ suspicion is that “government officials may attempt directly or obliquely to compel tech companies to suppress disfavored speech,” which may result in “public-private censorship”.
In Europe, by contrast, the scales tip towards more regulation and additional requirements for social media platforms, including the threat of huge fines in cases where illegal expression is allowed. The implementation of the agreed Code of Conduct against harmful content online has not fully produced the expected results. Concerning fake news, the Commission suggests a complex of measures but still considers that self-regulation can contribute to policy responses, provided it is effectively implemented and monitored. Any action against critical, satirical, dissenting or shocking speech should strictly respect freedom of expression and include safeguards that prevent its misuse, as well as the Commission’s commitment to an open, safe and reliable Internet. (https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52018DC0236&from=EN)
Regarding hate speech, there is no universally accepted definition, across Europe or the rest of the world, of what constitutes it. The European Commission has intensified its work on fighting hate speech. Following consultations with the leading social media companies (Facebook, Twitter, YouTube and Microsoft), the EC published a Code of Conduct aimed at fighting online hate speech, including an array of public commitments voluntarily accepted by the tech giants. However, it is debatable whether the competent EU bodies and national authorities should impose censorship and public control, as long as “the EU’s broad concept of “hate speech” covers many forms of expression which are varied and complex: therefore, the approaches must also be appropriately differentiated.” (Pana, 2018, at http://www.mondaq.com/x/633648/Social+Media/EU+Steps+For+Fighting+Online+Hate+Speech+Possible+Censorship+Of+Social+Media)
In 2018, the EC proposed a new EU law requiring platforms to take down any terrorism-related content within an hour of a notice being issued. The law additionally forces platforms to use a filter to ensure such content is not reuploaded. Should they fail in either of these duties, governments are allowed to fine companies up to 4% of their global annual revenue. For a company like Facebook, that could mean fines of as much as $680 million (around €600 million). This is widely proclaimed as a necessary measure, though it is not without its opponents. Critics say that the instrument relies on an overly expansive definition of terrorist content, that an upload filter could be used by governments to censor their citizens, and that removing extremist content could prevent non-governmental organizations from documenting human rights crimes in zones of conflict and tension. (https://www.theverge.com/2019/3/21/18274201/european-terrorist-content-regulation-extremist-terreg-upload-filter-one-hour-takedown-eu)
In our view, such governmental initiatives and the elusiveness of the terms involved will always provoke protests from more libertarian-oriented persons and groups.
- Establishment of an arbitration mechanism at social media platforms
We now come to the crux of our work: to propose an internal body that can practically resolve disputes among participants, and between participants and the social media platform. Such a body can also support the effective application of the media codes of conduct without governmental involvement. Social media are organizations that provide a space for the creation and exchange of information among a huge number of users, performing as intermediaries or organizers of an information forum. They cannot be held responsible for the content of the information created and exchanged by third persons; however, since they facilitate debate, they should take steps to properly settle disputes related to that debate.
We have to distinguish the roles of the interested parties in this process. Within the sovereignty of states, the obligation to protect citizens, defend national security and counter terrorism lies within the scope of states. In such cases, governments can adopt special laws protecting high public interests based on internationally recognized principles. States can also adopt multilateral conventions supported by enforcement mechanisms (as in the case of legislation on money laundering, cybercrime, drug trafficking, trafficking in human beings, etc.). The elaboration of these pieces of legislation and conventions should be transparent, based on shared human rights principles and include the efforts of various stakeholders. Outside these legitimate interests, it is not justified for states to impose burdensome administrative requirements on platforms, to curb the freedom of private entities and to meddle in business. Regulatory measures have to abide by the proportionality test, the first part of which is the principle of minimal impairment of the right or liberty. The attempts of a number of nation states to assign controlling, even censoring, functions to social platforms generate problems related both to the right to freedom of expression and the right to free initiative. On the one hand, government interference can suppress certain types of speech and have a chilling effect on expression in general, or affect the economic independence of companies. On the other hand, there are disputes between the participants in the information forum, as well as between the participants and the social media, concerning content, and accordingly claims for the removal of harmful and offensive content, in which the state should not step in.
A possible solution to these issues can be the establishment of an arbitration mechanism (tribunal) for resolving disputes, institutionalized by the social media themselves. Inspiration for this idea comes from the UNCITRAL Model Law on International Commercial Arbitration (1985), with amendments as adopted in 2006. The purpose of the Model Law is to entrench modern, fair and harmonized rules on commercial transactions and to promote the best commercial practices worldwide. The law is designed to assist states in modernizing their laws on arbitral procedure. It reflects universal consensus on key aspects of international arbitration practice, having been accepted by states of all regions and systems. (https://uncitral.un.org/en/texts/arbitration/modellaw/commercial_arbitration) According to the eminent Prof. Roy Goode, “arbitration is a form of dispute resolution in which the parties agree to submit their differences to a third party or a tribunal for binding decisions.” (Commercial Law, third edition, 2004, LexisNexis UK and Penguin Books)
4.1 The model of stock exchange arbitration mechanism
Arbitration tribunals, being institutionalized units of private, non-governmental adjudication, are inherent in self-governing and self-regulating business organizations such as regulated markets for securities and other financial instruments. The most typical representative of these markets is the stock exchange. A stock exchange is a club organization based on the membership of securities traders. The stock exchange creates and enforces rules that regulate both the membership and the trade. Disputes are settled by special arbitrators organized in the stock exchange arbitration tribunal (court). The membership of the club is contractual, and it is mandatory for any member to accept and abide by the so-called “arbitration clause”. The clause requires any dispute regarding financial instruments trading and club membership to be decided by listed arbitrators chosen by the parties. The arbitrators included in the public list are persons of high professional and moral standing. The stock exchange itself is not responsible for the arbitration decisions, since it is often involved in the disputes. The costs of the arbitration decisions (awards) are borne by the parties to the dispute. It is also a principle that the dispute settlement rules are created by the stock exchange itself.
4.2 Social media and the arbitration model
Social media is a business and club-like organization (see the opinion of Tom Standage above) and its rules are binding for the participants in the information forum. In this sense, it can be viewed as an institution similar to a stock exchange. This similarity allows the transposition of the arbitration model to social media and the setting up of such a unit at social media platforms. Exchange underpins the operation of both entities (in the one case it is an exchange of information and ideas, while in the other it is an exchange of special goods such as securities and financial instruments), and their organization is rooted in the principle of membership of participants (acceptance of terms and conditions). In the context of this similarity, the specific features of the stock market and of social media cannot be an obstacle to the establishment of an arbitration tribunal at social media platforms. Arbitration is originally a mechanism for the adjudication of commercial disputes, but stock exchange traders represent many non-commercial persons, and the users of social media services likewise comprise numerous non-commercial persons. In our view, there is no fundamental obstacle to the use of this method by non-traders, if there is a contractual agreement for its implementation. The platforms’ terms and conditions can bind the users of their services through the incorporation of an arbitration clause.
Through the arbitration procedure, disputes about the content of the information on social platforms could be resolved in an impartial and professional manner by unbiased arbitrators selected by the participants themselves. These arbitrators should be recognized media lawyers and professionals with high personal integrity.
The arbitration process for resolving disputes is significantly faster and cheaper than litigation. We shall quote again Prof. Goode, who stresses that due to its “consensual nature the arbitration mechanism avoids unnecessary delay or expense” (Commercial Law, third edition, pp. 1174–1175).
Arbitration cases are in principle decided in a single instance, and only in exceptional and rare cases can a court review the arbitration awards.
The renowned professors Loss and Seligman draw attention to the fact that under US securities legislation “courts have limited power to review arbitration awards (at the stock exchanges – B.Z., V.D.) on such grounds as an award being made in “manifest disregard of the law”, or its being “completely irrational”, or “arbitrary and capricious”. A court can also void an arbitration agreement if it finds that there was fraud in the inducement of the arbitration clause itself.” (Loss, L. & Seligman, J. Fundamentals of Securities Regulation, third edition, 1995, Little, Brown and Company, Boston, New York, Toronto, London, p. 1139). Therefore the court is not completely excluded from the process but can interfere to protect the parties’ interests in exceptional cases when the arbitration threatens the stability of the legal order.
The arbitration settlement of disputes can consolidate the mediating function of social media and liberate them from the tasks of censors and controllers of content imposed by legislation in some countries.
The adoption of an arbitration clause may restore public trust in social media and their capability to self-regulate.
The recognition of this method by the national states on whose territories the social media operate may be accomplished either by the adoption of appropriate legislation or by the conclusion of multilateral international treaties.
The logic of creating and implementing such a model requires, as a first step, arbitration units to be established in the nation states where social media operate. The institutionalization of arbitration depends on the creation of a representative office in the territory of each state, at which arbitration units can be set up.
The proposition of an arbitration model of settling disputes at social media platforms comprises an approach that assures a wide space for self-regulation of social media. It can better safeguard both freedom of expression and free business initiative. At the same time, this model is also a form of media protection against unjustified and arbitrary state regulatory interventionism, which may easily jeopardize freedom of expression and economic freedom.
It is commendable for social media to organize and try out the form of dispute settlement offered here, and to establish and follow good practices in this regard. One should recall that the UN Guiding Principles on Business and Human Rights (2011) require that “business enterprises should establish or participate in effective operational-level grievance mechanisms for individuals and communities who may be adversely impacted.” (https://www.ohchr.org/documents/publications/GuidingprinciplesBusinesshr_eN.pdf)
These mechanisms should be people-centred, easy to implement and able to generate mutual trust. It is worth remembering the advice of the ECtHR that “the Internet is an information and communication tool particularly distinct from the printed media, especially as regards the capacity to store and transmit information. The electronic network, serving billions of users worldwide, is not and potentially will never be subject to the same regulations and control.” (Węgrzynowski and Smolczewski v. Poland (2013) and Editorial Board of Pravoye Delo and Shtekel v. Ukraine (2011)) Therefore stakeholders have to discuss various options.