How Should Governments Regulate Facebook and Other Social Media Platforms? Proposing a New Paradigm for Regulation.

Governments and social media companies are in the midst of a heated debate on how to regulate social media platforms. The debate often descends into finger-pointing and mutual suspicion. Many Governments, for example, believe that social media companies like Facebook, Twitter and YouTube cannot be trusted to act in the public interest because they will always prioritise business interests. In my previous article “Policy Issues Facing Social Media Companies: The Case Study of YouTube”, I argued that social media companies are rarely trading off the public interest for business interests. More often, they are trading off competing public interests against one another, which creates dilemmas that Governments may not fully understand.

This article goes a step further: it argues that Governments must fundamentally shift their paradigm for regulating social media companies, recognising that social media companies, like Governments, are representations of public interests. Here it goes:

==

Proposing a New Paradigm for Regulating Social Media Companies

By enabling anyone to produce and share content, social media platforms like Facebook and YouTube have decentralized how information and opinions are shared in society. This has brought tremendous public value, such as freedom of speech and enabling access to education. However, it has also enabled individuals to spread hate speech, terrorist agendas and fake content, which can threaten national security and social harmony.

Some argue that the social media space should be left completely free, at the discretion of users: users will rise up to counter offensive or fake material, or judge for themselves that it should be ignored.

This anti-regulation approach is irresponsible towards the public interest. Targeted defamation and incitement to racist violence can easily go viral on social media platforms. Without swift action by the authorities, the consequences for personal wellbeing and national security could be irreparable.

Some regulation is necessary to strike a balance between advancing free speech and protecting public interests such as national security and social harmony – the question is how.

“Co-regulation”: A New Paradigm In Regulation

I propose a new paradigm for how Governments regulate social media companies, which I term ‘Co-regulation’.

In the media space, Governments have traditionally seen themselves as guardians of public interest, enacting regulation to prevent content which violates standards of public decency. Governments must recognise that unlike traditional media companies, where content is generated by a small group of individuals, social media platforms represent a broad base of content producers and users. Social media platforms, like Governments, are avenues for public interests to be represented.

Hence, Governments cannot see themselves as enforcers of public interest against social media companies. Instead, Governments and social media companies are joint stewards of public interests on social media platforms. This is the paradigm which undergirds ‘Co-regulation’.

‘Co-regulation’ has three components:

First, content standards should be interpreted and operationalized on social media platforms through an inclusive mechanism. When it comes to interpreting content laws, the scale and speed of the digital world make court decisions impractical. While it would be expedient to assign responsibility to social media companies to interpret and operationalize content laws, this would be unrepresentative of public interests. One idea is for Governments and social media companies to co-develop a swift mechanism which allows a spectrum of public voices to influence the interpretation of content laws in grey cases.

Second, Governments and social media companies should establish a system of public accountability. A good example is the Code of Conduct on Countering Illegal Hate Speech Online, established by the European Commission and four major social media platforms in 2016. It sets public goals for how quickly illegal hate speech should be reviewed and removed, and results are published on a regular basis.

Third, Governments and social media companies should both make commitments, and be held jointly accountable, to public goals. For example, while social media companies invest in systems to detect and review potentially illegal content, Governments should engage the public on what constitutes ‘hate speech’ and ‘fake news’, so that user-flagging is more effective.

Why Not Legislate the Problem Away?

Germany, for example, has argued that without legislation social media companies will not take their responsibilities seriously, and has implemented a law that allows hefty fines on social media companies which fail to take down ‘obviously illegal content’.

In my view, the costs of legislation generally outweigh the benefits. The upside – better enforcement – is limited. Business incentives to remove objectionable content are already in play: advertisers are social media platforms’ main source of revenue, and none want their ads to be associated with objectionable content. An advertiser boycott of YouTube earlier this year suggests that these market forces are alive and well.

On the other hand, legislation can have dangerous effects. Placing legal responsibility on social media companies to determine the lawfulness of content on their platforms creates an incentive to err on the side of greater caution, i.e. more censorship. Beyond undermining the right to free speech, companies may inadvertently censor important public feedback, for example on Government corruption. Moreover, enacting legislation signals that social media companies cannot be trusted to act in the public interest, which is inimical to the principles of co-regulation.

Conclusion

Governments worldwide should recognise social media platforms as legitimate representations of public interests. As co-stewards of the public interest, Governments and social media companies hold joint responsibility and accountability for regulating the social media space in a way that best represents public interests. It is about time Governments and social media companies worked collaboratively under this new paradigm of co-regulation.


<Just like all the articles on http://www.techandpublicgood.com, this article represents my personal views and not the view of my organization> 


Source: https://www.thelocal.de/20170314/proposed-law-would-fine-facebook-up-to-50-million-for-hate-speech