How Should Governments Regulate Facebook and Other Social Media Platforms? Proposing a New Paradigm for Regulation.

Governments and social media companies are in the midst of a heated debate on how to regulate social media platforms, one that often descends into finger-pointing and mutual suspicion. For example, many Governments believe that social media companies like Facebook, Twitter and YouTube cannot be trusted to act in the public interest because they will always prioritise business interests. In my previous article “Policy Issues Facing Social Media Companies: The Case Study of YouTube”, I argued that social media companies are often not trading off public interests for business interests. More often, they are trading off competing public interests, which creates dilemmas that Governments may not understand.

This article goes a step further and argues that Governments must fundamentally shift their paradigm towards regulating social media companies, recognizing that social media companies, like Governments, are representations of public interests. Here it goes:

==

Proposing a New Paradigm for Regulating Social Media Companies

By enabling anyone to produce and share content, social media platforms like Facebook and YouTube have decentralized how information and opinions are shared in society. This has brought tremendous public value, such as freedom of speech and enabling access to education. However, it has also enabled individuals to spread hate speech, terrorist agendas and fake content, which can threaten national security and social harmony.

Some argue that the social media space should be completely free and left to the discretion of users: users will rise up to counter offensive or fake material, or judge for themselves that it should be ignored.

This anti-regulation approach is irresponsible towards public interests. Targeted defamation and incitements to racist violence can easily go viral on social media platforms. Without swift action by the authorities, the consequences for personal wellbeing and national security could be irreparable.

Some regulation is necessary to strike the balance between advancing free speech and protecting public interests such as national security and social harmony – the question is how.

“Co-regulation”: A New Paradigm In Regulation

I propose a new paradigm for how Governments regulate social media companies, which I term ‘Co-regulation’.

In the media space, Governments have traditionally seen themselves as guardians of public interest, enacting regulation to prevent content which violates standards of public decency. Governments must recognize that unlike traditional media companies, where content is generated by a small group of individuals, social media platforms represent a broad base of content producers and users. Social media platforms, like Governments, are avenues for public interests to be represented.

Hence, Governments cannot see themselves as enforcers of public interest against social media companies. Instead, Governments and social media companies are joint stewards of public interests on social media platforms. This is the paradigm which undergirds ‘Co-regulation’.

‘Co-regulation’ has three components:

First, content standards should be interpreted and operationalized on social media platforms through an inclusive mechanism. When it comes to interpreting content laws, the scale and speed of the digital world make court decisions impractical. While it would be expedient to assign responsibility to social media companies to interpret and operationalize content laws, this would be unrepresentative of public interests. One idea is for Governments and social media companies to co-develop a swift mechanism which allows a spectrum of public voices to influence the interpretation of content laws in grey cases.

Second, Governments and social media companies should establish a system of public accountability. A good example is the Code of Conduct on Countering Illegal Hate Speech Online, established by the European Commission and four major social media platforms in 2016. It sets public targets for how quickly illegal hate speech should be reviewed and removed, and results are published on a regular basis.
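
To make such accountability concrete, here is a minimal sketch of the kind of headline metric a code of conduct like this tracks: the share of notifications reviewed within 24 hours. The field names and data below are hypothetical illustrations, not the Commission’s actual methodology:

```python
from datetime import datetime, timedelta

def share_reviewed_within(notifications, hours=24):
    """Share of hate speech notifications reviewed within `hours`.
    Illustrative only; field names and data are hypothetical."""
    window = timedelta(hours=hours)
    on_time = sum(1 for n in notifications
                  if n["reviewed_at"] - n["flagged_at"] <= window)
    return on_time / len(notifications)

# Two hypothetical notifications: one reviewed in 6 hours, one in 48.
notifications = [
    {"flagged_at": datetime(2017, 5, 1, 9, 0),
     "reviewed_at": datetime(2017, 5, 1, 15, 0)},
    {"flagged_at": datetime(2017, 5, 1, 9, 0),
     "reviewed_at": datetime(2017, 5, 3, 9, 0)},
]
print(f"{share_reviewed_within(notifications):.0%} reviewed within 24h")  # 50%
```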

Third, Governments and social media companies should both make commitments, and be held jointly accountable, to public goals. For example, while social media companies invest in systems to detect and review potentially illegal content, Governments should engage the public on what constitutes ‘hate speech’ and ‘fake news’, so that user-flagging is more effective.

Why Not Legislate the Problem Away?

Germany has taken the legislative route, implementing a law that enables hefty fines for social media companies which fail to take down ‘obviously illegal content’. The underlying argument: without legislation, social media companies will not take their responsibilities seriously.

In my view, the costs of legislation generally outweigh the benefits. The upside – better enforcement – is limited, because business incentives to remove objectionable content are already in play: advertisers are social media platforms’ main source of revenue, and none want their ads associated with objectionable content. An advertiser boycott of YouTube earlier this year suggests that these market forces are alive and well.

On the other hand, legislation can have dangerous effects. Placing legal responsibility on social media companies to determine the lawfulness of content on their platforms creates an incentive to err on the side of caution, i.e. more censorship. Beyond undermining the right to free speech, companies may inadvertently censor important public feedback, for example on Government corruption. Moreover, enacting legislation signals that social media companies cannot be trusted to act in the public interest, which is inimical to the principles of co-regulation.

Conclusion

Governments worldwide should recognise social media platforms as legitimate representations of public interests. As co-stewards of public interest, Governments and social media companies hold joint responsibility and accountability for regulating the social media space in a way that best represents those interests. It is time for Governments and social media companies to work collaboratively under this new paradigm of co-regulation.

 

<Just like all the articles on http://www.techandpublicgood.com, this article represents my personal views and not the view of my organization> 

 

[Image: Germany’s proposed law would fine Facebook up to €50 million for hate speech. Source: https://www.thelocal.de/20170314/proposed-law-would-fine-facebook-up-to-50-million-for-hate-speech]

Policy Issues Facing Social Media Companies: The Case Study Of YouTube

One of the goals of www.techandpublicgood.com is to bridge the worlds of Government, tech and business, which often hold a degree of suspicion towards each other. This article dives deep into controversial policy issues surrounding social media companies.

As a case study, it elucidates the challenges, considerations and dilemmas behind YouTube’s policies. This is me, a Government policy-maker, putting myself in the shoes of a YouTube policy-maker. I figure our considerations are similar despite our different contexts. If you know any of these issues better than I do, feedback is very much welcome.

The Unexpected Responsibilities of Social Media Companies

We live in an increasingly divided world. The forces driving these divisions – for example, rising income inequality and geopolitical, racial and religious tensions – were in play long before the advent of social media.

However, social media has provided a channel for divisions to widen. Lowering the barriers for individuals to share their knowledge and opinions, and to make them go viral, has brought tremendous benefits, such as spreading education and freedom of speech. On the other hand, it has given greater voice and reach to malicious or ‘fake’ content. And algorithms designed to push us towards what we are most likely to click create echo chambers, reinforcing our beliefs and biases.

When a flurry of social media companies took to the scene in the 2000s, their intention was to create platforms for people to find what they wanted – friends, funny videos, relevant information, roommates or hobbyist items. Very few would have imagined that their platforms would completely change how everyday folks conversed and debated, shared and consumed information.

Policy issues facing social media companies

Today, social media companies are adjusting to the new responsibilities that this influence entails. Here is an overview of the issues at stake.

  1. Free speech and censorship

It is important to recognize the role of social media in democratizing how information is generated, shared and consumed. At the same time, not everything is appropriate to be shared online. Social media platforms recognize that they must have a moral view on harmful content that should be taken down, for example, content which aims to instigate violence or harm to others.

However, censorship cannot be overused. Social media platforms cannot become arbiters of morality because many issues are subjective, and it is not the platform’s role to judge who is right: the same LGBT content can be affirming for some, but offensive for others. When is it fake news, and when merely a different interpretation? Here’s a real dilemma: let’s say someone reports an outbreak of disease on Facebook. The Government requests that the report be taken down until investigations are completed, because it will incite unnecessary fear in the population. Is Facebook best placed to assess who is right?

In general, a social media platform’s policy must identify and take down content that is inherently harmful, while catering to subjectivity by providing choice – to users, on the content they receive, and to advertisers, on the content their brands are associated with. It is an intricate balance to strike, requiring nuanced, consistent policy backed up by a strong and coherent detection, enforcement and appeals regime.

  2. Copyright infringements

Another policy area surrounds copyright. Individuals sharing content online may inadvertently or intentionally infringe on others’ copyrights. On one level, better detection of copyright infringements is needed. YouTube invested $60m in a system called Content ID, which allows rights holders to submit reference copies of their content so that YouTube can identify where it is being used.
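
To illustrate the underlying idea, here is a toy sketch of fingerprint matching in the spirit of a system like Content ID. Real systems use robust audio and video hashing at massive scale; the hashing scheme, class and function names below are my own illustrative assumptions, not YouTube’s implementation:

```python
import hashlib

def fingerprint(samples, window=5):
    """Hash overlapping windows of a media stream into compact
    fingerprints. A real system would use robust, transformation-
    tolerant audio/video hashing; this toy version hashes raw bytes."""
    prints = set()
    for i in range(len(samples) - window + 1):
        chunk = bytes(samples[i:i + window])
        prints.add(hashlib.sha1(chunk).hexdigest()[:16])
    return prints

class ReferenceIndex:
    """Index of fingerprints from rights holders' reference files."""
    def __init__(self):
        self.index = {}  # fingerprint -> set of reference ids

    def add_reference(self, ref_id, samples):
        for fp in fingerprint(samples):
            self.index.setdefault(fp, set()).add(ref_id)

    def match(self, samples, threshold=0.8):
        """Return reference ids covering at least `threshold` of an
        upload's fingerprints - candidate copyright matches."""
        upload_fps = fingerprint(samples)
        hits = {}
        for fp in upload_fps:
            for ref_id in self.index.get(fp, ()):
                hits[ref_id] = hits.get(ref_id, 0) + 1
        return {ref for ref, n in hits.items()
                if n / max(len(upload_fps), 1) >= threshold}

# Usage: register a hypothetical reference track, then check uploads.
idx = ReferenceIndex()
idx.add_reference("label_song_001", list(range(100)))
print(idx.match(list(range(100))))       # {'label_song_001'}
print(idx.match(list(range(100, 200))))  # set() - no match
```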

What to do about copyright infringements is another issue. Should infringing content be taken down immediately, or should the platform provide choice to copyright owners? Paradigms have shifted over the years in recognition that copyright owners may have different preferences: to enforce a takedown, seek royalties or take no action.

  3. Privacy

A third category of policy issues surrounds managing users’ privacy rights.

First, how can a platform generate advertising revenues and keep its user base engaged, while respecting different preferences for personal privacy? This typically pertains to the practice of combining personal information with search and click history to build a profile of the user, which enables targeted advertising. Information is sometimes sold to third parties.

Second, what does it mean to give people true ‘choice’ when it comes to privacy? Many argue that lengthy privacy agreements, where the only alternative is to quit the app, do not provide people a real choice in privacy.

Third, should individuals have the right to be forgotten online? The EU and Google have been in a lengthy court battle over the right of private citizens to request that search engines delist incorrect, irrelevant or outdated information returned by an online search for their full name – not just in their country of residence, but globally.

  4. Children

Children bring these policy issues into sharper focus, based on notions of age-appropriateness, consent, manipulation and safety. Platforms like Facebook do not allow users under 13. YouTube introduced ‘Restricted Mode’ as well as YouTube Kids, which filter content more strictly than the regular platform.

Similarly, higher standards apply to children’s privacy. Should companies be allowed to build profiles on children, and potentially manipulate them at such a young age? Should people be allowed to remove posts they made or online information about them while they were children?

Safety for children is also a huge issue, particularly on interactive platforms where children can be groomed by predators. Taking privacy considerations into account, how can platforms detect grooming before harm is inflicted, and what is the right course of action?

The YouTube Case Study

This is not an exhaustive list of the policy issues that social media companies deal with, but the broad categories are in place. Now let’s get into the specifics of how social media companies have answered these questions through policy, implementation and resource allocation.

To put some meat on this, here’s a quick case study of YouTube’s approach. There are at least four components:

  1. Product differentiation
  2. Enhancing user choice within existing products
  3. Closing the policy-implementation loop
  4. Strategic communications and advocacy

1. Product differentiation

Product differentiation is one way to cater to different appetites for content and privacy. In 2015, YouTube launched ‘YouTube Kids’, which excludes violence, nudity and vulgar language. It also provides stronger privacy by default, through features such as blocking children from posting content and from viewing targeted ads, and enabling them to view content without signing up for an account. ‘YouTube Red’ offers advertisement-free viewing.

However, product differentiation has its limits because significant resources are required for customization. There is also a slippery slope to avoid: if YouTube rolled out “YouTube China” with far stricter content censorship, imagine the influx of country requests that would ensue!

2. Enhancing user choice within existing products

Providing users choice in their settings is another way to cater to varying preferences within a given product. For example, advertisers on YouTube may have varying appetites for the types of videos their advertisements are shown against. Enabling choice, rather than banning more videos, is key: earlier this year, YouTube introduced features that enabled advertisers to exclude specific sites and channels from all of their AdWords for Video and Google Display Network campaigns, and to manage brand safety settings across all their campaigns at the push of a button.
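
As a toy illustration of what account-wide exclusion settings amount to, here is a minimal sketch; the class, fields and labels are hypothetical stand-ins, not Google’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class BrandSafetySettings:
    """Hypothetical account-wide brand safety configuration."""
    excluded_channels: set = field(default_factory=set)
    excluded_sites: set = field(default_factory=set)
    blocked_labels: set = field(default_factory=set)  # e.g. {"mature"}

def eligible_placements(placements, settings):
    """Filter candidate ad placements against one advertiser's
    exclusions, applied across all of their campaigns at once."""
    return [p for p in placements
            if p["channel"] not in settings.excluded_channels
            and p["site"] not in settings.excluded_sites
            and not (p["labels"] & settings.blocked_labels)]

settings = BrandSafetySettings(excluded_channels={"channel_123"},
                               blocked_labels={"mature"})
placements = [
    {"channel": "channel_123", "site": "youtube.com", "labels": set()},
    {"channel": "channel_456", "site": "youtube.com", "labels": {"mature"}},
    {"channel": "channel_789", "site": "youtube.com", "labels": {"news"}},
]
print(eligible_placements(placements, settings))  # only channel_789 survives
```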

Concerning privacy, users who do not want their personal data and search/click history to be linked can go to the activity controls section of their Google account page and untick the box marked “Include Chrome browsing history and activity from websites and apps that use Google services”. For particular searches, users can also browse in “incognito mode”, which ensures that Chrome will not save their browsing history, cookies and site data, or information entered in forms. These are ways to provide real choices in privacy.

3. Closing the Policy-Implementation Loop

A robust policy defines clear principles which determine when content should be taken down, or excluded from monetization opportunities and Restricted Mode. Implementation then becomes critical. With the large volume of content coming online every minute, it is impossible for YouTube employees to monitor everything: YouTube has to rely on user flagging and machine learning to identify copyright infringements and offensive content.

However, algorithms are not 100% accurate and often cannot explain why they made a decision. A robust appeals and re-evaluation process with humans in the loop is needed to ensure the integrity of the policy. More importantly, the human touch is needed to positively engage content producers (who hate to be censored).
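
A minimal sketch of such a triage loop is below; the thresholds, routing labels and function names are illustrative assumptions, not YouTube’s actual pipeline:

```python
def triage(item_id, classifier_score, flag_count,
           remove_threshold=0.95, review_threshold=0.6):
    """Route content by combining a model score with user flags.
    High-confidence violations are actioned automatically (but remain
    appealable); grey-zone items go to human reviewers; the rest stay up.
    All thresholds here are invented for illustration."""
    if classifier_score >= remove_threshold:
        return "auto_remove"
    if classifier_score >= review_threshold or flag_count >= 3:
        return "human_review"
    return "keep"

def handle_appeal(item_id, original_decision):
    """Appeals always get a human re-evaluation, never the model alone."""
    return {"item": item_id, "appealed": original_decision,
            "route": "senior_reviewer"}

print(triage("video_a", classifier_score=0.97, flag_count=0))  # auto_remove
print(triage("video_b", classifier_score=0.40, flag_count=5))  # human_review
print(triage("video_c", classifier_score=0.20, flag_count=1))  # keep
print(handle_appeal("video_a", "auto_remove"))
```

The design point is that the model never gets the final word in the grey zone: it only auto-actions the clearest cases, and even those remain appealable to a human.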

In my previous jobs, we often quipped: “policy is ops”. There is no point having a perfect policy if enforcement and implementation simply cannot support it. Policy teams need a constant feedback loop with implementation teams, to bridge the ideal with the possible.

4. Strategic communications and advocacy

Finally, robust policy is necessary, but insufficient for social media companies. Strategic communications and advocacy are an absolute must.

  • Public criticism of a company’s policies can negatively impact business – boycotts and greater Government regulation are two examples. YouTube is swimming against a common but simplistic narrative that tech companies simply trade off public interests in privacy and security for business interests such as the growth of advertising revenue.
  • Misperceptions about policies can also have dangerous impacts. A few years ago, Israel’s Deputy Foreign Minister met with YouTube executives, raising the issue of Palestinians leveraging YouTube videos to incite violence against Israel. She later released a statement which inaccurately suggested that Google would collaborate with Israel to take down this content. Google refuted this, but the nuance could have already been lost with segments of the public. YouTube’s policy of neutrality must come across clearly, even as lobby groups try to drag it into their agendas.

The purpose of Strategic Communications is to create a wide circle of advocates around YouTube’s policy stance so that negative press and misperceptions are less likely to take off. Elements of Strategic Communications include:

  • Going beyond the ‘what’ of policy, to the ‘why’. It is important to illuminate the consistent principles behind YouTube’s policy stances, as well as the considerations and trade-offs entailed. Channels such as blog posts enable this, since mainstream media is unlikely to provide the level of nuance needed.
  • Building strategic relationships and advocates. This includes entering into conversations and debates with your most strident critics, and building alliances with third parties who advocate your views.
  • Strong internal communications. Since social media companies themselves are run by an aggregation of people with different beliefs, it is essential that employees do not feel disenfranchised by the company’s policy stance.
  • Providing an alternative narrative. An important point for YouTube to make is that more is at stake than taking down offensive video content. Ultimately, we are all fighting against greater divisiveness and polarization in society. Although some elements of YouTube exacerbate this, YouTube can also make a huge dent in bridging divides. Hence, I love what YouTube is doing with “Creators for Change”, a program that cultivates creators who aim to counter xenophobia, extremism and hate online. These creators are working on web series on controversial issues, as well as educational workshops for students. They are using the YouTube platform to close divides.

Conclusion

It is far too simplistic to say that companies only pursue business interests, leaving Governments to protect public interests. Every new product, including social media platforms, is a double-edged sword, with the potential to bring us closer to or further from where we want to be as a society.

Both Governments and social media companies are trying to push us towards the former. However, Governments will tend to advocate more conservative policies, as their primary objective is to minimize the downside on issues such as national security, privacy and Government legitimacy. Private businesses, on the other hand, manage those downsides while pushing the boundaries on issues such as free speech and revenue generation models.

A natural tension between these two positions is healthy as we decide, as countries and global communities, where we collectively fall on issues. This is how democracy works, after all.

 
