Protection in Perpetuity for the Online Giants?

Earlier this year, social media companies such as Facebook and Twitter started attaching warning labels to posts made on their platforms. The warnings were an initial effort to slow the spread of misinformation. As 2020 progressed, the companies started taking down entire articles and videos deemed misleading.

One example is the cascade of measures that Twitter announced to protect the integrity of the 2020 US election. Between Oct. 27 and Nov. 11, Twitter said, it labeled about 300,000 tweets as containing “disputed and potentially misleading” information about the election. That represented 0.2% of all tweets related to the U.S. election in that time frame.

Facebook also updated its terms of service, announcing that it would now authorize the removal or restriction of content from its social media platforms if that content could have regulatory or legal repercussions for the company: “Effective October 1, 2020, section 3.2 of our Terms of Service will be updated to include: We also can remove or restrict access to your content, services or information if we determine that doing so is reasonably necessary to avoid or mitigate adverse legal or regulatory impacts to Facebook.”

YouTube also removed certain videos of the World Health Organization covering COVID-19. It justified this action by claiming that the removed videos were misleading and that the WHO was contradicting itself as more was learned about the virus through 2020.

By interfering with the content that is published on their platforms and censoring certain stories, these companies are no longer acting solely as distribution platforms but effectively as editorial authorities on a variety of topics.

AI-powered feeds already imply editorial decisions

We have already accepted that these companies “decide” which content we consume: our daily feeds are ordered by algorithmic ranking and amplification rather than shown as a traditional real-time chronological stream. AI-powered “ranking” of data is what made the Facebook and Twitter experiences so engaging and addictive for users, and so profitable for shareholders.

We’ve agreed to this arrangement assuming these companies would be neutral speech platforms and not attempt to influence public opinion by putting their thumb on the scale and unfairly benefitting one side or another. But now, we increasingly see them taking sides on a variety of topics.

A little bit of history

Over a decade ago, user-generated content became a big trend. Companies no longer needed to rely on publishers, since users were creating plenty of relevant content. Platforms played a fundamental role in syndicating and distributing that content. Internet companies positioned themselves as distribution channels with a mission to provide a platform for anyone to publish their content: pictures, videos, blogs, comments, etc.

Despite their continuously growing influence over what is distributed, read and consumed, these internet companies’ responsibility remained minimal.

This was made possible by Section 230

Section 230 of the Communications Decency Act, passed in 1996, is a law designed to shield common carriers and web hosts from legal claims arising from hosting third-party information. It states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The law protects “interactive computer services” from being sued over what users post, effectively making it possible to run a social network, a website like Wikipedia, or a comment section on a news website.

Simply said:

  • If you leave a comment on this blog, I am not responsible for your comment. 
  • The same applies to online media companies: if they host a blog, they are not responsible for the comments. 

Without question, this law greatly aided the development of the Internet by enabling companies and websites to leverage user-generated content.

Should these protected rights remain when user-generated content is pushed by advanced AI technology? 

Section 230 was passed in 1996. Back then, we had no conception of social distribution or AI-based feeds that surface specific content targeted to users’ preferences and greatly increase a user’s engagement on those platforms.

These companies have mastered AI-based algorithms and amplification methods to promote or demote content, keeping us clicking and staying on their sites. Make no mistake: the data and signals they use to promote specific content are the choice of those companies. They build their own algorithms, assemble the datasets that feed those algorithms, and even set the weight of each signal’s influence, i.e. our click history, what we are likely to click on, our profile, our preferences, etc.
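As a rough illustration, the signal-weighting described above can be sketched in a few lines. Everything here is hypothetical: the signal names, the weights and the scoring function are invented for illustration, and real platform ranking systems are large machine-learned models, not hand-set formulas.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_click: float    # model's estimate that the user clicks (0..1)
    matches_interests: float  # overlap with the user's profile (0..1)
    recency: float            # freshness score (0..1)

# The platform, not the user, chooses these weights (hypothetical values).
WEIGHTS = {"predicted_click": 0.5, "matches_interests": 0.3, "recency": 0.2}

def rank_feed(posts):
    """Order posts by a weighted engagement score, highest first."""
    def score(p: Post) -> float:
        return (WEIGHTS["predicted_click"] * p.predicted_click
                + WEIGHTS["matches_interests"] * p.matches_interests
                + WEIGHTS["recency"] * p.recency)
    # Engagement-ranked, not chronological: the weights decide what we see.
    return sorted(posts, key=score, reverse=True)
```

The point of the sketch is that the ordering we see is entirely determined by weights and signals the company selects; change `WEIGHTS` and a different post rises to the top of everyone’s feed.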

For years, they have been “choosing” what content we, our family and our children are being served.

From Content Distribution to Content Editing Platforms

These advanced AI techniques, coupled with the recent prescriptive interference in and censorship of what is published on their platforms, confirm that these internet companies are no longer acting solely as distribution platforms but increasingly as editorial authorities.


Does the End Justify the Means?

With the social media companies’ change of tone, confirmed by the changes in their policies, they are signaling to the world that they are fully prepared to take a position on what is right and wrong for their platforms.

They may be doing so in an attempt to reduce the spread of misinformation and to encourage people to reconsider before amplifying certain posts and tweets. Even so, we need to make sure they are not abusing their legacy privileges under Section 230, and we need to dig deeper into the consequences of having distribution platforms manipulate the content they allow or disallow.

As the saying often attributed to Machiavelli goes, “the end justifies the means.”

In a world that is more and more digital, online communities of influencers are emerging as sources of truth, and their voices will likely only become louder. We need to be wary of echo chambers of opinion, amplified by AI, that are likely to harden our positions. The future of media isn’t new companies but a billion unique voices.

Who moderates is becoming as important as what is created. Removing hate speech and vulgar content and reducing disinformation is noble, even necessary. But beauty is in the eye of the beholder: culture, education and political beliefs naturally introduce bias into opinions.

The days of these companies being shielded under Section 230 are probably numbered…
