The scale and openness of the global internet allow people to post content or engage in activity that may be considered unlawful or offensive. The responsibility for controlling or policing user activity often falls to internet intermediaries in a wide range of situations, including responding to claims of defamation, obscenity, intellectual property infringement or invasion of privacy, or because the content is critical of a government.
This raises a number of important questions, including: Should technological intermediaries be held liable for content posted by their users and other third parties? Under what circumstances – if any – is it appropriate to require or encourage intermediaries to curtail access to such content?
What are “intermediaries”?
Internet intermediaries include entities such as telecommunications carriers, internet service providers (ISPs), websites, social networks and a wide range of other technological entities that play important roles in transmitting information and ideas across the online world. Such entities help facilitate access to content created by others and often provide valuable tools and forums for expression by users.
Intermediaries may be involved in the following ways:
- Network operators and mobile telecommunications providers – Provide physical and technical infrastructure for transmission of information.
- Access providers (e.g. ISPs) – Provide the service of connecting end users to the Internet (usually using their own transmission infrastructure).
- Registries and registrars – Operate top-level domains and sell domain names, respectively, to individuals and businesses.
- Website hosting companies – Rent website space to users for web pages.
- Online service providers – For instance, blog platforms, email service providers, social networking websites, or video/photo hosting sites.
- Internet search engines and portals
- E-commerce platforms and online marketplaces – For instance, eBay and Amazon.
The Issue of Liability
To governments, these intermediaries represent a possible point of control over content and unlawful behavior on the internet. As it currently exists, the internet allows relatively anonymous or pseudonymous speech, which makes it difficult or extremely time-consuming to identify individual users who post illegal or offensive content. Individual perpetrators may also be located outside the jurisdiction of the governments attempting to police them.
By contrast, intermediaries that host or transmit such content are easier to identify and may already be subject to various licensing or regulatory requirements and are more likely to have local operations that make them subject to the government’s jurisdiction. Intermediaries may also present attractive targets for private litigation, as they are far easier to identify and reach than individual users. Many intermediaries may also be more able to pay damages than the individuals who post the content. If the law exposes intermediaries to liability in the form of civil damages, intermediaries will be forced to review and limit user content, as they would if subject to direct government fines.
Approaches to Intermediary Liability
Who is liable for harmful or illegal content? Governments looking to maximize the growth of information and communication technologies tend to limit civil and criminal liability for technological intermediaries. By contrast, the governments of the most internet-restrictive countries often hold intermediaries responsible for illegal content posted by users, turning intermediaries into content gatekeepers and hindering innovation.
There are three main approaches to intermediary liability:
- Expansive protections against liability for intermediaries
- Conditional safe harbor from liability
- Blanket or strict liability for intermediaries
These approaches are discussed in the following sections of this article.
Expansive Protections Against Liability
Section 230 of the US Communications Decency Act essentially protects intermediaries from liability for a wide range of content posted by third parties, though it does not address intellectual property infringement. This protection has helped foster a vibrant and innovative internet industry in the US.
Section 230 grants intermediaries broad immunity and removes most of the risk of potentially significant liability for illegal behavior by users. It protects online services against a wide variety of claims (e.g. negligence, violation of federal civil rights laws and defamation). Without Section 230, open-ended liability risks would dramatically raise entry barriers for new internet services and applications that allow user-generated content, possibly blocking innovation in interactive media. This would also negatively impact free expression.
Section 230 also promotes the ability of intermediaries to engage in voluntary and good-faith content removal, permitting content platforms and social networking sites to experiment with user-driven flagging systems for identifying and removing content that violates community guidelines (e.g. harassment, bullying and sexual content). It also supports intermediaries that voluntarily identify, block and remove obscene material or apparent child abuse images.
Conditional Safe Harbor
With this approach, an intermediary receives protection against liability for user conduct, but only if the intermediary meets certain criteria. This approach attempts to find a middle ground: it recognizes the benefits of liability protection, as in the expansive protections approach above, while at the same time defining certain roles for intermediaries in dealing with unlawful content.
The EU’s E-Commerce Directive (ECD) includes a conditional safe harbor, which applies to a wide range of content and legal claims. India has also adopted a similar framework. Conditional safe harbor regimes usually distinguish between several types of intermediaries, with the conditions for safe harbor eligibility varying according to the category of service an intermediary provides.
The ECD identifies the following categories of service providers:
- Providers of transmission/”mere conduit” functions
- Caching providers
- Hosting providers
- Information location tools
Conditional safe harbor regimes should follow these general principles in order to promote online innovation and minimize risks to free expression:
- Protections must be broadly available to a variety of intermediaries
- Conditions should not be too burdensome
- Any notice-and-takedown regime needs to give intermediaries clear guidance regarding what constitutes a valid notice
- Actions required of intermediaries must be narrowly tailored and proportionate, to protect the fundamental rights of internet users
- Notice-and-takedown must be limited to contexts where illegality is straightforward
- Safeguards are necessary to mitigate risks of abuse
Blanket or Strict Liability
Certain countries broadly impose liability on intermediaries in order to restrict speech. The Chinese government imposes liability for unlawful content on entities at every layer of a communication, from the ISP to the online service provider, website and hosting company. Should any intermediary publish or distribute unlawful content, or fail to sufficiently monitor the use of its services, take down content or report violations, it could face fines, criminal liability, and revocation of its business or media license.
Blanket liability significantly limits the ability of intermediaries to offer innovative services, new platforms for expression and opportunities for participation and interaction among users. This approach also creates strong incentives to closely monitor user activity and to block content that carries any risk of complaint or controversy. Such a regime leads users and service providers to self-censor.
Summary
This article has examined the issue of liability of technological intermediaries in cases where their users or third parties post unlawful or inappropriate content, covering three main approaches to intermediary liability: 1) Expansive protections against liability for intermediaries; 2) Conditional safe harbor from liability; and 3) Blanket or strict liability for intermediaries.
CIPP Exam Preparation
In preparation for the Certified Information Privacy Professional/Information Technology (CIPP/IT), a privacy professional should be comfortable with topics related to this post, including:
- Privacy intersections in the development process (I.B.a.)
- Global resourcing and outsourcing (I.F.b.i.)
- Privacy-enhancing techniques (III.D.)