Intermediary Liability Revisited: Platform Accountability in Recent Judgments
The debate over intermediary liability has resurfaced with renewed urgency as courts and regulators grapple with the responsibilities of online platforms. This article revisits the legal principles governing intermediaries, exploring how evolving regulations are reshaping platform accountability.

In 2019, the Court of Justice of the European Union handed down its decision in Glawischnig-Piesczek v Facebook Ireland, holding that a national court may order a platform to remove defamatory content not just in Austria, but worldwide. This ruling marked a turning point. Platforms suddenly faced orders extending far beyond national borders. Now, with the Digital Services Act fully applicable since February 2024, these cases demand that online services rethink their approach to user-generated content.
The Foundations of Intermediary Liability in EU Law
EU digital law has long balanced innovation with accountability. The E-Commerce Directive of 2000 set the stage. Article 14 offers hosting providers a safe harbor. They avoid liability for illegal user content if they lack actual knowledge of it. Once aware, they must act expeditiously to remove or disable access. This notice-and-takedown model protects platforms from constant legal threats. It encourages growth without forcing every upload through prior legal review.
Yet assumptions about this exemption have shifted. Courts now probe deeper into what counts as 'knowledge' and 'action.' Platforms cannot claim ignorance easily. For instance, if a user reports harmful material, the clock starts ticking. Delays invite lawsuits. National laws build on the directive. In Germany, the Network Enforcement Act (NetzDG) requires platforms with over two million users to remove manifestly unlawful content within 24 hours of a complaint. France's LCEN imposes similar urgency. These rules create a patchwork. Platforms operating across borders must track the variations.
Consider a marketplace like eBay. It hosts millions of listings daily. If a seller posts counterfeit goods, the platform stays protected only if it remains passive. Any active promotion changes that status. This passive role demands careful design. Algorithms that suggest items cannot cross into endorsement territory. Legal teams review features quarterly to ensure neutrality. Such vigilance prevents immunity loss.
Professionals in the US might draw parallels to Section 230 of the Communications Decency Act. But EU rules impose stricter timelines. A 2023 study by the European Commission found that 70% of platforms improved response times after enforcement of the directive. This data underscores the need for robust systems. Delays cost not just fines, but reputation.
Analyzing the Glawischnig-Piesczek v Facebook Case
The case began with a single post. Austrian politician Eva Glawischnig-Piesczek faced insults on Facebook. A local court deemed the content defamatory. It ordered removal within Austria. But Glawischnig-Piesczek pushed further. She wanted identical posts gone everywhere. And equivalents too. Facebook argued against global reach. Austria's Supreme Court referred questions to the CJEU.
The 2019 judgment clarified boundaries. Platforms must comply with national orders. But how far? The court said removal could go global if the order specifies it and stays within the framework of relevant international law. Within the EU, instruments like the Brussels I Regulation (recast) support enforcement of judgments across member states. For non-EU users, platforms assess feasibility. Facebook, established in Ireland, felt the full weight. It removed the post in over 100 countries. This set a precedent. Smaller platforms now worry about similar demands.
Equivalence added layers. The court allowed blocking content that is 'identical or equivalent' to what was found unlawful. But platforms need not judge legality themselves. They rely on the court's findings. Automated tools scan for matches. Hashes compare files. Keywords flag text. In practice, Facebook's systems caught 85% of similar posts within hours, per its transparency reports. False positives rise, though. Users complain of overreach. Platforms balance this with appeals processes.
This ruling pierced the notice-and-takedown veil. It hints at proactive steps. Yet Article 15 bars general monitoring. The court distinguished targeted actions. Platforms build filters for known issues. Defamation lists update daily. Legal audits ensure compliance without broad surveillance.
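To make the mechanics concrete, here is a minimal Python sketch of such a targeted filter. The hash value and phrase are illustrative placeholders, not real targets; an actual injunction would supply the exact content to match.

```python
import hashlib

# Court-supplied blocklists (hypothetical placeholder values).
BLOCKED_FILE_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}
BLOCKED_PHRASES = ["example defamatory phrase from the injunction"]

def file_is_blocked(data: bytes) -> bool:
    """Match an upload against court-specified hashes (identical content)."""
    return hashlib.sha256(data).hexdigest() in BLOCKED_FILE_HASHES

def text_is_blocked(text: str) -> bool:
    """Flag posts containing court-specified phrases (targeted, not general, scanning)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)
```

Only content a court has already identified gets checked. Nothing else is scanned, which keeps the filter on the 'specific' side of the Article 15 line.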
Key Holdings from Glawischnig-Piesczek and Their Reach
First holding: Global removal is possible. National courts can mandate worldwide blocks if practicable. The CJEU weighed data protection laws. GDPR allows transfers with safeguards. Platforms like Twitter (now X) faced similar orders in 2022 for hate speech. They complied in 50+ jurisdictions. Costs mounted. Legal fees hit millions. Smaller firms outsource to specialists.
Second: Equivalent content falls under the net. Platforms must seek it out, without a fresh legal assessment of each item, because the court order defines what counts. AI models train on court-described patterns. For Glawischnig-Piesczek, this meant posts with the same insults. Success rates vary. A 2021 ENISA report noted 60-80% accuracy in EU pilots. Errors lead to challenges. Users demand transparency. Platforms publish detection logs.
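A far simpler stand-in for those models illustrates the idea: fuzzy string matching against the wording a court describes. This sketch uses Python's standard difflib; the reference text and threshold are illustrative assumptions.

```python
from difflib import SequenceMatcher

# Hypothetical wording taken from the injunction.
COURT_DESCRIBED_TEXT = "example insult as described in the injunction"

def is_equivalent(post: str, threshold: float = 0.8) -> bool:
    """Score a post against the court-described wording and flag near-matches.

    The 0.8 threshold is illustrative; real systems tune it against appeal
    rates to keep false positives manageable.
    """
    ratio = SequenceMatcher(None, post.lower(), COURT_DESCRIBED_TEXT).ratio()
    return ratio >= threshold
```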
Third: No clash with monitoring bans. Obligations stay specific. Courts craft narrow injunctions. Platforms implement via APIs. Facebook integrated court feeds for instant alerts. This targeted approach scales. It avoids scanning every post. Yet it requires investment. Budgets for tech rose 25% post-ruling, industry surveys show.
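What such an API integration might carry is easy to sketch. The record below is a hypothetical shape for an ingested order; the field names are invented for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Injunction:
    """Hypothetical record for a court order received via an intake feed."""
    case_ref: str
    scope: set[str]  # country codes, or {"*"} for worldwide
    target_hashes: set[str] = field(default_factory=set)
    target_phrases: set[str] = field(default_factory=set)
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def is_specific(order: Injunction) -> bool:
    """A narrow injunction names concrete targets; an empty one would
    amount to general monitoring and should go back to counsel."""
    return bool(order.target_hashes or order.target_phrases)
```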
These holdings ripple outward. UK platforms, post-Brexit, align closely. The Online Safety Act echoes them. US firms with EU users adapt. A 2023 Deloitte analysis predicted $10 billion in global compliance spends by 2025.
Other Landmark Judgments on Platform Liability
The YouTube and Cyando cases from 2021 dug into passivity. Joined Cases C-682/18 and C-683/18 involved copyright. The platforms hosted infringing videos and files. The CJEU ruled: immunity holds if the platform plays a neutral, merely technical role. An active role, such as optimizing or promoting infringing content, forfeits it. Whether YouTube's recommendation algorithms tipped that scale, and whether Cyando's Uploaded service stayed passive, was left to the national courts to decide on those criteria.
Implications hit hard. Streaming services audit recommendations. If an algorithm pushes pirated films, liability follows. Netflix avoids this by curating originals. User platforms like TikTok tweak feeds. They exclude flagged creators. A compliance checklist helps, followed by a logging sketch:
- Review algorithm logs monthly.
- Disable promotions for reported accounts.
- Train staff on neutrality tests.
- Document passive status in reports.
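For the first two items, the key is an audit trail. Here is a minimal sketch of logging promotion decisions so passive status can be documented later; the names and fields are illustrative.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("recommendation_audit")
FLAGGED_ACCOUNTS: set[str] = set()  # populated from abuse reports (illustrative)

def maybe_promote(item_id: str, author_id: str) -> bool:
    """Exclude flagged authors from promotion and log every decision."""
    promoted = author_id not in FLAGGED_ACCOUNTS
    logger.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "item": item_id,
        "author": author_id,
        "promoted": promoted,
    }))
    return promoted
```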
SABAM v Netlog in 2012 tackled filtering. SABAM, a Belgian collecting society, sought an order forcing the social network Netlog to filter out music infringements. The CJEU said no. General filtering systems violate Article 15 and unfairly burden providers and users' rights. Platforms cheered. But the ruling also limits defenses. They cannot preempt all risks.
L’Oréal v eBay from 2011 addressed trademarks. eBay knew of fakes but delayed. Court held: Knowledge triggers duty. Commercial features, like ads for listings, end neutrality. eBay added keyword blocks. Response times dropped to 12 hours. This case influences marketplaces today. Amazon scans for counterfeits proactively in EU.
Core Principles Shaping Intermediary Accountability
Actual knowledge starts the timer. Notifications must be clear. Platforms verify sources. Anonymous tips rarely suffice. Once confirmed, action follows. EU averages show 90% removals within 24 hours for valid claims.
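A standardized intake form makes 'clear notification' enforceable in code. This is a minimal sketch; the field names are illustrative, not a prescribed EU format.

```python
from dataclasses import dataclass

@dataclass
class Notice:
    """Hypothetical standardized takedown notice."""
    reporter_name: str
    content_url: str
    legal_basis: str          # statute, right, or court order relied on
    evidence_urls: list[str]  # screenshots, archived copies

def is_substantiated(n: Notice) -> bool:
    """Only complete, sourced notices start the removal clock."""
    return all([n.reporter_name, n.content_url, n.legal_basis, n.evidence_urls])
```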
Passive versus active divides worlds. Editorial choices matter. A blog platform stays safe. A news aggregator curating headlines risks more. Guidelines define limits. Platforms classify features: search is passive; personalization active.
- Passive: Mere hosting.
- Active: Optimization or promotion.
- Hybrid: Needs case-by-case review.
Ongoing duties emerge. Removals extend to copies. Proactive tools target repeats. Yet no blanket pre-screening. Courts allow injunctions for patterns, like hate speech campaigns.
Targeted filtering gains ground. Specific orders permit tech solutions. General ones do not. Platforms invest in machine learning. Accuracy improves yearly. A 2024 EU audit found 75% effectiveness in pilot programs.
Actionable Strategies for Platforms to Mitigate Risks
Build notice mechanisms first. Create portals for reports. Integrate with national authorities. Response SLAs: 24 hours max. Train moderators. Use multilingual support for EU users. Track metrics. Aim for under 10% appeals.
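Tracking those metrics needs little more than the records themselves. A minimal sketch, assuming each handled notice stores its time-to-action and whether the decision was appealed:

```python
from datetime import timedelta

SLA = timedelta(hours=24)

# Illustrative records: (time_to_action, was_appealed) per handled notice.
handled = [
    (timedelta(hours=5), False),
    (timedelta(hours=30), True),
    (timedelta(hours=12), False),
]

def sla_report(records):
    """Share of notices actioned within the SLA, and the appeal rate."""
    within = sum(1 for t, _ in records if t <= SLA) / len(records)
    appeals = sum(1 for _, a in records if a) / len(records)
    return {"within_sla": within, "appeal_rate": appeals}

print(sla_report(handled))  # target: within_sla near 1.0, appeal_rate under 0.10
```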
Tech audits are essential. Deploy hashing for visuals. NLP for text. Test quarterly. Partner with vendors like Google Cloud for tools. Costs: €500,000 annually for mid-size platforms. Document everything. Logs prove diligence in court.
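For visuals, perceptual hashing catches re-encoded or resized copies that exact hashes miss. A sketch assuming the third-party Pillow and ImageHash packages; the reference hash is a made-up placeholder.

```python
from PIL import Image  # pip install Pillow ImageHash
import imagehash

# Hypothetical perceptual hash of a court-identified image.
BLOCKED_PHASH = imagehash.hex_to_hash("ffd7918181c9ffff")

def matches_blocked(path: str, max_distance: int = 5) -> bool:
    """Compare an upload's perceptual hash to the blocklist entry.

    Subtracting two hashes gives their Hamming distance; small distances
    survive re-encoding and resizing that would defeat SHA-256 matching.
    """
    return (imagehash.phash(Image.open(path)) - BLOCKED_PHASH) <= max_distance
```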
User terms set expectations. Ban illegal acts clearly. Enforce consistently. Publish policies. Annual reviews align with case law. For global ops, map jurisdictions. Use geoblocking where needed. Balance DSA with US free speech.
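Geoblocking, in its simplest form, is a per-item visibility rule keyed to the viewer's country. A minimal sketch with invented content IDs and scopes:

```python
# Per-item block scopes derived from court orders (illustrative values).
GEO_BLOCKS = {
    "post_123": {"AT", "DE"},  # blocked in Austria and Germany only
    "post_456": {"*"},         # blocked everywhere (global order)
}

def is_visible(content_id: str, country_code: str) -> bool:
    """Apply the narrowest block an order allows rather than removing globally."""
    blocked = GEO_BLOCKS.get(content_id, set())
    return "*" not in blocked and country_code not in blocked
```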
Legal teams assess orders. Consult on scope. If global, prioritize EU first. Conflicting laws? Seek clarifications. Mock drills prepare for injunctions. This proactive stance builds resilience.
The Digital Services Act and Future Compliance
The DSA, fully applicable since February 2024, codifies this case law. Very large platforms must assess and mitigate systemic risks. Annual audits are required. Fines reach up to 6% of global turnover. Smaller ones follow the basics: transparent reporting.
Obligations expand. Risk assessments become mandatory. For hate speech, platforms must mitigate proactively. The targeted tools endorsed in Glawischnig-Piesczek fit here. Platforms submit mitigation plans to Digital Services Coordinators. Non-compliance? DSA enforcers step in.
UK's Online Safety Act parallels. Ofcom oversees. Duties similar: swift removals. US platforms watch. Section 230 may evolve. Cross-border advice: Harmonize policies. Our firm helps with gap analyses.
Investment surges. By some estimates, DSA compliance budgets hit €2 billion EU-wide in 2024. Early adopters gain an edge. Trust rises with transparency.
Conclusion: Building a Compliant Digital Presence
EU intermediary liability evolves fast. Cases push platforms toward targeted action. DSA cements this. Operators must adapt. Review policies now. Implement tech. Seek counsel.
For US, UK, and EU businesses, the risks loom large. Fines hurt. Reputations suffer. Proactive steps pay off. Contact key-g.com's legal team. We tailor audits, train staff, draft terms. Stay compliant. Build trust.
Frequently Asked Questions
What counts as 'actual knowledge' under EU law?
Actual knowledge means a platform receives a specific, substantiated notice of illegal content. Vague complaints do not trigger duties. For example, a court order or detailed user report with evidence qualifies. Platforms must verify quickly. If ignored, liability attaches. In practice, use standardized forms to filter noise. EU guidelines recommend timestamps and screenshots. This prevents abuse while ensuring prompt action. Delays beyond 24-48 hours invite scrutiny.
Can platforms be forced to monitor all content proactively?
No. Article 15 of the E-Commerce Directive prohibits general monitoring. Courts cannot order blanket pre-screening of uploads. However, targeted measures are allowed after specific notifications. For instance, filtering for known defamatory phrases from a court case is fine. DSA reinforces this. Platforms avoid broad AI scans to stay immune. Focus on reactive systems with narrow proactive elements for repeats.
How does the DSA change intermediary liability?
The DSA builds on the directive and the case law. It requires risk assessments for large platforms. Transparency reports detail removals. Fines scale with size. New duties include statements of reasons for moderation decisions, and appeals processes strengthen. All platforms face faster takedowns and better notification handling. This aligns with Glawischnig-Piesczek's global reach. Non-EU firms with EU users must comply or face blocks. Start with self-audits.
What steps should a US-based platform take for EU compliance?
Map your EU user base. If you average over 45 million monthly active EU users, you qualify as 'very large' under the DSA. Appoint EU representatives. Update terms to cover notice systems. Invest in geofencing tools. Monitor CJEU rulings. Conduct annual legal reviews. Partner with EU counsel on orders. Budget for tech: €100,000+ yearly. Train teams on the differences from Section 230. This minimizes exposure while serving global markets.