
The EU AI Act and Algorithmic Governance in Online Marketplaces

By Alexandra Blake, Key-g.com
6 min read
Legal consulting
April 09, 2025

In the digital corridors of the EU, regulators are sharpening their pencils—and their legal teeth. At the center of this regulatory wave is the EU Artificial Intelligence Act, the world’s first major legislation aimed at taming the wild west of artificial intelligence. And if you think this is just a matter for robot developers in labs, think again. Online marketplaces—yes, those slick platforms that serve you oddly perfect product recommendations and price suggestions—are front and center in this legal evolution.

The EU AI Act (AIA) aims to create a future where AI is safe, transparent, and respectful of fundamental rights. That’s a noble goal. But for platform operators, it translates to a host of new obligations, especially when algorithms make decisions that impact users, sellers, or markets. In this article, we explore what the AI Act means for online marketplaces, how it intersects with existing laws like the DSA and GDPR, and how to stay on the compliant—and competitive—side of this fast-evolving landscape.

What is the EU AI Act?

The EU AI Act is a landmark piece of legislation proposed by the European Commission in 2021; it entered into force in August 2024, with its obligations phasing in over the following years. Its aim? To regulate AI systems based on their risk levels:

  • Unacceptable risk AI systems (e.g., social scoring by governments) are banned.
  • High-risk systems (e.g., credit scoring, CV screening) face strict requirements.
  • Limited and minimal risk systems (e.g., spam filters, chatbots) must meet transparency standards.

Marketplaces that use AI to rank search results, match products to users, or detect fraud may fall into the limited or high-risk categories, depending on how deeply those systems influence users’ rights or livelihoods.
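To make that triage concrete, here is a minimal sketch in Python. The `RiskTier` enum and the two-flag `triage` function are hypothetical simplifications for illustration, not terminology from the Act itself; a real classification requires legal review against the Act's high-risk list, not a heuristic.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., social scoring)
    HIGH = "high"                  # strict requirements apply
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # no specific obligations

def triage(system_name: str, affects_rights: bool, affects_livelihood: bool) -> RiskTier:
    """First-pass triage only: flag as a high-risk candidate any system
    whose decisions touch users' rights or livelihoods."""
    if affects_rights or affects_livelihood:
        return RiskTier.HIGH
    return RiskTier.LIMITED

# A tool that suspends seller accounts touches livelihoods, so it surfaces
# as a high-risk candidate for proper legal assessment.
tier = triage("seller account suspension", affects_rights=True, affects_livelihood=True)
```

Even a crude first pass like this is useful: it turns "we probably use some AI somewhere" into a concrete list of systems queued for legal review.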

How AI Shows Up on Marketplaces

If your platform uses machine learning to:

  • Suggest products based on past behavior
  • Adjust prices dynamically
  • Filter or demote low-quality listings
  • Moderate reviews or detect fake accounts

Then congratulations: you’re using AI—and you may need to rethink how it’s governed. The more automated your decision-making process, the closer regulators will look.

AI governance doesn’t only apply to humanoid robots or self-driving cars. It also applies to the recommendation system that nudges users to buy one phone case over another or to the fraud detection tool that quietly suspends a seller’s account overnight.

Key Obligations for Online Marketplaces Under the AI Act

  1. Risk Classification: Platforms must assess whether their AI tools qualify as high-risk systems, especially if they impact consumers’ legal rights, financial stability, or access to economic opportunities. An AI that filters or blocks seller accounts based on automated behavioral patterns may very well qualify.
  2. Transparency Requirements: Even if an AI system is considered low risk, platforms must inform users that they’re interacting with or being affected by AI. This includes product recommendations, price changes, and ranking mechanisms. No more hiding the algorithm behind the curtain like it’s the Wizard of Oz.
  3. Data Governance and Quality: High-risk AI systems must be trained on high-quality, relevant, and bias-free datasets. That means platforms will need to audit their training data and eliminate patterns that could lead to discriminatory outcomes. If your product recommendation engine thinks women only want pink gadgets or assumes sellers from one country are more fraudulent, it’s time for a rethink.
  4. Human Oversight: Automated systems must include human checks—especially before making impactful decisions like delisting a seller, rejecting a listing, or flagging user behavior. Regulators are increasingly wary of “black box” systems that can’t be explained or challenged. You can’t just blame it on the algorithm anymore.
  5. Robust Documentation: Platforms using high-risk AI systems must maintain detailed technical documentation, risk assessments, and logs of system performance. Think of it as a user manual for regulators—and yes, it better be more helpful than the one that came with your Wi-Fi router.
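Obligations 4 and 5 meet in the audit trail: every impactful automated decision should leave a record of what was decided, why, and whether a human reviewed it. A minimal sketch, with hypothetical field names and system names chosen for this example:

```python
import json
from datetime import datetime, timezone

def log_decision(system: str, subject_id: str, outcome: str,
                 reason: str, human_reviewed: bool) -> str:
    """Serialize one automated decision as an append-only audit record,
    ready to be written to durable storage and produced on request."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "subject_id": subject_id,
        "outcome": outcome,
        "reason": reason,
        "human_reviewed": human_reviewed,
    }
    return json.dumps(entry)

# A suspension that skipped human review is exactly the kind of record
# a regulator (or an appealing seller) will ask to see.
record = log_decision("fraud-detector", "seller-123", "account_suspended",
                      "refund velocity anomaly", human_reviewed=False)
```

The point is not the serialization format but the discipline: a `human_reviewed: false` entry on a high-impact decision is a compliance gap you can now find, rather than one you discover during an investigation.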

The Intersection with DSA and GDPR

If you’re thinking, “Wait, we already have to comply with the Digital Services Act (DSA) and the General Data Protection Regulation (GDPR)—now this too?” You’re not alone. But the key is understanding how these three frameworks interact.

  • GDPR governs the processing of personal data, including profiling.
  • DSA governs platform accountability, including algorithmic transparency and content moderation.
  • AI Act governs the design, deployment, and risk management of AI systems.

In practice, a single algorithm may trigger all three laws. A recommendation system, for instance, might:

  • Collect behavioral data (GDPR)
  • Rank content in ways that affect visibility (DSA)
  • Make high-risk decisions requiring transparency and oversight (AI Act)

That’s a three-layer legal sandwich.

What About Third-Party AI?

Many platforms integrate third-party AI services, such as fraud-detection APIs or personalization engines built by vendors. The AI Act holds platforms accountable not only for what they build, but also for what they use. If your third-party tool misbehaves, it’s your compliance problem too.

That means platforms must:

  • Vet vendors carefully
  • Review their documentation
  • Contractually guarantee compliance and audit rights

Just because you didn’t write the algorithm doesn’t mean you can look the other way.

How to Prepare: A Practical Checklist

  • Map your AI systems: Identify every place AI is used, directly or through third parties.
  • Classify the risk: Use the AI Act’s categories to understand your compliance burden.
  • Audit your data: Eliminate bias, verify quality, and document sources.
  • Add transparency notices: Let users know when they’re being nudged by algorithms.
  • Design human-in-the-loop processes: Allow appeals, manual review, and overrides.
  • Keep logs and documentation: If regulators come knocking, be ready to explain how your system works.
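The first two checklist items can start as something as simple as an inventory kept in code or a spreadsheet. A sketch, with illustrative system names and fields of my own choosing:

```python
# Hypothetical inventory: one entry per place AI is used, noting whether it
# is built in-house or supplied by a vendor, and whether users see its output.
inventory = [
    {"system": "search ranking",      "owner": "in-house", "user_facing": True},
    {"system": "dynamic pricing",     "owner": "in-house", "user_facing": True},
    {"system": "fraud detection API", "owner": "vendor",   "user_facing": False},
]

# Vendor-supplied systems are the ones that need vetting, documentation
# review, and contractual audit rights.
third_party = [row["system"] for row in inventory if row["owner"] == "vendor"]

# User-facing systems are the ones that need transparency notices.
needs_notice = [row["system"] for row in inventory if row["user_facing"]]
```

Two list comprehensions later, you know exactly which systems trigger the third-party obligations and which need a transparency notice, which is far better than finding out from a regulator.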

Final Thoughts

The EU AI Act is a game changer for online marketplaces. It demands not just compliance but a cultural shift: from optimizing solely for engagement and conversions to designing systems that are explainable, fair, and accountable. That doesn’t mean platforms must abandon AI. But it does mean treating it not as a black box of magic, but as a regulated tool with real-world consequences.

So if your algorithm decides who sees what, who gets to sell, or who gets banned, it’s time to step out from behind the curtain. The future of trustworthy AI depends not just on smarter code, but on smarter governance.