Study: AI in Europe Is Gradually Becoming Over-Regulated (www.technologylaw.ai)

🤖 AI Summary
A study prepared for the European Parliament's ITRE committee finds that the EU's landmark Artificial Intelligence Act, while ambitious and rights-based, is becoming entangled with existing EU digital laws (GDPR, Data Act, Cyber Resilience Act, DSA/DMA, NIS2), creating overlapping obligations that risk stifling innovation and disproportionately burdening smaller AI providers. The report warns that the AI Act's tiered, risk-based model (banned practices → high-risk systems → transparency duties for low/medium risk → special rules for general-purpose AI) mirrors product-safety regimes but extends them into domains, such as human-rights compliance and systemic risk, that are hard to audit. The likely result: duplicated assessments (e.g., FRIA vs. DPIA), multiplied cybersecurity and data-sharing duties, and uneven enforcement across Member States.

Technically, high-risk systems and GPAI models face heavy compliance obligations: risk management, data governance, documentation, human oversight, robustness, cybersecurity, and CE marking. GPAI providers additionally face transparency, copyright, and enhanced cybersecurity duties, with a compute-based presumption of systemic importance.

The study urges immediate practical fixes (joint guidance, mutual recognition of equivalent assessments, aligned sandboxes, and cross-authority cooperation) as well as modest legislative clarifications (definitions of high risk, interoperability of cybersecurity rules, clear supply-chain responsibilities). Without better coordination and capacity building, notably for market surveillance bodies and the European AI Office, Europe risks enforcing a coherent values framework at the cost of competitiveness and practical AI experimentation.
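For readers unfamiliar with the Act's mechanics, here is a minimal sketch, not from the study, of how the tiered model and the compute-based presumption could be expressed in code. The 10^25 FLOPs threshold is the figure in Article 51(2) of the AI Act; the tier names, the `classify` helper, and its decision order are illustrative simplifications, not official terms.

```python
# Sketch only: a toy classifier mirroring the AI Act's tiers.
# Assumptions are noted inline; nothing here is legal advice.

from enum import Enum


class Tier(Enum):
    PROHIBITED = "banned practice"
    HIGH_RISK = "high-risk system"
    TRANSPARENCY = "transparency duties (low/medium risk)"
    MINIMAL = "minimal risk"


# Art. 51(2): a GPAI model is presumed to have high-impact
# capabilities (systemic risk) above this cumulative training compute.
SYSTEMIC_RISK_FLOPS = 10**25


def gpai_presumed_systemic(training_compute_flops: float) -> bool:
    """Compute-based presumption of systemic importance for GPAI."""
    return training_compute_flops > SYSTEMIC_RISK_FLOPS


def classify(banned: bool, high_risk_use: bool,
             interacts_with_people: bool) -> Tier:
    # Hypothetical decision order reflecting the Act's hierarchy:
    # prohibited practices first, then high-risk uses, then
    # transparency duties for systems that interact with people.
    if banned:
        return Tier.PROHIBITED
    if high_risk_use:
        return Tier.HIGH_RISK
    if interacts_with_people:
        return Tier.TRANSPARENCY
    return Tier.MINIMAL


if __name__ == "__main__":
    # A frontier model trained with ~5e25 FLOPs trips the presumption.
    print(gpai_presumed_systemic(5e25))  # True
    print(classify(banned=False, high_risk_use=True,
                   interacts_with_people=True))  # Tier.HIGH_RISK
```

In practice the high-risk determination turns on the Annex III use cases and much contested interpretation, which is exactly the definitional ambiguity the study asks legislators to clarify.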