🤖 AI Summary
The European Parliament adopted a non‑legislative report (483 for, 92 against, 86 abstentions) urging ambitious EU action to protect minors online, including a harmonised minimum age of 16 for access to social media, video platforms and AI companions (access from 13 to 16 allowed with parental consent). MEPs call for stricter enforcement of the Digital Services Act with fines, platform bans and possible personal liability for senior managers, plus a ban on the most harmful addictive practices (infinite scroll, autoplay, reward loops), on engagement‑based recommender systems for minors, on loot boxes and on commercial exploitation such as “kidfluencing.” The report also pushes urgent regulation of generative‑AI harms — deepfakes, AI nudity apps and companionship chatbots — and supports an EU age‑verification app/eID wallet that must be accurate and privacy‑preserving. It cites research showing that 97% of young people go online daily and that one in four minors shows “problematic” smartphone use.
For the AI/ML community this signals major technical and product shifts: recommender systems and ad‑targeting pipelines will need redesigns or opt‑out modes for under‑16s, and platform UX must default‑disable addictive mechanics. Age assurance requires robust, privacy‑preserving verification (e.g., cryptographic eID, differential privacy or zero‑knowledge proofs), and verification and moderation tooling will need to scale under tighter DSA enforcement. Generative‑AI deployments face new constraints — higher liability, mandatory detection/watermarking and stricter content policies — pushing development of watermarking, deepfake detection, safer dataset practices and compliance tooling as core engineering priorities.
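To make the product implications concrete, below is a minimal, hypothetical Python sketch of what an age‑gated "restricted mode" might look like: it assumes a `verified_age` attribute delivered by some external, privacy‑preserving age‑assurance step (e.g. an eID wallet), and it default‑disables autoplay, infinite scroll and engagement‑based ranking for minors. All names (`User`, `feature_flags`, `rank_feed`) are illustrative, not any platform's actual API or the report's prescribed mechanism.

```python
from dataclasses import dataclass
from typing import Dict, List

# Thresholds taken from the report: 16 by default, 13-15 only with parental consent.
MIN_AGE_DEFAULT = 16
MIN_AGE_WITH_CONSENT = 13

@dataclass
class User:
    user_id: str
    verified_age: int          # assumed output of an external age-assurance check
    parental_consent: bool = False

@dataclass
class Item:
    item_id: str
    engagement_score: float    # engagement-based ranking signal
    published_at: float        # unix timestamp, used for the chronological fallback

def may_access(user: User) -> bool:
    """Whether the user may access the service at all under the proposed rules."""
    if user.verified_age >= MIN_AGE_DEFAULT:
        return True
    return user.verified_age >= MIN_AGE_WITH_CONSENT and user.parental_consent

def is_minor_profile(user: User) -> bool:
    """True if the user must get the restricted (non-addictive) experience."""
    return user.verified_age < MIN_AGE_DEFAULT

def feature_flags(user: User) -> Dict[str, bool]:
    """Default-disable addictive mechanics and targeting for minors."""
    minor = is_minor_profile(user)
    return {
        "autoplay": not minor,
        "infinite_scroll": not minor,
        "engagement_ranking": not minor,
        "personalised_ads": not minor,
    }

def rank_feed(user: User, items: List[Item]) -> List[Item]:
    """Engagement ranking for adults, reverse-chronological feed for minors."""
    if feature_flags(user)["engagement_ranking"]:
        return sorted(items, key=lambda i: i.engagement_score, reverse=True)
    return sorted(items, key=lambda i: i.published_at, reverse=True)

if __name__ == "__main__":
    items = [
        Item("a", engagement_score=0.9, published_at=1_700_000_000),
        Item("b", engagement_score=0.2, published_at=1_700_100_000),
    ]
    teen = User("u1", verified_age=14, parental_consent=True)
    adult = User("u2", verified_age=32)
    print(may_access(teen), [i.item_id for i in rank_feed(teen, items)])   # True ['b', 'a']
    print(may_access(adult), [i.item_id for i in rank_feed(adult, items)]) # True ['a', 'b']
```

The key design choice in such a gate is that the platform only ever sees a verified age (or, better, an "over/under threshold" attestation from a zero‑knowledge or eID scheme) rather than identity documents, which is what "accurate and privacy‑preserving" verification would require in practice.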