🤖 AI Summary
LinkedIn announced it will begin using member data — including profiles, posts, resumes and other public activity — to train its AI models starting November 3, 2025. The change affects users in the EU, EEA, Switzerland, Canada and Hong Kong and will be enabled by default; members will need to opt out via the “Data for Generative AI Improvement” toggle in Settings. Crucially, opting out is prospective only: data collected before you opt out may still be retained and used in training. LinkedIn says under‑18s are excluded, and users can also submit a formal objection via a Data Processing Objection form. Microsoft/LinkedIn rely on a “legitimate interest” legal basis to justify default opt‑in.
For the AI/ML community this matters on two fronts: scale and governance. Access to large volumes of real user content can meaningfully improve domain‑specific models for recruiting, career guidance, and professional content generation, but it raises consent, provenance, and bias concerns: training on public professional records may amplify existing inequities or expose sensitive information. The default opt‑in and the retention of previously collected data reduce individual control and could draw regulatory scrutiny similar to recent cases around Meta's use of user data for training. Practitioners and privacy teams should note the legal framing, the regions affected, and the limited scope of the opt‑out when assessing compliance, dataset curation, and model documentation.