LinkedIn, the widely used professional networking platform, has quietly introduced a controversial update that automatically opts users into having their personal data used to train generative AI models. The new setting allows LinkedIn to leverage user-generated content, such as posts, messages, and other activity on the platform, to improve its AI-driven tools and services.
No Prior Notice or Consent
What has caused concern is LinkedIn’s approach to implementing this update. The company rolled out the feature without making any prior announcements or seeking explicit consent from its users. The automatic opt-in setting is raising questions about transparency, data privacy, and user autonomy.
While generative AI increasingly powers features such as chatbots, content generation, and personalisation, LinkedIn’s lack of upfront communication about how personal data would be used has led to growing frustration among users.
How to Opt Out of LinkedIn’s Data for AI Training
Although users are automatically enrolled in this program, LinkedIn has provided a way for individuals to opt out. To do so, users can follow these steps:
- Go to Account Settings: Open LinkedIn and navigate to the settings menu.
- Locate the Data Privacy Section: Within the settings, select the “Data Privacy” tab.
- Select ‘Data for Generative AI Improvement’: Open the setting with this label.
- Disable the Feature: Toggle the option off to stop LinkedIn from using your data for future AI training.
It’s important to note that opting out only stops LinkedIn from using your data in the future. It does not undo the use of any information that has already been processed for AI training.
Privacy Concerns and Lack of Transparency
LinkedIn’s decision to implement this setting without notifying users has raised concerns about privacy and transparency. Users argue that being automatically opted into such programs without their informed consent infringes on their privacy rights.
While LinkedIn has updated its privacy policy to include details about how personal data may be used for AI training, the lack of clear communication has left many feeling uneasy. The updated policy now explicitly states that the platform can use personal data to develop AI-driven services and insights.
LinkedIn’s Privacy Protection Measures
In response to concerns, LinkedIn has stated that it employs privacy-enhancing technologies to protect user data during the AI training process. These technologies are designed to anonymise or redact personal information from the data used in its AI models. However, despite these assurances, some users remain sceptical, especially given the broader concerns surrounding data privacy in the digital age.
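LinkedIn has not disclosed how these privacy-enhancing technologies work internally. As a rough, hypothetical illustration of what automated redaction can look like in general, the minimal Python sketch below strips email addresses and phone numbers from a piece of text before it would ever reach a training pipeline. The patterns, function name, and placeholder tokens are illustrative assumptions, not a description of LinkedIn’s actual system.

```python
import re

# Illustrative patterns only: production redaction systems typically rely on
# far more sophisticated detection (e.g. named-entity recognition models),
# not simple regular expressions.
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_PATTERN = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_personal_info(text: str) -> str:
    """Replace obvious personal identifiers with placeholder tokens."""
    text = EMAIL_PATTERN.sub("[EMAIL]", text)
    text = PHONE_PATTERN.sub("[PHONE]", text)
    return text

if __name__ == "__main__":
    post = "Reach me at jane.doe@example.com or +1 (555) 010-2030 about the role."
    print(redact_personal_info(post))
    # -> Reach me at [EMAIL] or [PHONE] about the role.
```

Even with this kind of redaction in place, anonymisation is rarely foolproof, which helps explain why some users remain sceptical and prefer to opt out entirely rather than rely on technical safeguards.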
Regional Exemptions for EU Users
Notably, LinkedIn has confirmed that users residing in the European Union (EU), European Economic Area (EEA), or Switzerland are not affected by this policy. Due to the stricter data protection regulations in these regions, particularly under the General Data Protection Regulation (GDPR), LinkedIn cannot use personal data for AI training without explicit consent.
However, for users outside these regions, including those in the United States and across Africa, Asia, and elsewhere, the onus is on each individual to opt out manually if they do not want their data used for generative AI purposes.
Other Data-Driven Activities on LinkedIn
LinkedIn’s use of personal data extends beyond generative AI. The platform also relies on machine learning for other tasks, such as personalising content feeds and moderating inappropriate or harmful posts. Opting out of the generative AI training does not prevent LinkedIn from using your data for these purposes.
For users seeking broader data protection, LinkedIn offers a Data Processing Objection Form. By submitting this form, users can request to limit or stop LinkedIn from processing their data for a wider range of purposes, offering a more comprehensive level of control over how their personal information is used.
Conclusion: What Should Users Do?
LinkedIn’s decision to quietly opt users into generative AI training highlights the ongoing tension between technological advancement and user privacy. While LinkedIn offers a way to opt out, the automatic enrolment raises important questions about how much control users truly have over their personal data in the digital world.
If you’re concerned about how your data is being used, review your LinkedIn privacy settings and consider opting out of the generative AI training feature. For even greater control over your personal data, submitting LinkedIn’s Data Processing Objection Form may be a worthwhile step.
Data privacy remains a key issue in today’s digital landscape, and staying informed is the best way to protect your rights and personal information online.