LinkedIn Under Fire for Sharing Private Messages to Train AI

By: Noah Plafker

 

If you are entering the workforce today, having a LinkedIn account is almost a necessity.  This professional social media platform, launched in 2003, has become the go-to space for job seekers, employers, and professionals looking to expand their networks.  Over the past decade, LinkedIn has evolved into one of the most powerful tools for career advancement, offering job opportunities, industry insights, and professional connections.  With over one billion members worldwide, it remains the leading platform for networking and career development.  However, despite its widespread influence, LinkedIn has recently faced criticism from its own users.  Some members have accused the company of using their private data to train generative AI, raising concerns about privacy and data security.

A LinkedIn user recently filed a class action lawsuit in the United States District Court for the Northern District of California, claiming that LinkedIn unlawfully disclosed its Premium customers’ private messages to third parties to train AI models.  The suit alleges that in August of 2024, LinkedIn quietly added a new setting to its privacy menu that allowed users to enable or disable the sharing of their personal data for the purpose of training generative AI models.  The setting was enabled by default, automatically opting users into a program that allowed LinkedIn and its affiliates to train generative AI with their personal data.  Shortly after the release of the new setting, on September 18th, LinkedIn updated its privacy policy to reflect that the company may use customers’ data to help improve and train generative AI models.  By the time the policy was updated, however, the damage had been done: Premium members’ data had already been integrated into the AI training models.
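The heart of that allegation is the default itself.  As a purely hypothetical sketch (the class name and field below are invented for illustration and are not LinkedIn’s actual code), an opt-out setting that ships enabled means every account participates unless the user finds the toggle and turns it off:

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    # True by default: every account is opted in the moment the setting
    # ships, before the user has ever seen the toggle.
    share_data_for_ai_training: bool = True

# A user who never opens the settings menu simply gets the default.
user = PrivacySettings()
if user.share_data_for_ai_training:
    print("messages eligible for AI training")  # runs for everyone who takes no action
```

A default like this is the opposite of the “privacy by default” approach that regulations such as the GDPR require, because inaction is treated as consent.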

The plaintiff and the proposed class are all LinkedIn Premium members, meaning they pay for their subscriptions to the social networking platform and, in return, are supposed to receive heightened privacy protections.  According to the class action suit, LinkedIn Premium customers agree to an additional contract that free subscribers do not.  This contract, known as the LinkedIn Subscription Agreement, promises Premium customers enhanced privacy protections, including terms that specifically apply to the processing and disclosure of their personal information.  Given LinkedIn’s professional nature, Premium users fear that their private messages may contain sensitive information, including employment details, intellectual property, compensation, and other confidential matters.  Because Microsoft owns LinkedIn, the complaint raises concerns that private user data could be integrated into Microsoft’s broader AI product suite, “such as confidential job searches appearing in Word suggestions, business strategies in Teams chat completions, or salary discussions in Microsoft 365 features.”

AI is increasingly being integrated into industries worldwide, yet many people still do not fully understand what it means to train generative AI models.  At its core, AI “is both a set of selected algorithms and the data used to train those algorithms so that they can make the most accurate predictions.”  To improve, AI must continuously process enormous amounts of data, enabling it to learn and refine its capabilities.  However, in LinkedIn’s case, the platform allegedly used its own users’ data without their consent, raising serious privacy concerns.  This not only threatens the privacy rights of LinkedIn users but also highlights broader issues of data security across all social media platforms.
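To make the idea of “training on user data” concrete, consider the deliberately simplified, hypothetical sketch below (the messages are invented, and real generative models are vastly larger).  It builds a toy next-word predictor from two fake private messages; the point is that the text itself becomes part of the model’s parameters, which is why users worry that private details could resurface in AI-generated suggestions:

```python
from collections import Counter, defaultdict

# Hypothetical private messages -- invented purely for illustration.
messages = [
    "my current salary is 95k but I am interviewing elsewhere",
    "our Q3 strategy is to undercut the competitor on price",
]

# "Training": count word-to-next-word transitions across all messages.
# These counts ARE the model's parameters, so fragments of the private
# text are now embedded in the trained artifact.
transitions = defaultdict(Counter)
for msg in messages:
    words = msg.split()
    for prev, nxt in zip(words, words[1:]):
        transitions[prev][nxt] += 1

def predict_next(word: str) -> str | None:
    """Return the most likely next word learned from the training data."""
    counts = transitions.get(word)
    return counts.most_common(1)[0][0] if counts else None

# A completion surface (autocomplete, chat suggestions) can now leak
# pieces of the original messages:
print(predict_next("salary"))  # -> "is"
print(predict_next("is"))      # -> "95k" (ties broken by first-seen order)
```

Real generative models learn statistical patterns rather than raw transition counts, but the underlying concern is the same: whatever appears in the training data can shape, and sometimes leak through, the model’s output.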

As technology continues to advance at a rapid pace, the need for strong laws and policies to protect privacy has never been greater.  With the rise of AI, data privacy laws have evolved to address growing concerns over how personal information is collected and used.  One example is the European Union’s General Data Protection Regulation (“GDPR”), which sets forth several principles that companies must follow when handling people’s personal data.  Under its purpose-limitation and data-minimization principles, companies that want to use their users’ data must convey the specific purpose to those users and collect only the minimum amount of data required for that purpose.  The EU also regulates AI directly through the Artificial Intelligence Act, which imposes strict governance, risk management, and transparency requirements on AI systems.  While several U.S. states have enacted their own data privacy laws, there is still no comprehensive federal legislation governing AI and data privacy nationwide.

This highlights the growing challenges society faces with the rise of AI.  While AI is undoubtedly a powerful and beneficial tool, the lack of a comprehensive regulatory framework surrounding such advanced technology is deeply concerning.  AI has the potential to revolutionize industries, but it also poses significant risks to individuals’ fundamental right to privacy.  In recent weeks, President Trump has signed executive orders upending the former Biden administration’s consumer and national security safeguards on AI.  Among them, President Trump issued an executive order titled “Removing Barriers to American Leadership in Artificial Intelligence,” aimed at strengthening America’s global dominance in AI.  To further this goal, President Trump announced a new joint venture among Oracle, OpenAI, and SoftBank, known as Stargate, which plans to invest up to $500 billion in AI infrastructure development.

As the new administration and major corporations invest heavily in AI development, the critical question remains: will Americans be protected from privacy breaches like the one alleged in the LinkedIn case?  While President Trump’s push to position America as the global leader in AI is a bold move, it must be accompanied by strong regulatory frameworks to safeguard individual privacy.  Currently, the U.S. lacks federal legislation specifically addressing AI and data privacy.  If America genuinely wants to lead in AI innovation, lawmakers must take proactive steps to ensure that individuals are protected from data misuse on platforms like LinkedIn.  AI is a powerful tool in the modern technological era, but at the end of the day, even the most advanced tools are only as effective—and ethical—as the people who wield them.

 

Student Bio: Noah Plafker is a second-year law student at Suffolk University Law School.  He is a staff member for the Journal of High Technology Law and received a Bachelor of Arts degree in Political Science from the University of Colorado Boulder.

Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHTL or Suffolk University Law School.
