TikTok Faces Legal Action Over Job Cuts Amid Safety Concerns
TikTok is facing potential legal challenges after announcing significant job cuts to its UK safety team. The decision, which impacts over 400 employees, coincides with allegations that the company intimidated workers ahead of a union vote.
Job Redundancies and Legal Threats
In August, TikTok disclosed plans to eliminate more than 400 positions within its UK workforce, attributing the layoffs to a restructuring that leans more heavily on artificial intelligence and relocates some roles overseas. The move has sparked outrage, particularly as it came just days before employees were set to vote on forming a trade union.
Two TikTok moderators have initiated legal proceedings against the social media giant, alleging unlawful detriment and automatic unfair dismissal. Legal experts note that unlawful detriment occurs when an employer treats an employee adversely for exercising protected employment rights, such as pursuing collective bargaining or reporting company misconduct.
Allegations of Union-Busting
Stella Caram, legal head at Foxglove, a non-profit supporting the moderators, emphasized the troubling nature of TikTok’s timing. She stated, “In June, TikTok committed to hiring hundreds more content moderators, only to terminate them two months later. The only notable change seems to be the workers’ attempt to organise a trade union, suggesting an overtly unlawful attempt to undermine union efforts.”
Company Response and Safety Implications
In response to the mounting concerns, a TikTok spokesperson firmly rejected the allegations, characterising them as unsubstantiated. They reiterated that the layoffs are part of a broader global reorganisation focused on incorporating innovative technology to enhance user safety.
The safety of users has come into question, with former moderators expressing concern about TikTok’s reliance on technology in lieu of human oversight. Julio Miguel Franco, one of the moderators involved in the legal push, stressed, “When TikTok claims AI can ensure our safety, they are misleading the public. This approach jeopardises user safety and could result in a less secure environment as tasks are moved abroad to cut costs.”
Background
- Internal documents indicate that TikTok initially planned to maintain its human moderation team in London until at least 2025, highlighting the need for human oversight in moderating complex content.
- Past interviews with whistleblowers suggested that the moderation cuts would heighten risks for UK users.
- Concerns have also been voiced by MPs, including Dame Chi Onwurah, who warned about the implications for user safety amidst these cuts.
As the situation unfolds, TikTok has been given one month to respond to the moderators' legal claims. The outcome could have far-reaching implications not only for the company but also for workers' rights and user safety within the evolving landscape of social media governance.