More than a dozen whistleblowers and insiders from Meta and TikTok have told the BBC that both companies made deliberate decisions to allow more harmful content into user feeds after internal research showed that outrage and inflammatory material drove higher engagement. The claims raise serious questions about how the world’s largest social media platforms have prioritised profit over user safety. The revelations, gathered for a BBC documentary titled Inside the Rage Machine, offer an unusually detailed look at the algorithmic arms race that followed TikTok’s explosive growth, and at the choices platform leadership made as they scrambled to compete for user attention.
A former Meta engineer described being instructed by senior management to permit more borderline harmful content, including misogynistic posts and conspiracy theories, in user feeds in order to compete with TikTok, on the rationale that the company’s stock price was under pressure. Senior researcher Matt Motyl, who worked at Meta between 2019 and 2023, told the BBC that Instagram Reels was launched in 2020 without adequate safety safeguards, sharing internal research showing that Reels posts had significantly higher rates of bullying, harassment, hate speech and violence than the main Instagram feed. Safety teams at the time were denied requests for additional specialist staff to handle child protection and election integrity, while 700 personnel were simultaneously allocated to growing Reels. Another former Meta engineer, referred to as Tim, said the decision to stop suppressing borderline but legally permissible content was made at senior vice-president level as the company sought to recover engagement and revenue lost to TikTok. Internal documents Motyl shared with the BBC further revealed that Meta’s own research had found its algorithm steering content creators down a path that prioritised profits over audience wellbeing, and acknowledged that the company’s financial incentives were not aligned with the platform’s stated mission.
On the TikTok side, a trust and safety employee referred to as Nick gave the BBC access to the platform’s internal complaint dashboards, which revealed instances where relatively trivial cases involving political figures were assigned higher review priority than serious harm cases involving minors, including that of a 16-year-old in Iraq who reported that explicit images purporting to be of her were being shared on the platform. Nick described a workplace culture in which raising safety concerns with management produced little response, and said staff had been told to keep handling cases in the order assigned rather than reprioritising based on the vulnerability of those affected. He attributed the pattern to the company’s desire to maintain strong relationships with political figures and governments in order to avoid regulatory action or bans, and his advice to parents whose children use the platform was direct: remove the app and keep children away from it for as long as possible. A separate machine-learning engineer who built TikTok’s recommendation engine between 2020 and 2024 described the algorithm as a largely opaque deep-learning system over which engineers had limited direct control, one that treats content as numerical identifiers rather than evaluating its nature or impact. Both Meta and TikTok denied the substance of the whistleblowers’ claims, with Meta stating it does not amplify harmful content for financial gain and TikTok describing the allegations as fabricated misrepresentations of how its moderation systems operate.