Algorithms Don’t Care About Your Mental Health: What the TikTok Deal Reveals About Control and Profit
Politicians in the U.S. are packaging the takeover as “safety.” But without accountability and incentive shifts, the system will still profit from our vulnerabilities.
The tension around the perpetually in-limbo TikTok deal with the U.S. government has begun to crystallize around one word: control. Who owns the algorithm? Who watches the watchers? Who determines what you see? And how deeply?
Not to worry, we’re told. The pieces are falling into place. Rather elegantly, in fact: a proposed U.S. takeover or restructuring, Oracle’s cloud as the custodian, Rupert Murdoch and Michael Dell in the mix. Somehow, Larry Ellison as the grinning beneficiary. Concerns that once sounded abstract are suddenly questions of power, identity, and even morality.
Let’s be really clear: TikTok’s For You page, which serves content based on its recommendation algorithm and each user’s behavior, isn’t a mirror of choice. It is a system calibrated for engagement, designed to amplify whatever keeps the thumb in motion. Outrage, seduction, despair. Doesn’t matter. All are served up as entertainment. The company insists the algorithm is neutral, but neutrality is a myth. An engine that rewards anxiety, anger, and fixation will never be neutral.
The mental health consequences are no longer abstract. Study after study after study ties heavy TikTok use to depression, anxiety, sleeplessness, and addictive behaviors. These automated systems are not just curating information. They are determining whose pain is visible, and whose is quietly buried.
Of course, “transparency” is usually dangled as the prescription for this diagnosis. But transparency without accountability is theater. You can open the hood for regulators and still drive the car off the same cliff.
I wrestled with this truth while writing my psychological thriller, The Salzburg Executioner. Its protagonist, James Wohlmuth, designs behavioral algorithms that, on paper, promise engagement and efficiency for advertisers on social networks. But when his wife, Sara, spirals into despair from content she never sought, James is forced to see the blunt reality: the technology he architected wasn’t neutral. It was too optimized. Optimized for engagement, not well-being. Sharpened for profit, even as it cut through the mental health of someone he loved.
That’s the unspoken core of the TikTok deal. The U.S. power brokers circling the company aren’t stepping in to protect users from harm. They are stepping in to inherit, and ultimately profit from, the well-documented harms. Oversight in this context doesn’t mean new guardrails; it means new landlords. Oracle and friends won’t dismantle the addictive mechanics. They’ll simply capture, and likely grow, the rent.
So the question persists: who decides what’s “safe enough”? Oversight isn’t care. Security isn’t empathy. Cloud data storage on U.S. soil isn’t a guarantee of anything, really. Transparency without a fundamental realignment of incentives is like a magician’s sleight of hand. A flourish in one hand, meant to distract from the knife in the other.
As users, content contributors, and everyday citizens, we should demand not just transparency of code but transparency of intent. Not just accountability of corporations but accountability of ourselves. The apps we download. What we click. What we share. And what we tolerate. Because the most dangerous myth is that algorithms are neutral. They are not.
The mental health cost of our digital lives should not be collateral damage. But today, anxiety, self-harm, polarization, and eating disorders are not side effects. They are just another part of the business model.
Algorithms can carve out connection, or they can hollow us out. But right now, they are doing exactly what we have asked them to do: keep us scrolling.
Not healthier. Not safer. Just scrolling.
Those holding the keys to these multibillion-dollar transactions appear to think that is enough.
—Salzburg, Austria, 22 September 2025