
The Algorithm Knew!

 

In June 2025, OpenAI's automated systems flagged an 18-year-old's ChatGPT account after detecting conversations describing gun violence. The account belonged to Jesse Van Rootselaar, a teenager in Tumbler Ridge, a remote town of 2,700 in the Canadian Rockies. OpenAI employees reviewed the conversations, assessed the threat level and debated alerting the Royal Canadian Mounted Police. They decided the activity did not meet their threshold for referral to law enforcement and banned the account instead. Seven months later, on February 10, 2026, Van Rootselaar killed her mother and 11-year-old half-brother at home, then went to Tumbler Ridge Secondary School and shot five students and a teaching assistant before killing herself.


Generally speaking, knowing something creates no automatic obligation to act on it, which is why you might know your neighbour drinks heavily and drives, yet feel no compulsion to report them. The obligation arises when belief turns into certainty about imminent harm. We do not police thoughts or possibilities; we police danger that becomes real and immediate, and most of our policing is retrospective, after the event.


OpenAI staff followed this reasoning, because violent fantasies are not violence and disturbing conversations are not plans. Someone discussing gun violence with a chatbot might be writing fiction, processing trauma or exploring thoughts they will never act on. Report every such conversation and you would flood the police with false alarms while criminalising imagination. Company employees drew a line: they decided there was no imminent and credible threat, and on that narrow point they were right. They upheld a company policy, one that many philosophers might approve of, but the outcome was eight corpses, albeit some seven months later.


When we assess threats we accept fallibility, because human knowledge is limited, uncertain and contextual: we cannot know what someone will do, only what they might do. AI systems work differently. Trained algorithms identify cues for violent activity, processing conversational data at a scale beyond human capacity, detecting linguistic patterns, identifying escalation markers and comparing behaviour against statistical models built from thousands of previous cases.


When the AI systems flagged Van Rootselaar's account they were not guessing in the way humans guess. They were applying algorithms, measuring threat indicators against thousands of data points, and reaching a confidence level that did not meet the threshold for certainty but was high enough to prompt employees to debate breaking user privacy. Somewhere between "normal user" and "call the police" sat a human decision about whether the AI was raising a legitimate concern. The error, even in this example, was a human error.

We need to be careful not to ignore the fact that machines function differently to us, because they aggregate, correlate and predict in ways unlike human intuition. When an algorithm assigns a 35% threat probability, that figure is mathematical and pattern-based: it is picking up patterns we have not noticed. While it is hard to be sure the algorithm spots risk better than we do, it can undeniably take more detailed data into consideration and compare against more examples of behaviour than any human, and in that respect it might reach a more reliable conclusion.
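To make that abstract process concrete, here is a minimal, purely illustrative sketch of how a threshold-based triage decision might be structured. Every feature name, weight and threshold below is invented for the example; nothing here describes OpenAI's actual systems.

```python
# Illustrative sketch only: a toy threat-scoring pipeline.
# All feature names, weights and thresholds are hypothetical,
# not a description of any real moderation system.

FEATURE_WEIGHTS = {
    "mentions_weapons": 0.30,        # explicit references to firearms
    "names_specific_targets": 0.40,  # people or places identified
    "escalating_tone": 0.20,         # sentiment trending more violent over time
    "planning_language": 0.10,       # dates, logistics, acquisition talk
}

REVIEW_THRESHOLD = 0.30   # flag for human review
REPORT_THRESHOLD = 0.75   # treat as an imminent, credible threat

def threat_score(features: dict[str, float]) -> float:
    """Combine per-feature scores (each 0.0-1.0) into one probability-like score."""
    return sum(FEATURE_WEIGHTS[name] * features.get(name, 0.0)
               for name in FEATURE_WEIGHTS)

def triage(features: dict[str, float]) -> str:
    score = threat_score(features)
    if score >= REPORT_THRESHOLD:
        return "escalate: possible law-enforcement referral"
    if score >= REVIEW_THRESHOLD:
        return "flag: human review, possible account ban"
    return "no action"

# A conversation can score well above the review line yet stay below the
# reporting line -- exactly the grey zone described above.
example = {"mentions_weapons": 0.9, "escalating_tone": 0.6, "planning_language": 0.2}
print(round(threat_score(example), 2), triage(example))  # 0.41 flag: human review, ...
```

The point of the sketch is not the arithmetic but where the judgement lives: the model produces a number, and the thresholds that turn that number into action are chosen by people.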

Handing decisions about reporting risk to machines means handing them authority over privacy itself. The probable outcome is deferring to a system that observes everything and understands nothing: one that reports behaviour without grasping motive.


Locke argued that legitimate authority derives from consent and must remain limited. Mill insisted that society has no business interfering with individual liberty unless a person's actions directly harm others. Neither anticipated a non-human agent working within guardrails set by big business, guardrails designed primarily to protect the company against liability.

Would mandatory reporting at lower thresholds have saved lives in Tumbler Ridge? Perhaps, but it would also have triggered investigations into thousands of ChatGPT users who never harmed anyone. The innocent-to-guilty ratio would be staggeringly high. Even if OpenAI had alerted the police in June 2025, the police would have been investigating a teenager who had broken no law. They could not arrest someone for conversations with a chatbot or confiscate legally owned firearms on algorithmic suspicion. They might have flagged her for monitoring, but that would not necessarily have prevented February's massacre; it would simply have documented that the authorities knew and could not act.


Prediction does not guarantee prevention, and if action were routinely taken we would never know whether prevention was actually occurring. It would set the scene for the state to overreach its control of society. We would need to decide whether that overreach was a proportionate price for a 'happier' and safer world.


We must ask whether powerful AI assistance is compatible with privacy at all. Machines that understand context, anticipate our needs and engage meaningfully even with our darker thoughts must know us intimately. That same intimacy makes them useful and also gives them the capacity to predict when we might become dangerous, and it is this very capability that is alarming. We routinely defer decisions to computer systems, but the consequences of deferring to an AI that claims to identify our emotional state are worrying.


We are choosing between societies. In one, AI companies monitor conversations and report concerning patterns at lower thresholds, and privacy erodes. Fewer tragedies like Tumbler Ridge occur, offset by thousands of lives disrupted by false positives. In the other, we maintain high thresholds for intervention, protecting privacy and expression; occasionally this fails catastrophically, people die in preventable tragedies, and we record it as the cost of liberty.

The machine is watching us because we built it to watch us. We must decide whether that knowledge triggers action or not. If we start allowing AI into our thoughts and actions, how do we stop ourselves going down 'a slippery slope'? Do we sacrifice privacy for safety, and could that sacrifice eventually create a world where big business and authoritarian governments watch, judge and administer justice based on what they know about us rather than what we do?



100-word summary for busy people:

OpenAI's systems flagged Jesse Van Rootselaar's violent conversations seven months before she killed eight people, including herself. Staff debated, assessed threat levels, then banned the account without alerting police. They followed a reasonable principle, that violent fantasies aren't violence and disturbing chat isn't a plan, but eight people died.

The problem is that lower reporting thresholds mean investigating thousands who'll never harm anyone, while higher thresholds mean missing the ones who will act. However, AI doesn't guess: it correlates thousands of data points, far more than any human could. Do we trust the AI or the human admins? We're choosing between societies: one where privacy erodes under algorithmic surveillance, another where we accept preventable tragedies as liberty's cost. This is an urgent and immediate problem, but who is equipped to solve it?

 
 
 

1 Comment


The brain can trip anytime. Do we guess or play the odds? AI has no emotion. It's clever, but is it actually smart?
