Who is responsible for the invention?
- Simon Turpin
In 1867 Alfred Nobel invented dynamite for mining and construction, and the military applications followed almost immediately. He went on to develop ballistite, a smokeless propellant that spread across European armies, and built an armaments empire that spanned continents.
In 1888 his brother Ludvig died, and a French newspaper mistakenly ran Alfred's obituary instead. The headline read "The merchant of death." Alfred is known to have read his own obituary, though exactly what he made of it is not recorded. What he did do was spend his final years rewriting his will, redirecting most of his fortune into prizes for those who benefit humanity, including, ironically, a highly coveted Peace Prize.
The first detonation of a nuclear weapon took place at the Trinity test in July 1945. Robert Oppenheimer watched the explosion and later recalled that a line from the Bhagavad Gita (a central text in Hinduism) came into his mind: "Now I am become Death, the destroyer of worlds."
After that day Oppenheimer argued against escalation, opposed the hydrogen bomb and pressed for international controls. Eventually the state he'd served stripped him of his security clearance, and the weapons spread regardless.
Kalashnikov came at it from a different angle. He designed the AK-47 in 1947 as a defensive weapon for Soviet soldiers. It became the most produced firearm in history and the standard equipment of insurgencies, civil wars and massacres across the globe, a tool so reliable and so cheap that almost anyone with a grievance could get hold of one. Shortly before his death in 2013 he wrote to a Russian Orthodox priest, asking whether the deaths could be laid at his door and describing a pain in his soul that wouldn't settle. The priest absolved him, responding that responsibility belongs to the state.
What connects these men is that each of them reasoned carefully at the point of creation, believing that their work was for the common good. Nobel thought overwhelming destructive power would make war irrational and that deterrence would do what diplomacy couldn't. Oppenheimer believed a scientifically literate community could govern what it produced, and might have argued that the prospect of mutually assured destruction would make the world safer. Kalashnikov thought a defensive tool would stay defensive. None of them was offering an unreasonable defence for releasing his creation to the world, but each defence collapsed once the invention met the world.
Sam Altman says he is building artificial general intelligence, and not in the vague marketing sense of the phrase. He means systems that could reason, plan and act across domains at or beyond human level, not narrow tools confined to specific tasks. The direction of travel is not unique to one company. Anthropic and others in the field have openly acknowledged that newer models are beginning to cross thresholds that are not fully understood in advance. In some cases, systems have been withheld or released in stages because their behaviour raised safety concerns.
Altman has been unusually direct about this tension and has spoken about both the potential upside and the risks, without trying to resolve the contradiction. The rationale he gives is straightforward: if such systems are going to be built anyway, it is better that they are developed under conditions where safety is taken seriously and iteration is controlled, rather than left to whoever gets there first.
It is a familiar argument. Oppenheimer made the same case when he reasoned that Hitler would eventually produce the bomb, so it was better that he got there first. If it's going to happen, it is better that it happens under responsible supervision. The problem with this internally consistent reasoning is that consistency doesn't equal containment. Germany didn't destroy two cities in Japan; the responsible nation that crossed the finish line first did.
The harder question isn't about Altman's motives; he doesn't come across as reckless or indifferent. The question is whether responsibility at the point of creation is ever sufficient to govern what follows. The men who came before weren't careless: they weighed things up, considered consequences and decided the benefits outweighed the risks. They were often right about the upside. They were wrong, repeatedly and catastrophically, about their ability to control what happened next.
There's a philosophical tradition that draws a firm line between invention and use. The designer isn't responsible for every downstream outcome; the bridge builder isn't liable for every crash on the road. The analogy holds until the thing being built isn't a bridge. Artificial general intelligence doesn't operate within a system; it reshapes the system it enters. We are already using AI systems whose outputs governments, businesses and individuals act on, systems so advanced that no one can fully explain why they make the statements and recommendations they do. We are already sub-contracting decision-making to the machine.
Which brings us back to the question that none of these men fully answered, and that we haven't answered either. Who is actually responsible for the invention? Is it the person who builds it, the state that funds it (though commercial organisations now sit in front of the state), the system that deploys it, or the culture that demands it? Oppenheimer was consumed by the possibility that he was the bringer of death, at least to some extent taking personal responsibility. Kalashnikov was comforted by his priest's assurance that it was the responsibility of the state, and Nobel gave the world a Peace Prize to remember him differently. Sam Altman is still building the most powerful machine ever known to man, and the question isn't going away. The uncomfortable truth, the one nobody in this story has wanted to sit with for long, is that responsibility might belong to all of us, which in practice means it belongs to no one.
100-word summary
Nobel invented dynamite, Oppenheimer built the bomb, Kalashnikov designed the AK-47. Each believed their creation would serve a defensive or rational purpose. Each was wrong about what happened next. Now Sam Altman is building artificial general intelligence, using the same argument Oppenheimer used — if it's going to exist anyway, better it's built responsibly. History suggests that responsible creation and controllable outcomes are different things entirely. We already use AI systems whose reasoning nobody can fully explain. The question of who bears responsibility — inventor, state, market, or culture — has never been answered. Which probably means we all do. Which means nobody does.


