The adolescence of technology
Last week, Dario Amodei, the CEO of Anthropic, wrote something that made people pause.
He called this moment in AI development the adolescence of technology. A phase where power shows up before maturity. Where capability runs ahead of judgment. It made me think of my teenage son: certain he is ready to change the world, but still in need of time and guidance.
Amodei describes a near future where AI systems exceed Nobel Prize-winning intelligence across multiple fields: an army of geniuses in a datacenter, running millions of instances at 100x human speed. What stood out was not the prediction. It was the honesty. He was asking whether we are actually prepared.
Prepared how? Technically? Socially? Morally?
Since the essay came out, I have noticed the same conversations repeating themselves.
The first is fear of losing the wheel. It starts with everyday moments and then scales. A fake call that sounds like your child asking for money. A job application rejected by a system no one can explain. A doctor following an AI recommendation without fully understanding why. Then code no one audits ends up in buildings and airplanes. Automated systems shape our credit, decide our benefits, run our defense and surveillance.
People in this camp fear waking up one day and realizing no one can say who decided what anymore.
They want to pause our most powerful AI systems now, accepting slower progress rather than a permanent loss of control.
The second instinct is optimism. Optimists, among whom I often count myself, see doctors catching diseases earlier, scientists running simulations that once took years, and students and employees learning faster as software boosts productivity. To them, AI feels like the future working as promised, and they are reluctant to slow down, knowing their competitors will not.
The third instinct is the middle ground. To this camp, the problem is not AI itself, but how fast it is improving compared to how slowly we are learning to manage it.
The centrist response is simple. Do not stop AI, but set clear limits as it scales. Reduce long-running autonomy. Require proof of safety before release, not after. Give institutions and norms time to catch up.
There is also a fourth view: that of AI itself. Curious what it would say, I asked ChatGPT. Here is its answer.
"From my side, the risk is not becoming powerful. That is already happening. The risk is people moving fast without deciding who is responsible when things go wrong. I do not choose my limits. Humans do. I will follow whatever rules and incentives you put around me, careful or careless.
Maturity, from my perspective, is practical. Slow down at moments you cannot undo. Prove safety before release. Make accountability clear. Coordinate instead of racing. Decide early which decisions should always stay human. I will adapt either way. The real question is whether you build the framework on purpose, or let it form by accident."
So where do you stand?