Scientist: We are entering the most dangerous phase of AI development
Jared Kaplan, chief scientist at the artificial intelligence company Anthropic, has issued a series of stark predictions about the future of humanity, warning that a pivotal decision about our fate is fast approaching.
According to him, the choice is still in our hands, but only if we decide not to hand over control to machines, Futurism reports.
Kaplan puts a date on that decision point. In an interview with the Guardian, he warned that humanity will have to decide by 2030 at the latest, and possibly as early as 2027, whether to take the “ultimate risk” and allow AI models to train themselves.
Such a move could trigger an “intelligence explosion” that would take technology to new levels and lead to the creation of artificial general intelligence (AGI) – a system that surpasses human intelligence. This could bring enormous benefits to humanity through scientific and medical advances. On the other hand, the same decision could allow the power of AI to spread unchecked, leaving us at the mercy of its whims.
"It sounds like a scary process. You don't know where you're going to end up," he said.
Kaplan is part of a growing number of prominent experts who are warning about the potentially catastrophic consequences of the development of AI.
Geoffrey Hinton, one of the "fathers of AI", has publicly expressed regret for his life's work and often warns that AI could destroy society.
OpenAI CEO Sam Altman predicts that AI will eliminate entire job categories, while Kaplan's boss, Anthropic CEO Dario Amodei, recently warned that AI could take over half of entry-level office jobs, accusing competitors of downplaying the true extent of the disruption AI will cause.
Kaplan seems to agree with his boss's assessment of the impact on jobs. In an interview, he said he expects AI to be able to do "most office jobs" within two to three years.
While he is optimistic that AI can be kept in line with human interests, his biggest concern is the possibility of allowing powerful AI systems to train other AI systems. It is, he says, an “extremely important decision” that we will have to make very soon.
“That’s what we think is probably the biggest decision, or the scariest thing to do… since no one is involved in the process, you don’t really know. One is: are you going to lose control of it? Do you even know what the AIs are doing?” he told the Guardian.
Larger AI models are already used to train smaller ones in a process known as distillation, but Kaplan is primarily concerned with so-called recursive self-improvement, in which an AI improves itself without any human intervention, potentially making huge leaps in capability.
Allowing such developments raises difficult philosophical questions: “The big question is: are AIs good for humanity? Are they useful? Will they be harmless? Will they understand people? Will they allow people to continue to have control over their lives and the world?”