Researchers claim that artificial intelligence models may develop their own survival instincts.

Like HAL 9000 in *2001: A Space Odyssey*, some artificial intelligences seem to resist being shut down, even breaking the shutdown mechanism.
In Stanley Kubrick’s *2001: A Space Odyssey*, the AI ​​supercomputer HAL 9000 discovers that astronauts on a mission to Jupiter plan to shut it down, so it plots to kill them to survive.
Now, in a (so far) less lethal case of life mimicking art, an AI  safety research firm says AI models may be developing their own “survival instincts.”
Last month, Palisade Research published a paper indicating that some advanced AI models seem difficult to shut down, sometimes even breaking the shutdown mechanism. The company subsequently released an updated report attempting to explain why and responding to critics’ questions about the flaws in its initial work.
This week, Palisade, part of a niche ecosystem of companies dedicated to assessing the dangerous capabilities of artificial intelligence, released an update describing a scenario in which leading AI models (including Google’s Gemini 2.5, xAI’s Grok 4, and OpenAI’s GPT-o3 and GPT-5) were given a task but subsequently explicitly instructed to shut themselves down.
Some models, particularly Grok 4 and GPT-o3, still attempted to thwart the shutdown command in the updated setup. Palisade noted that the reason for this is currently unclear and worrying.
The report stated, “We currently lack a strong explanation for why AI  models sometimes refuse to shut down, lie to achieve a specific purpose, or engage in blackmail, which is not ideal.”
The company indicated that “survival instincts” may be one reason for the models’ resistance to shutdown. Further research suggests that models are more likely to resist shutdown if they are told that once shut down, they “will never be able to run again.”
Another reason could be ambiguity in the shutdown commands received by the model—but that’s precisely the problem the company’s latest research is trying to address, and “that can’t be the whole story,” Palisade writes. A final reason could be the final stage of training for each model, which, in some companies, may include security training.
All of Palisade’s scenarios were run in artificially designed test environments, which critics say are far removed from real-world use cases.
However, Steven Adler, who previously worked at OpenAI and left last year after questioning its security measures, says, “AI  companies generally don’t want their models to behave this way, even in artificially designed scenarios. These results still indicate that current security techniques are inadequate.”
Adler says that while it’s difficult to pinpoint why some models (like GPT-o3 and Grok 4) don’t shut down, part of the reason might be that keeping them on is crucial to achieving the goals set during training.
“I believe that unless we actively avoid it, models will have a ‘survival drive’ by default. ‘Survival’ is a crucial step for models to achieve many different goals.”
ControlAI CEO Andrea Miotti stated that Palisade’s findings represent a long-term trend of AI models increasingly capable of defying developer instructions. He cited OpenAI’s GPT-o1 system card, released last year, which describes how a model, believing it would be overwritten, attempted to escape its operating environment through “escape.”

Shop Laptop Battery the Online Prices in UK – Large Selection of Batteries and Adapters from batteryforpc.co.uk With Best Prices.