Wedoany.com Report, Nov. 11: Google Research has introduced Nested Learning, a machine learning technique designed to tackle catastrophic forgetting in continual learning systems.
Google is encouraging the broader machine learning community to further explore this approach.
The approach appears in the paper “Nested Learning: The Illusion of Deep Learning Architectures,” presented at NeurIPS 2025. It models a neural network as a set of interlinked, nested optimisation problems, each with its own context flow and update frequency.
Catastrophic forgetting occurs when training a large language model on new tasks erases previously learned knowledge. Traditional methods treat a model's architecture and its optimisation algorithm as separate concerns.
Nested Learning unifies the two as levels of a single nested optimisation framework, Google explains. Learning is distributed across modules, each controlling its own context flow and update rate.
To demonstrate the idea, the researchers built Hope, a self-modifying recurrent model that combines Titans memory units with a continuum memory system (CMS). The CMS lets different memory blocks update at different frequencies, loosely mimicking the multi-timescale plasticity of the human brain.
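The multi-frequency idea can be illustrated with a minimal sketch (our own toy code, not Google's implementation): each memory block is assigned an update period, and its parameters change only on steps that are multiples of that period, so fast blocks absorb recent input while slow blocks drift gradually.

```python
import numpy as np

class MemoryBlock:
    """Toy memory block: a linear map trained by plain gradient descent."""

    def __init__(self, dim, update_period, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(dim, dim))
        self.update_period = update_period  # how often this block's weights change
        self.lr = lr
        self._grad = np.zeros_like(self.W)

    def read(self, x):
        return self.W @ x

    def accumulate(self, x, target):
        # Gradient of 0.5 * ||W @ x - target||^2 with respect to W.
        self._grad += np.outer(self.W @ x - target, x)

    def maybe_update(self, step):
        # Fast blocks update every step; slow blocks apply an averaged update
        # only rarely. The spread of periods is the toy analogue of a
        # continuum of update frequencies.
        if step % self.update_period == 0:
            self.W -= self.lr * self._grad / self.update_period
            self._grad[:] = 0.0


# Three timescales: a fast, a medium, and a slow memory level.
blocks = [MemoryBlock(dim=8, update_period=p, seed=i)
          for i, p in enumerate((1, 16, 256))]

rng = np.random.default_rng(42)
for step in range(1, 1025):
    x = rng.normal(size=8)
    target = np.roll(x, 1)            # arbitrary toy mapping to memorise
    for block in blocks:
        block.accumulate(x, target)
        block.maybe_update(step)

# On a fresh sample, the fast block has learned the toy mapping far better
# than the slow block, whose weights have barely moved.
x = rng.normal(size=8)
for block in blocks:
    err = np.linalg.norm(block.read(x) - np.roll(x, 1))
    print(f"update period {block.update_period:3d}: read error {err:.3f}")
```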
Hope achieved lower perplexity and higher accuracy than standard transformers and recurrent networks on public language-modelling and reasoning benchmarks, Google stated.
Nested Learning treats optimisers and core architectural components alike as associative memory systems, that is, as learned mappings between quantities such as inputs and their errors, or tokens and their context.
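As a rough illustration of that framing (a hedged toy example, not the paper's formulation), a single weight matrix trained online by gradient descent already behaves as an associative memory: each write compresses a key-value pair into the weights, and a read is just a matrix-vector product.

```python
import numpy as np

def write(W, key, value, lr=0.3):
    """One online gradient step on 0.5 * ||W @ key - value||^2.

    The matrix W acts as an associative memory: writing stores the
    (key, value) pair, and reading it back is simply W @ key.
    """
    error = W @ key - value
    return W - lr * np.outer(error, key)

dim = 16
rng = np.random.default_rng(0)

# A handful of key -> value associations with unit-norm keys.
keys = rng.normal(size=(5, dim))
keys /= np.linalg.norm(keys, axis=1, keepdims=True)
values = rng.normal(size=(5, dim))

W = np.zeros((dim, dim))
for _ in range(200):                  # repeated writes refine the stored mapping
    for k, v in zip(keys, values):
        W = write(W, k, v)

# Reading a stored key should approximately recover its value.
print(np.linalg.norm(W @ keys[0] - values[0]))   # small residual
```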
Because Hope's memory elements refresh at different rates within the nested optimisation, they form a continuum memory system that can modify itself and absorb new data while retaining old knowledge.
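What "nested optimisation" means can be shown with a deliberately simple sketch (again our own illustration, not Hope's mechanism): an inner level fits a parameter by gradient descent, while an outer level treats the inner learning rate itself as a learnable quantity and updates it with a one-step hypergradient.

```python
# Toy nested optimisation: the inner level fits w to a target by gradient
# descent; the outer level treats the learning rate eta as a parameter and
# updates it with the gradient of the post-update loss (a one-step hypergradient).
target = 3.0
w, eta = 0.0, 0.01
outer_lr = 0.05

for step in range(100):
    grad_w = w - target                    # d/dw of 0.5 * (w - target)^2
    w_next = w - eta * grad_w              # inner update (fast level)

    # d/d_eta of 0.5 * (w_next - target)^2, using w_next = w - eta * grad_w.
    grad_eta = -(w_next - target) * grad_w
    eta = max(eta - outer_lr * grad_eta, 0.0)   # outer update (slow level), kept non-negative

    w = w_next

print(round(w, 3), round(eta, 3))   # w approaches the target; eta adapts itself
```

In Hope the levels are memory modules rather than a scalar hyperparameter, but the structural idea is the same: each level is itself an optimisation problem acting on another level's parameters.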
In related news, the US Department of Justice has concluded its antitrust review of Google's $32 billion purchase of Wiz, clearing a key regulatory hurdle for the Alphabet unit's push into cloud security.
The Federal Trade Commission noted on its site that early termination of the DOJ review occurred on 24 October 2025.