The research investigates how Large Language Models (LLMs) internalize new knowledge during continual pre-training through a framework called Knowledge Circuits Evolution, which identifies the computational subgraphs (knowledge circuits) responsible for storing and processing that knowledge. Key findings are that acquisition depends on how relevant the new knowledge is to the model's existing knowledge, that circuit evolution exhibits a distinct phase shift, and that circuits evolve in a deep-to-shallow pattern; these insights could inform continual pre-training strategies for LLMs.
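To make the circuit idea concrete, below is a minimal sketch, not the paper's actual method: it scores each edge of a toy computation graph by how much ablating that edge degrades a knowledge-recall metric, then keeps the highest-impact edges as the candidate circuit. The toy graph, the `recall_score` stub, and the `keep_fraction` threshold are all hypothetical stand-ins for a real model evaluation.

```python
def recall_score(active_edges: frozenset) -> float:
    """Toy stand-in for a knowledge-recall metric (e.g. probability of the
    target token). A real setup would run the model with the missing edges
    ablated; here, only edges on a hard-coded 'true' circuit carry signal."""
    true_circuit = {("embed", "mlp.2"), ("mlp.2", "attn.5"), ("attn.5", "logits")}
    return sum(0.3 for e in true_circuit if e in active_edges)

def find_circuit(edges, keep_fraction=0.5):
    """Rank edges by the recall drop caused by ablating each one alone,
    then return the top fraction as the candidate knowledge circuit."""
    full = frozenset(edges)
    baseline = recall_score(full)
    impact = {e: baseline - recall_score(full - {e}) for e in edges}
    ranked = sorted(edges, key=lambda e: impact[e], reverse=True)
    k = max(1, int(len(edges) * keep_fraction))
    return ranked[:k]

edges = [
    ("embed", "mlp.2"), ("embed", "attn.1"), ("mlp.2", "attn.5"),
    ("attn.1", "attn.5"), ("attn.5", "logits"), ("mlp.2", "logits"),
]
print(find_circuit(edges))  # edges whose ablation hurts recall the most
```

Tracking how such a circuit's membership shifts across pre-training checkpoints is what lets one observe evolution patterns like the deep-to-shallow trend described above.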
Tags: knowledge-circuits, continual-pre-training, large-language-models, circuit-evaluation, data-science