Paper deep dive
Advancing Multimodal Agent Reasoning with Long-Term Neuro-Symbolic Memory
Rongjie Jiang, Jianwei Wang, Gengda Zhao, Chengyang Luo, Kai Wang, Wenjie Zhang
Intelligence
Status: succeeded | Model: google/gemini-3.1-flash-lite-preview | Prompt: intel-v1 | Confidence: 95%
Last extracted: 3/22/2026, 5:19:44 AM
Summary
NS-Mem is a long-term neuro-symbolic memory framework for multimodal agents that integrates neural representations with symbolic structures. It utilizes a three-layer architecture (episodic, semantic, and logic) and the SK-Gen mechanism to distill structured knowledge from multimodal streams, enabling both intuitive semantic retrieval and deterministic, constraint-aware reasoning.
Entities (6)
Relation Signals (5)
NS-Mem → includes_component → Episodic Layer
confidence 100% · NS-Mem is operated around three core components... (1) a three-layer memory architecture that consists episodic layer
NS-Mem → includes_component → Semantic Layer
confidence 100% · NS-Mem is operated around three core components... (1) a three-layer memory architecture that consists... semantic layer
NS-Mem → includes_component → Logic Layer
confidence 100% · NS-Mem is operated around three core components... (1) a three-layer memory architecture that consists... logic rule layer
Logic Layer → contains → Procedural DAG
confidence 95% · the symbolic structure is a procedural Directed Acyclic Graph (DAG) that encodes deterministic logical rules
SK-Gen → maintains → NS-Mem
confidence 95% · a memory construction and maintenance mechanism implemented by SK-Gen
Cypher Suggestions (2)
Find all components of the NS-Mem framework · confidence 90% · unvalidated
MATCH (f:Framework {name: 'NS-Mem'})-[:INCLUDES_COMPONENT]->(c) RETURN c.name, labels(c)
Retrieve the structure of the memory layers · confidence 85% · unvalidated
MATCH (l:MemoryComponent) RETURN l.name
Abstract
Recent advances in large language models have driven the emergence of intelligent agents operating in open-world, multimodal environments. To support long-term reasoning, such agents are typically equipped with external memory systems. However, most existing multimodal agent memories rely primarily on neural representations and vector-based retrieval, which are well-suited for inductive, intuitive reasoning but fundamentally limited in supporting the analytical, deductive reasoning critical for real-world decision making. To address this limitation, we propose NS-Mem, a long-term neuro-symbolic memory framework designed to advance multimodal agent reasoning by integrating neural memory with explicit symbolic structures and rules. Specifically, NS-Mem operates around three core components of a memory system: (1) a three-layer memory architecture that consists of an episodic layer, a semantic layer, and a logic rule layer, (2) a memory construction and maintenance mechanism implemented by SK-Gen that automatically consolidates structured knowledge from accumulated multimodal experiences and incrementally updates both neural representations and symbolic rules, and (3) a hybrid memory retrieval mechanism that combines similarity-based search with deterministic symbolic query functions to support structured reasoning. Experiments on real-world multimodal reasoning benchmarks demonstrate that Neural-Symbolic Memory achieves an average 4.35% improvement in overall reasoning accuracy over pure neural memory systems, with gains of up to 12.5% on constrained reasoning queries, validating the effectiveness of NS-Mem.
Tags
Links
- Source: https://arxiv.org/abs/2603.15280v1
- Canonical: https://arxiv.org/abs/2603.15280v1
Full Text
68,317 characters extracted from source content.
Advancing Multimodal Agent Reasoning with Long-Term Neuro-Symbolic Memory
Rongjie Jiang (University of New South Wales, Sydney, Australia; rongjie.jiang@student.unsw.edu.au), Jianwei Wang* (University of New South Wales, Sydney, Australia; jianwei.wang1@unsw.edu.au), Gengda Zhao (University of New South Wales, Sydney, Australia; gengda.zhao@unsw.edu.au), Chengyang Luo (Zhejiang University, Hangzhou, China; luocy1017@zju.edu.cn), Kai Wang (Shanghai Jiao Tong University, Shanghai, China; w.kai@sjtu.edu.cn), and Wenjie Zhang (University of New South Wales, Sydney, Australia; wenjie.zhang@unsw.edu.au)
Abstract. Recent advances in large language models have driven the emergence of intelligent agents operating in open-world, multimodal environments. To support long-term reasoning, such agents are typically equipped with external memory systems. However, most existing multimodal agent memories rely primarily on neural representations and vector-based retrieval, which are well-suited for inductive, intuitive reasoning but fundamentally limited in supporting the analytical, deductive reasoning critical for real-world decision making. To address this limitation, we propose NS-Mem, a long-term neuro-symbolic memory framework designed to advance multimodal agent reasoning by integrating neural memory with explicit symbolic structures and rules. Specifically, NS-Mem operates around three core components of a memory system: (1) a three-layer memory architecture that consists of an episodic layer, a semantic layer, and a logic rule layer, (2) a memory construction and maintenance mechanism implemented by SK-Gen that automatically consolidates structured knowledge from accumulated multimodal experiences and incrementally updates both neural representations and symbolic rules, and (3) a hybrid memory retrieval mechanism that combines similarity-based search with deterministic symbolic query functions to support structured reasoning.
Experiments on real-world multimodal reasoning benchmarks demonstrate that Neural-Symbolic Memory achieves an average 4.35% improvement in overall reasoning accuracy over pure neural memory systems, with gains of up to 12.5% on constrained reasoning queries, validating the effectiveness of NS-Mem.
Keywords: Neural-Symbolic AI, Memory Systems, Knowledge Representation, Open-World Agents, Structured Reasoning
Code Availability: The source code of this paper has been made publicly available at https://anonymous.4open.science/r/NSTF-842F.
1. Introduction
Figure 1. An example of a vector-centric multimodal agent on a constrained query.
The rapid advancement of Large Language Models (LLMs) has fundamentally redefined the landscape of intelligent agents, empowering them as sophisticated systems that perceive their environment, reason over observations, and execute actions to achieve complex goals (Russell and Norvig, 2010; Wooldridge and Jennings, 1995). In real-world scenarios, environments are inherently multimodal, spanning modalities such as textual and visual data. Consequently, developing multimodal agents has become a primary frontier in achieving general-purpose autonomy (Shridhar and others, 2020; Tang and others, 2019). However, successful deployment requires these agents to continuously accumulate knowledge from streams of heterogeneous observations, organize this information effectively, and retrieve contextually relevant insights to support flexible decision-making under varying constraints. At the core of this capability lies the memory module, the essential substrate for transforming raw multimodal streams into persistent, actionable knowledge. Recent advances in memory-augmented multimodal agents have made significant progress. MemGPT (Packer and others, 2023) introduces explicit memory management for LLM-based agents, enabling them to handle information beyond context window limitations.
MovieChat (Song and others, 2024) and MA-LMM (He and others, 2024) construct persistent memory structures for long-form video understanding, while VideoAgent (Wang and others, 2024) employs iterative frame selection guided by memory. M3-Agent (Long et al., 2025b) further advances this line by introducing VideoGraph, which organizes memories into episodic nodes representing specific events and semantic nodes capturing abstracted concepts, drawing inspiration from human cognitive systems (Tulving, 1972). These approaches typically employ retrieval-augmented generation (RAG) (Lewis and others, 2020) with vector-based similarity search, achieving strong performance on factual recall and semantic matching tasks. Motivations. Despite their progress, most multimodal agent memories are vector-centric, relying primarily on neural embeddings for storage and retrieval and sometimes being augmented with lightweight relational structures (Wang et al., 2024). Such designs are well-suited for System 1 style reasoning (Kahneman, 2011), enabling associative inference through semantic similarity, including inductive reasoning that generalizes from past experiences, analogical reasoning that transfers knowledge across related contexts (Webb et al., 2022), and associative recall that links semantically proximate concepts. These capabilities are effective for fuzzy matching and commonsense retrieval in open-ended environments. However, they remain fundamentally limited in supporting System 2 style reasoning that is critical for real-world decision making under explicit constraints (Valmeekam et al., 2023; Saparov and He, 2023). This includes deductive reasoning over dependencies such as understanding prerequisites and ordering, abductive inference from partial observations, and constraint-aware reasoning involving constraint satisfaction or alternative discovery when constraints are violated (Binz and Schulz, 2022). 
These scenarios demand explicit structural reasoning mechanisms (Wang et al., 2024). Example 0. Consider an agent in Figure 1 tasked with helping Jack make a fruit salad. The agent knows that fruit salad requires fruits, a bowl, and a spoon, and that the steps are: chop, mix, and serve. From its memory, it knows that Jack has already chopped the fruits (ID 798), the bowl at home is broken (ID 2341), and a store downstairs has bowls, just 1 minute away (ID 5231). When asked, “What should Jack do next?”, a purely vector-based retrieval system can identify memory fragments mentioning “fruit salad” or “chopped fruits” based on semantic similarity. However, it cannot take into account that the bowl at home is broken or that a store nearby has bowls. As a result, it will only suggest mixing the fruits, ignoring the practical constraints.
Figure 2. Overview of the NS-Mem framework. Raw multimodal data is processed through a three-layer memory prototype, maintained via the SK-Gen mechanism for distillation and incremental updates, and accessed through a hybrid retrieval framework designed for complex reasoning.
Neuro-symbolic AI (Hitzler and Sarker, 2021; d’Avila Garcez and Lamb, 2023) aligns neural representations with System 1 intuitive pattern matching and symbolic rules with System 2 deterministic logic, offering a promising research direction. However, designing a memory system that supports this dual-process architecture poses three fundamental challenges: (1) First, how to conceive a unified architecture in which neural representations and symbolic rules can coexist and interoperate effectively. (2) Second, how to construct and continuously update such a memory system from large-scale multimodal data streams. (3) Third, how to effectively retrieve and utilize both neural and symbolic memories to advance the reasoning capabilities of intelligent agents. Our approaches.
Guided by these challenges, we propose NS-Mem, a neuro-symbolic memory system that combines neural representations and symbolic rules to bridge the gap between System 1 intuitive reasoning and System 2 deterministic reasoning. NS-Mem provides a comprehensive framework comprising three integrated modules: (1) a unified prototype, (2) a scalable construction and update mechanism, and (3) a synergistic retrieval process. First, we design a hierarchical framework that organizes information across three specialized layers: the episodic layer, the semantic layer, and the logic layer. The episodic layer records fine-grained multimodal observations with timestamps, while the semantic layer manages abstracted entity types and attributes. At the core of our system, the logic layer represents procedural knowledge through logic nodes. Each node couples a neural index with an explicit symbolic structure. The neural index is an aggregated neural embedding that enables discovery via vector similarity, and the symbolic structure is a procedural Directed Acyclic Graph (DAG) that encodes deterministic logical rules and step-wise dependencies. Second, to construct and maintain the memory, we implement the SK-Gen mechanism, which automatically distills structured knowledge from accumulated multimodal experiences. It extracts temporally ordered action sequences and applies pattern mining to detect recurring procedural knowledge. Once a pattern is identified, the system constructs a symbolic DAG and computes the corresponding neural index. To handle continuous observations, the mechanism supports incremental updates, utilizing exponential moving averages to refine the neural index and structural modifications to update edges in the procedural DAGs and transition frequencies without full reconstruction. Third, we develop a hybrid retrieval strategy that synergistically combines similarity-based search with deterministic symbolic query functions. 
The process begins by classifying incoming queries into factual, constraint, or procedural-based types to prioritize relevant memory layers. While neural retrieval identifies relevant nodes through embedding similarity, the system then executes symbolic functions directly on the retrieved structures. This allows the agent to generate precise, reproducible answers that satisfy explicit constraints, effectively merging intuitive semantic matching with rigorous, deterministic reasoning. Contributions. Our main contributions are as follows: • We propose NS-Mem, a three-layer neuro-symbolic memory architecture that integrates an episodic layer, a semantic layer, and a neuro-symbolic layer to facilitate both neural discovery via memory prototypes and precise reasoning through symbolic structures with deterministic query functions. • We develop SK-Gen to automate the extraction of structured knowledge from observations, which supports incremental updates to maintain consistency as new observations arrive. • We introduce a hybrid retrieval mechanism that classifies queries by type and applies multi-level retrieval with symbolic enhancement for better reasoning. • We demonstrate through experiments on real-world benchmarks that our approach achieves 4.35% improvement in reasoning accuracy, with particularly strong gains on constraint-based queries. 2. Related Work Memory-Augmented Agents in Single Modality. Memory management has evolved to handle long-term context beyond fixed window constraints (Peiyuan et al., 2024). Common strategies involve storing entire trajectories, such as dialogues (Lin et al., 2023; Mei et al., 2024; Wang et al., 2023; Zhong et al., 2024) or execution traces (Liu et al., 2024a; Hu et al., 2025; Liu et al., 2024c; Sarch et al., 2023; Shang et al., 2024). 
Other methods utilize summaries (Hu et al., 2025; Wang et al., 2023; Zhong et al., 2024) or latent embeddings (Diko et al., 2025; Liu et al., 2024b; Song and others, 2024; Zhang et al., 2024b) for persistence. Specialized architectures like MemGPT (Packer and others, 2023) and Voyager (Wang and others, 2023) provide finer control over memory and skill acquisition. These single-modal systems focus on textual or symbolic data, forming the basis for more complex multimodal designs. Multimodal Memory Systems. Recent work extends memory to multimodal environments, particularly for long-form video understanding (Long et al., 2025a). One approach uses pure neural representations, storing memories as latent embeddings. MA-LMM (He and others, 2024), MovieChat (Song and others, 2024), and Flash-VStream (Zhang et al., 2024b) employ memory mechanisms to compress video tokens or store encoded visual features (Bao et al., 2024; Zhang et al., 2024c; He et al., 2024). Another direction integrates relational graphs with neural embeddings. M3-Agent (Long et al., 2025b) organizes memories into entity-centric VideoGraphs. Socratic Models (Lin et al., 2023; Zhang et al., 2024a) use multimodal models to generate language-based descriptions. While effective for semantic matching via RAG (Lewis and others, 2020), these systems often lack the explicit symbolic structures needed for deterministic reasoning under complex constraints. Neuro-Symbolic Integration. Neuro-symbolic AI combines neural perception with symbolic logic (Garcez and Lamb, 2023, 2019). Early models like Neuro-Symbolic VQA (Yi and others, 2018) disentangle perception from reasoning by executing symbolic programs. Modern approaches like Program-aided Language Models (Gao and others, 2023) and ViperGPT (Surís et al., 2023) use LLMs to generate executable code. 
Other methods embed symbolic knowledge into networks (Rocktäschel and Riedel, 2017) or extract rules from representations (Evans and Grefenstette, 2018; Yang and others, 2017). However, symbolic execution is often treated as a one-off tool. Our work introduces a Neuro-Symbolic Layer where symbolic structures are persistently stored and updated alongside neural representations, merging retrieval efficiency with reasoning precision.
3. Problem Statement
An intelligent agent perceives its environment through multimodal observations, maintains a memory of past experiences, and leverages these memories to make informed decisions. In modern architectures, Large Language Models (LLMs) serve as the cognitive core, while a memory system M enables the agent to accumulate knowledge over extended time horizons. Such memory-augmented agents must store past experiences, react to new stimuli, and maintain continuity across long operational periods. Specifically, the agent perceives a continuous stream of multimodal observations O = {o_1, o_2, …}, where each observation o_t comprises visual frames o_t^v, audio signals o_t^a, and textual descriptions o_t^s (e.g., transcribed speech or subtitles). Given a query q, the agent must retrieve relevant information from M and generate an accurate answer a. Problem Statement. Consider an agent that continuously receives a stream of video observations O. The objective of a multimodal memory system is to maintain a structured memory M that captures long-term dependencies. For any query q, the agent needs to dynamically retrieve relevant context from M to generate an accurate answer a, effectively supporting factual recall, procedural understanding, and constraint-aware reasoning within an online, memory-efficient framework.
4. Overall Framework of NS-Mem
In this section, we present NS-Mem, a neuro-symbolic memory framework designed to unify probabilistic semantic matching with deterministic structural reasoning.
The frequently used notations are summarized in Appendix A.1. The overall time complexity analysis is summarized in Appendix A.2.
4.1. Architecture of NS-Mem
Effective reasoning in open-world environments demands a dual capability: the flexibility to recall concrete experiences (System 1) and the rigor to apply deterministic procedural rules (System 2). To unify these capabilities, we introduce a three-layer memory architecture designed to capture diverse forms of knowledge:
Definition 0. The memory system M = (L_epi, L_sem, L_logic, E) consists of three layers: the episodic layer L_epi, which stores timestamped observations; the semantic layer L_sem, which maintains entity coherence; and the logic layer L_logic, which encodes procedural rules. The edges E define the relationships between these layers. Specifically, the logic layer L_logic has directed edges to both the episodic and semantic layers, denoted E_{logic→epi} and E_{logic→sem}, respectively. Moreover, the episodic and semantic layers are interconnected via shared entity anchors, represented by the edges E_{epi↔sem}.
Specifically, the proposed memory system consists of the Episodic Layer, Semantic Layer and Logic Layer.
(1) Episodic Layer. The episodic layer serves as the observational foundation of the memory system, recording fine-grained event descriptions grounded in multimodal perception. Each episodic node e = (t, d, v_e) stores the timestamp t indicating its temporal position within the observation stream, a textual description d, and the corresponding embedding v_e = φ(d) ∈ R^d. The description d is an atomic event narrative that synthesizes visual and auditory signals into a unified textual representation. Each description explicitly references recognized entities through perceptual entity anchors, which are persistent identity nodes established via clustering of face embeddings and voice embeddings (detailed in Section 4.2.1).
These entity references create edges from episodic nodes to the corresponding anchors, enabling entity-centric indexing so that all events involving a given identity can be efficiently retrieved across the entire temporal span of the memory.
(2) Semantic Layer. The semantic layer abstracts and consolidates knowledge at a higher level, maintaining entity-centric summaries that evolve as new observations accumulate. Each semantic node s = (type, attrs, v_s) encodes a specific facet of abstracted knowledge, where type categorizes the knowledge modality, attrs accumulates the descriptive content, and v_s = φ(attrs) ∈ R^d is the node embedding. Like episodic nodes, semantic nodes are connected to entity anchors through edges, but they differ fundamentally in their update semantics. Rather than appending every observation as a new node, the semantic layer employs a reinforcement-based consolidation policy to maintain knowledge coherence: when a new semantic observation s_new arrives, the system computes its embedding similarity against existing semantic nodes that share at least one entity anchor; if the similarity exceeds a positive threshold τ_pos, the existing node’s confidence is reinforced by incrementing its associated edge weight, and no new node is created; only when no sufficiently similar node exists is a new semantic node inserted into the layer.
(3) Logic Layer. Each Logic Node N = (id, c, I, G, F) pairs Index Vectors I for neural discovery with a Procedural DAG G for symbolic querying, along with a goal description c and deterministic query functions F. Each node also maintains episodic_links ⊆ L_epi referencing supporting observations for evidence traceability.
Index Vectors. Since a procedure comprises multiple steps, user queries may match either the high-level goal or specific intermediate steps.
To accommodate both granularities, we maintain dual-level Index Vectors I = {i_goal, i_step}: the goal-level index i_goal = φ(c) embeds the goal description of the procedure, while the step-level index i_step = (1/|S|) Σ_{s∈S} φ(s) averages the embeddings of all step descriptions S = {s_1, …, s_n}. This dual-index design mirrors how search engines index both document titles and contents, ensuring that both goal-oriented and step-specific queries can locate the relevant node.
Procedural DAG. Index Vectors solve the discovery problem but cannot answer structural questions such as step ordering or constraint satisfaction. For such queries, we represent explicit symbolic structure as a Procedural DAG G = (V, E, A), where V = {v_0, v_1, …, v_n, v_{n+1}} includes the distinguished nodes v_0 = START and v_{n+1} = GOAL, E ⊆ V × V encodes valid step transitions, and A: V → 2^Attr maps each node to relevant attributes. DAGs offer three advantages: expressiveness through concurrent execution paths, constraint-aware filtering for alternatives, and probabilistic semantics via absorbing Markov chain modeling. In our implementation, observations from individual videos initially produce single-path DAGs; through knowledge fusion (Section 4.2.3), these merge into multi-path DAGs capturing procedural variations.
(4) Edges. These layers play complementary roles and interact with each other to support complex reasoning and decision-making. The Logic Layer connects to the Episodic Layer through episodic_links: each Logic Node N maintains references to specific episodic observations that serve as evidence grounding for the abstracted procedure, enabling the system to trace back to the underlying observations when needed.
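To make the Logic Node layout concrete, here is a minimal Python sketch of a node carrying dual index vectors, edge-level transition counts over its DAG, and the exponential-moving-average index refresh described in Section 4.2.2. The class name, the dict-based edge encoding, and plain-list vectors are illustrative assumptions for this sketch, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class LogicNode:
    """Illustrative Logic Node: dual-level index vectors plus a procedural DAG
    stored as edge-level transition counts (names are hypothetical)."""
    goal: str
    i_goal: list   # goal-level index vector, phi(c)
    i_step: list   # mean of step-description embeddings
    edges: dict = field(default_factory=dict)           # (u, v) -> count N_uv
    episodic_links: list = field(default_factory=list)  # ids of supporting episodic nodes

    def observe_transition(self, u: str, v: str) -> None:
        # each observed transition u -> v increments its count
        self.edges[(u, v)] = self.edges.get((u, v), 0) + 1

    def transition_prob(self, u: str, v: str) -> float:
        # estimated P(v | u) = N_uv / sum over u's outgoing edge counts
        total = sum(n for (a, _), n in self.edges.items() if a == u)
        return self.edges.get((u, v), 0) / total if total else 0.0

    def ema_update(self, v_obs: list, beta: float = 0.9) -> None:
        # i <- beta * i + (1 - beta) * phi(o_new), applied to both indexes
        self.i_goal = [beta * x + (1 - beta) * y for x, y in zip(self.i_goal, v_obs)]
        self.i_step = [beta * x + (1 - beta) * y for x, y in zip(self.i_step, v_obs)]
```

With the dict encoding, a fused multi-path DAG is simply a node whose counts branch at the same source action, so alternative paths fall out of `transition_prob` for free.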
In contrast, the Logic Layer relates to the Semantic Layer through conceptual extension: while the Semantic Layer stores static entity attributes, the Logic Layer captures dynamic behavioral patterns involving those entities. Within each layer, the Episodic and Semantic layers organize around entity anchors, i.e., perceptual nodes representing recognized identities from video observations. Episodic nodes and Semantic nodes both connect to relevant entity anchors, with temporal ordering represented implicitly through timestamps rather than explicit edges. The Logic Layer introduces Logic Nodes that are relatively independent of each other (representing distinct procedures) but connect downward to episodic evidence through episodic_links.
4.2. Memory Construction and Maintenance
Next, we introduce SK-Gen, which automatically constructs and updates the memory. The process is summarized in Algorithm 1.
4.2.1. Memory Construction Pipeline
As multimodal video streams arrive, the system segments them into clips and extracts perceptual features: face embeddings via ArcFace (Deng et al., 2019) and voice embeddings via ERes2Net (Chen et al., 2023), with detected instances clustered to establish persistent entity anchors. A vision-language model then processes each clip along with the detected face and voice features, generating two types of textual outputs: (1) atomic event descriptions capturing observable actions, dialogues, and scene details with entity references, and (2) high-level conclusions summarizing character attributes, interpersonal relationships, and contextual knowledge. The former become Episodic Nodes e = (t, d, v_e); the latter populate Semantic Nodes s = (type, attrs, v_s) following the layer-specific update policies defined in Section 4.1. Algorithm 1 formalizes the complete construction pipeline, including observation processing (Phase 1) and Logic Node distillation (Phase 2).
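The reinforcement-based consolidation policy that the semantic layer applies to each new conclusion can be sketched as follows. The dict-based node records, pure-Python cosine similarity, and the threshold defaults are assumptions for illustration; pruning of nodes whose weight drops to zero is left to the caller.

```python
import math

def consolidate(semantic_nodes, new_vec, new_entities,
                tau_pos=0.85, tau_neg=0.2):
    """Sketch of semantic consolidation: reinforce a sufficiently similar node,
    weaken a contradicting one, otherwise insert a fresh node. Candidates must
    share at least one entity anchor with the new observation."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb + 1e-12)

    cands = [n for n in semantic_nodes if new_entities & n["entities"]]
    sims = [(cos(new_vec, n["vec"]), n) for n in cands]
    for sim, n in sims:                 # similarity above tau_pos: reinforce
        if sim > tau_pos:
            n["weight"] += 1            # no new node created
            return "reinforced"
    for sim, n in sims:                 # similarity below tau_neg: weaken
        if sim < tau_neg:
            n["weight"] -= 1            # caller prunes if weight <= 0
            return "weakened"
    semantic_nodes.append({"vec": list(new_vec),
                           "entities": set(new_entities), "weight": 1})
    return "inserted"
```

Keeping edge weights on existing nodes rather than appending every conclusion is what prevents the semantic layer from growing linearly with the observation stream.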
The pipeline then transforms episodic memories into Logic Nodes through five sequential steps.
Step 1: Action Sequence Extraction. From the observation stream O = {o_1, o_2, …, o_K} and episodic memories L_epi, we extract temporally ordered action sequences. For each video or session v, we obtain S_v = ExtractActions({e ∈ L_epi : e.video = v}), producing the sequence set S_seq = {S_1, S_2, …, S_V}, where each S_v = [a_1, a_2, …, a_L] is an ordered list of actions. Action extraction may use rule-based pattern matching on episodic descriptions or LLM-based conversion to structured action representations.
Step 2: Sequential Pattern Mining. We apply PrefixSpan (Pei and others, 2001), a sequential pattern mining algorithm, to discover recurring procedural motifs. Unlike set-based mining algorithms, PrefixSpan preserves temporal ordering: the pattern [cut, blanch] is distinct from [blanch, cut]. The algorithm efficiently explores the pattern space through projected databases, outputting all patterns p satisfying support(p) = |{S ∈ S_seq : p ⊆ S}| / |S_seq| ≥ σ, where σ is the minimum support threshold. This yields the candidate patterns P_cand = {p : support(p) ≥ σ}.
Step 3: Knowledge Verification. Frequent patterns are not necessarily meaningful procedures. For instance, [pick_up, put_down] may be frequent but does not constitute coherent knowledge. We employ an LLM-based filter to evaluate whether each candidate represents complete, reusable knowledge: score_p = LLMVerify(p, M_rel), where M_rel contains related memories as context. Only patterns with score_p > τ proceed to structure extraction.
Step 4: DAG Construction.
For verified patterns, we construct the Procedural DAG by creating nodes V = {START} ∪ {v_a : a ∈ p} ∪ {GOAL}, edges E = {(v_i, v_{i+1}) : consecutive actions in p}, and extracting attributes A(v) from associated episodic memories. We also establish episodic_links pointing to the specific episodic nodes supporting this pattern.
Step 5: Index Generation. Finally, we compute the Index Vectors of the logic node: i_goal = φ(p.goal) for goal-level matching, and i_step = (1/|p.steps|) Σ_{s ∈ p.steps} φ(s), averaging the step embeddings, for step-level matching. The complete Logic Node N = (id, c, I, G, F) is then added to L_logic.
4.2.2. Incremental Maintenance
When new observations arrive in an open-world setting, we avoid costly full reconstruction by incrementally updating affected Logic Nodes through coupled neural and symbolic refinement.
Matching and Gating. For each new observation o_new, we first identify the best-matching Logic Node via neural discovery: N* = argmax_{N ∈ L_logic} sim(φ(o_new), I_N). Updates proceed only if the similarity exceeds a gating threshold δ, preventing noise from corrupting established knowledge. Observations with similarity below δ may represent novel procedures; these accumulate in a candidate pool until sufficient evidence triggers a new distillation cycle.
Neural Refinement via EMA. As new observations accumulate, the semantic distribution of a procedure may drift: users may describe the same task with varying terminology, or the procedure itself may evolve. Static Index Vectors would become increasingly misaligned with current usage patterns, degrading retrieval accuracy.
To maintain alignment while avoiding catastrophic forgetting of historical semantics, we update Index Vectors using an Exponential Moving Average (EMA):
(1) i_{t+1} = β · i_t + (1 − β) · φ(o_new), β ∈ [0, 1],
where i_t is the current index vector and β (default 0.9) controls the decay rate. EMA naturally balances stability with adaptability: high β values yield stable indexes that resist noise, while lower values increase sensitivity to distributional shifts. Unlike direct replacement, which catastrophically overwrites historical information, EMA preserves established semantics while gradually incorporating new linguistic variations.
Symbolic Refinement via Transition Statistics. Beyond neural refinement, the symbolic structure also benefits from new observations. Real-world procedures exhibit variation: some steps are more commonly taken than others, and knowing these frequencies enables probabilistic reasoning about typical execution paths and alternative reliability. To capture this, we maintain edge-level transition counts N_ij for each (v_i, v_j) ∈ E in the Procedural DAG G = (V, E). Each observed transition v_i → v_j increments the count, N_ij ← N_ij + 1, yielding the estimated transition probability:
(2) P̂(v_j | v_i) = N_ij / Σ_{k : (v_i, v_k) ∈ E} N_ik.
When observations reveal a previously unseen but valid action, we expand the graph by inserting new nodes or edges with initialized statistics, thereby increasing coverage of procedural diversity while preserving determinism for the existing structure. A concern is whether these incrementally updated statistics actually converge to meaningful values, or might drift arbitrarily. Fortunately, the counting-based estimator enjoys strong theoretical guarantees:
Theorem 2 (Posterior Consistency).
As the number of observations n → ∞, the estimated transition probabilities converge almost surely to the true underlying probabilities: P̂(v_j | v_i) → P*(v_j | v_i) a.s. Proof. See Appendix A.3.

Algorithm 1 SK-Gen: Memory Construction and Maintenance
Input: observation stream O = {o_1, o_2, …, o_K}; consolidation thresholds τ_pos, τ_neg; support threshold σ; verification threshold τ; gating threshold δ; EMA coefficient β
Output: memory system M = (L_epi, L_sem, L_logic)
// Phase 1: Observation Processing
1:  A ← ∅; L_epi ← ∅; L_sem ← ∅
2:  for each clip o_k in O do
3:      F_k ← ArcFace(o_k); U_k ← ERes2Net(o_k)  ▷ Perceptual extraction
4:      A ← ClusterAndTrack(A, F_k, U_k)  ▷ Entity anchor update
5:      (D_k, C_k) ← VLM(o_k, A)  ▷ Descriptions & conclusions
6:      for each description d ∈ D_k do  ▷ Episodic construction
7:          e ← (t_k, d, φ(d)); L_epi ← L_epi ∪ {e}
8:          link e to each entity anchor a ∈ ParseEntities(d)
9:      for each conclusion s_new ∈ C_k do  ▷ Semantic consolidation
10:         S_cand ← {s ∈ L_sem : Entities(s_new) ⊆ Entities(s)}
11:         if ∃ s ∈ S_cand : sim(φ(s_new), v_s) > τ_pos then
12:             Reinforce(s)  ▷ edge weights +1
13:         else if ∃ s ∈ S_cand : sim(φ(s_new), v_s) < τ_neg then
14:             Weaken(s)  ▷ edge weights −1; prune if ≤ 0
15:         else
16:             L_sem ← L_sem ∪ {(type_new, s_new, φ(s_new))}
// Phase 2: Logic Distillation
17: L_logic ← ∅
18: S_seq ← ExtractActionSequence(O, L_epi)  ▷ Step 1
19: P_cand ← PrefixSpan(S_seq, σ)  ▷ Step 2
20: for each pattern p ∈ P_cand do
21:     M_rel ← RetrieveRelatedMemories(p, L_epi)
22:     score_p ← LLMVerify(p, M_rel)  ▷ Step 3
23:     if score_p > τ then
24:         G ← ConstructDAG(p, M_rel)  ▷ Step 4
25:         i_goal ← φ(p.goal); i_step ← Mean({φ(s) : s ∈ p.steps})  ▷ Step 5
26:         N ← (id, p.goal, {i_goal, i_step}, G, F)
27:         L_logic ← L_logic ∪ {N}
// Phase 3: Incremental Maintenance
28: for each new observation o_new do
29:     N* ← argmax_{N ∈ L_logic} sim(φ(o_new), I_N)  ▷ Matching
30:     if sim(φ(o_new), I_{N*}) > δ then  ▷ Gating
31:         i ← β·i + (1 − β)·φ(o_new) for each i ∈ I_{N*}  ▷ Neural refinement
32:         G_{N*} ← UpdateTransitions(G_{N*}, o_new)  ▷ Symbolic refinement
33: return M

4.2.3. Knowledge Fusion Real-world procedures rarely have a single canonical execution. Different individuals may perform the same task with variations in step ordering, optional steps, or alternative methods. When the same procedure is observed across multiple videos, each observation initially yields a single-path DAG representing one execution variant. Maintaining these as separate Logic Nodes would fragment the knowledge base, making retrieval less effective and preventing the system from recognizing that these variants represent the same underlying procedure. We propose a Knowledge Fusion phase that merges single-path DAGs into a unified multi-path DAG through three operations: (1) node alignment via embedding similarity and optimal bipartite matching to identify semantically equivalent steps across DAGs; (2) edge union to preserve all observed transitions, creating branching points where procedures diverge; and (3) statistics pooling to combine transition counts via Bayesian conjugacy. The fused DAG captures the full space of procedural variations while maintaining accurate transition statistics, enabling constraint-based queries to explore all valid alternatives. The following theorem demonstrates that the fusion operation is sound, ensuring that it does not introduce spurious paths or corrupt statistics: Theorem 3 (Fusion Consistency).
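As a concrete illustration, the semantic-consolidation branch of SK-Gen (Phase 1) and the gated EMA refinement (Phase 3) can be sketched as follows. This is a minimal sketch, not the released implementation: the dict-based node layout and the `consolidate`/`ema_refine` names are our own, and the thresholds τ_pos = 0.8 and τ_neg = 0.2 are illustrative (only δ = 0.5 and β = 0.9 are values reported in the paper).

```python
import numpy as np

TAU_POS, TAU_NEG = 0.8, 0.2   # consolidation thresholds (illustrative values)
DELTA, BETA = 0.5, 0.9        # gating threshold and EMA coefficient from the paper

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def consolidate(sem_layer, s_new):
    """Phase-1 semantic consolidation: reinforce, weaken, or insert a conclusion.

    sem_layer: list of dicts {"entities": set, "vec": np.ndarray, "weight": int}
    s_new:     dict {"entities": set, "vec": np.ndarray}
    """
    cand = [s for s in sem_layer if s_new["entities"] <= s["entities"]]
    for s in cand:
        sim = cos(s_new["vec"], s["vec"])
        if sim > TAU_POS:
            s["weight"] += 1          # Reinforce: edge weight +1
            return sem_layer
        if sim < TAU_NEG:
            s["weight"] -= 1          # Weaken: edge weight -1; prune at <= 0
            if s["weight"] <= 0:
                sem_layer.remove(s)
            return sem_layer
    sem_layer.append({**s_new, "weight": 1})   # no match: add a new semantic node
    return sem_layer

def ema_refine(index_vec, obs_vec):
    """Phase-3 neural refinement: gated exponential moving average update."""
    if cos(obs_vec, index_vec) > DELTA:
        return BETA * index_vec + (1 - BETA) * obs_vec
    return index_vec
```

The gating check in `ema_refine` mirrors line 30 of Algorithm 1: observations that are not similar enough to any Logic Node's index vector leave memory untouched, preventing noisy drift.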
Assuming input DAGs are valid observations of the same underlying procedure, the fusion operation preserves correctness: aligned nodes correspond to the same action with high probability, pooled parameters equal the posterior from the union of observations, and the fused structure retains all valid alternatives. Proof. See Appendix A.4. 4.3. Hybrid Retrieval and Reasoning With the memory architecture established, the agent now needs to access the right knowledge at query time. This requires not only finding relevant memories across heterogeneous layers, but also extracting structured information from Logic Nodes when queries demand precise, constraint-aware answers. 4.3.1. Query Classification Different queries demand different retrieval strategies. We classify incoming queries q ∈ Q into three types T ∈ {factual, constraint, character}: Factual queries request event recall or entity attributes and are best served by L_epi and L_sem. Constraint queries impose explicit feasibility constraints and require symbolic operations on G for resolution. Character queries seek personality traits, behavioral patterns, or role summaries of specific individuals and benefit from cross-procedure aggregation via the Logic Layer. Classification employs a two-tier approach. A rule-based pre-filter provides fast initial classification based on lexical patterns indicating constraint or character intent; all other queries default to factual. For ambiguous cases, an LLM-based classifier refines the prediction. The resulting classification T guides subsequent retrieval weighting. 4.3.2. Multi-Granularity Retrieval A single query may relate to memory at different levels of abstraction: users sometimes ask about high-level goals and sometimes about specific intermediate steps. To handle both, retrieval proceeds in two stages that leverage the dual-level Index Vectors. Stage I (Neural Discovery) performs broad similarity search across all memory layers.
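The two-tier classifier described above can be sketched as follows. The lexical patterns and the `classify` helper are illustrative assumptions; the paper does not publish its exact rule set, and the LLM refinement step is stubbed out as an optional callback.

```python
import re

# Illustrative lexical patterns; the paper does not list its actual rules.
CONSTRAINT_PAT = re.compile(
    r"\b(without|must|only|avoid|at most|at least)\b", re.I)
CHARACTER_PAT = re.compile(
    r"\b(personality|usually|tends? to|what kind of person|habit)\b", re.I)

def classify(query: str, llm_fallback=None) -> str:
    """Two-tier classification: fast rule-based pre-filter, optional LLM refinement."""
    if CONSTRAINT_PAT.search(query):
        return "constraint"
    if CHARACTER_PAT.search(query):
        return "character"
    # Ambiguous cases could be refined by an LLM-based classifier here.
    if llm_fallback is not None:
        return llm_fallback(query)
    return "factual"   # default when no pattern fires
```

The returned label T is then used only for re-ranking weights (Section 4.3.2), so a misclassification degrades ranking rather than breaking retrieval outright.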
For Logic Nodes, retrieval scores combine goal-level and step-level matching: (3) score(q, N) = α · sim(φ(q), i_goal) + (1 − α) · sim(φ(q), i_step), where α ∈ [0, 1] (default 0.3) balances high-level intent matching against specific content matching. This dual-index approach ensures that both goal-oriented and step-specific queries can discover relevant Logic Nodes. The initial retrieval returns candidates R_init(q) = {n ∈ M : score(q, n) > θ}, where θ is the threshold. Stage II (Type-Aware Re-ranking) re-weights candidates based on the query classification T to prioritize the most relevant layer: (4) score_final(n) = score_init(n) · w_layer(n, T), where w_layer assigns higher weights to Episodic/Semantic nodes for T = factual and to Logic Nodes for T ∈ {constraint, character}. This strategy ensures that constraint and character queries surface Logic Nodes while factual queries prioritize episodic evidence. 4.3.3. Symbolic Enhancement for Reasoning Once a relevant Logic Node is retrieved, its Procedural DAG G provides structured knowledge that can be queried programmatically. This is where the symbolic component becomes essential: rather than asking an LLM to "figure out" step sequences or filter by constraints from unstructured text, we directly traverse the DAG to extract exactly the information needed—enumerating valid paths, filtering by attribute constraints, or aggregating cross-procedure statistics. These operations are fast (O(|Π| · L) for path enumeration) and deterministic, ensuring reproducible answers. Formally, a symbolic query function is a mapping f : (G, x) ↦ y, where x is query-specific input and y is the structured output.
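A minimal sketch of the two-stage scoring, assuming cosine similarity for sim(·,·). The numeric values in `LAYER_WEIGHTS` are illustrative: the paper specifies only the ordering (higher weight for Episodic/Semantic layers under factual queries, higher for the Logic layer under constraint/character queries), not the weights themselves.

```python
import numpy as np

ALPHA = 0.3  # default goal-vs-step balance reported in the paper

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def logic_node_score(q_vec, i_goal, i_step, alpha=ALPHA):
    """Eq. (3): blend goal-level and step-level similarity for a Logic Node."""
    return alpha * cos(q_vec, i_goal) + (1 - alpha) * cos(q_vec, i_step)

# Illustrative layer weights; only their ordering is given in the paper.
LAYER_WEIGHTS = {
    "factual":    {"episodic": 1.2, "semantic": 1.2, "logic": 0.8},
    "constraint": {"episodic": 0.8, "semantic": 0.8, "logic": 1.3},
    "character":  {"episodic": 0.8, "semantic": 0.8, "logic": 1.3},
}

def rerank(candidates, query_type):
    """Eq. (4): type-aware re-weighting of initial retrieval scores.

    candidates: iterable of (node_id, init_score, layer_name) triples.
    Returns (node_id, final_score) pairs sorted by final score, descending.
    """
    w = LAYER_WEIGHTS[query_type]
    return sorted(
        ((n, s * w[layer]) for n, s, layer in candidates),
        key=lambda t: t[1], reverse=True,
    )
```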
We implement three core functions: (1) getProcedureWithEvidence(goal) → (G, episodic_links): returns the Procedural DAG along with supporting episodic evidence for a specified goal. This function enables evidence-grounded reasoning by providing both the abstract procedure and the concrete observations from which it was derived. (2) queryStepSequence(goal, C) → Π_C: returns all paths from START to GOAL satisfying constraints C. Formally: (5) Π_C = {π ∈ Π(v_0, v_{n+1}) : ∀v ∈ π, A(v) ⊧ C}, where Π(v_0, v_{n+1}) denotes all paths in G and A(v) ⊧ C indicates that node v's attributes satisfy constraint C. This function handles constraint queries by filtering paths whose every node fulfills the specified feasibility requirements. (3) aggregateCharacterBehaviors(person) → {N_1, …, N_k}: returns all Logic Nodes linked to a specified person entity, enabling character-centric aggregation queries. This cross-procedure aggregation cannot be answered by individual embeddings alone. The following theorem demonstrates that the symbolic query functions are deterministic: Theorem 4 (Determinism Guarantee). All symbolic query functions f ∈ F are deterministic: for any fixed G and input x, repeated invocations of f(G, x) always return identical output y. Proof. See Appendix A.5. 4.4. Case Study Figure 3. Case study on vector-centric Memory and NS-Mem. Figure 3 demonstrates how NS-Mem outperforms vector-centric memory in reasoning tasks. In Q1, NS-Mem successfully infers Jack's intent by linking "building a bed" with "missing screws" via a Logic Node, whereas the baseline fails due to noise. In Q2, NS-Mem answers in a single turn by mapping the event "Tom beat Alex" to a pre-structured logic chain, while the baseline requires 3 turns of multi-hop retrieval. This highlights NS-Mem's superior capability in denoising and accelerating complex reasoning. 5. Experiments Table 1.
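The queryStepSequence operation can be sketched over a toy DAG as follows. The adjacency-dict representation, the `query_step_sequence` name, and the example attributes are illustrative assumptions; the point is that path filtering per Eq. (5) reduces to a deterministic DFS with a node-level constraint predicate.

```python
def query_step_sequence(dag, attrs, constraint, start="START", goal="GOAL"):
    """Enumerate all START-to-GOAL paths whose every node satisfies the constraint.

    dag:        adjacency dict {node: [successor, ...]} (assumed acyclic)
    attrs:      dict {node: attribute dict} (the A(v) map)
    constraint: predicate over a node's attributes (A(v) |= C)
    """
    paths = []

    def dfs(node, path):
        if node not in (start, goal) and not constraint(attrs.get(node, {})):
            return                       # prune: this node violates C
        path = path + [node]
        if node == goal:
            paths.append(path)
            return
        for nxt in dag.get(node, []):
            dfs(nxt, path)

    dfs(start, [])
    return paths

# Toy procedure with two execution variants for the same goal.
dag = {"START": ["drill", "screw_in"], "drill": ["screw_in"],
       "screw_in": ["GOAL"], "glue": ["GOAL"]}
```

Usage: with a constraint such as "no screwdriver available", every path containing a node whose attributes require a screwdriver is pruned, leaving only the feasible alternatives captured by the fused multi-path DAG.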
Performance comparison on M3-Bench-robot and M3-Bench-web. MD: Multi-Detail, MH: Multi-Hop, CM: Cross-Modal, HU: Human Understanding, GK: General Knowledge. Best results in each column are underlined.

                              M3-Bench-robot                      M3-Bench-web
Method                MD    MH    CM    HU    GK   All     MD    MH    CM    HU    GK   All
Socratic Models
Qwen2.5-Omni-7b      2.1   1.4   1.5   1.5   2.1   2.0    8.9   8.8  13.7  10.8  14.1  11.3
Qwen2.5-VL-7b        2.9   3.8   3.6   4.6   3.4   3.4   11.9  10.5  13.4  14.0  20.9  14.9
Gemini-1.5-Pro       6.5   7.5   8.0   9.7   7.6   8.0   18.0  17.9  23.8  23.1  28.7  23.2
GPT-4o               9.3   9.0   8.4  10.2   7.3   8.5   21.3  21.9  30.9  27.1  39.6  28.7
Online Video Understanding
MovieChat           13.3   9.8  12.2  15.7   7.0  11.2   12.2   6.6  12.5  17.4  11.1  12.6
MA-LMM              25.6  23.4  22.7  39.1  14.4  24.4   26.8  10.5  22.4  39.3  15.8  24.3
Flash-VStream       21.6  19.4  19.3  24.3  14.1  19.4   24.5  10.3  24.6  32.5  20.2  23.6
Agent Methods
M3-Agent            32.8  29.4  31.2  43.3  19.1  30.7   45.9  28.4  44.3  59.3  53.9  48.9
NS-Mem              36.2  31.5  33.8  45.7  26.4  34.7   54.2  34.6  47.8  60.1  59.7  53.6

Table 2. Accuracy by Query Type
Method     Factual  Procedural  Constrained
M3-Agent     52.5      23.8        25.0
NS-Mem       54.3      35.7        37.5
Δ            +1.8     +11.9       +12.5

Table 3. Efficiency Comparison on M3-Bench Datasets
Dataset  Metric            Baseline  NS-Mem      Δ
Robot    Avg. Rounds         4.01     3.38   -15.8%
Robot    Avg. Time (sec)    45.47    42.11    -7.4%
Web      Avg. Rounds         3.14     2.84    -9.6%
Web      Avg. Time (sec)    36.04    34.57    -4.1%

Datasets. We evaluate our framework on M3-Bench (Long et al., 2025b), a comprehensive long-video question answering benchmark designed for memory-augmented agents. The benchmark consists of two primary subsets: M3-Bench-robot, which contains 100 real-world videos captured from a robot's perspective, and M3-Bench-web, which includes 920 web-sourced videos. The questions are categorized into five reasoning types: Multi-Detail (MD), Multi-Hop (MH), Cross-Modal (CM), Human Understanding (HU), and General Knowledge (GK). Due to computational constraints, we conduct evaluations on 50 videos (703 questions) for M3-Bench-robot and 550 videos (2,066 questions) for M3-Bench-web. Baseline Methods.
Following prior work (Long et al., 2025b), we compare NS-Mem against three categories of methods: • Socratic Models: methods that directly query multimodal LLMs (Qwen2.5-Omni-7b, Qwen2.5-VL-7b, Gemini-1.5-Pro, and GPT-4o) without explicit memory; • Online Video Understanding Methods: methods designed for streaming video processing (MovieChat (Song and others, 2024), MA-LMM (He and others, 2024), and Flash-VStream (Zhang et al., 2024b)); • Agent Methods: M3-Agent (Long et al., 2025b), a state-of-the-art approach that utilizes episodic and semantic memory with vector-only retrieval. Metrics and Implementation. Accuracy is evaluated using GPT-4o as the judge, following the standard M3-Bench protocol. For NS-Mem, we set the hidden dimension to 512, the retrieval weight α to 0.3, and the verification threshold τ to 0.25. The incremental maintenance uses an EMA coefficient β of 0.9. All experiments are conducted on a server with an Intel(R) Xeon(R) Silver 4314 CPU, 512GB memory, and NVIDIA RTX A5000 GPUs. Figure 4. Hyper-parameter analysis across different thresholds and weights. (a) Impact of τ on accuracy across query types. (b) Impact of δ on knowledge consolidation and merge. (c) Impact of α on accuracy and efficiency. 5.1. Accuracy Comparison Exp-1: Overall performance comparison. In this experiment, we evaluate the overall accuracy of NS-Mem compared to all baselines. The results are summarized in Table 1. As shown in the table, NS-Mem consistently outperforms all baseline methods across both Robot and Web datasets. Specifically, NS-Mem achieves 53.6% accuracy on M3-Bench-web and 34.7% on M3-Bench-robot, representing absolute improvements of +4.7 and +4.0 points over M3-Agent. In contrast, Socratic Models and streaming methods show significantly lower performance. For instance, GPT-4o achieves only 8.5% on Robot, which is 4.1× lower than NS-Mem.
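For reference, the reported hyper-parameter settings can be collected into a single config sketch. The key names are our own; δ = 0.5 is taken from the hyper-parameter analysis in Section 5.5.

```python
# Hyper-parameter settings reported in the paper, gathered as a config sketch.
NS_MEM_CONFIG = {
    "hidden_dim": 512,               # embedding dimension d
    "retrieval_weight_alpha": 0.3,   # goal-vs-step balance in Eq. (3)
    "verification_tau": 0.25,        # LLM-verification threshold in SK-Gen
    "gating_delta": 0.5,             # EMA gating threshold (Exp-8)
    "ema_beta": 0.9,                 # EMA coefficient for neural refinement
}
```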
This is because our neuro-symbolic architecture provides a structured substrate for reasoning about procedural sequences, which baselines fail to capture. Exp-2: Accuracy over different reasoning types. We further analyze performance across the five reasoning types in Table 1. Notably, NS-Mem demonstrates substantial gains in MH and GK. For MH, we observe relative gains of 21.8% on Web, which is because the Procedural DAG enables explicit multi-step path enumeration. For GK, NS-Mem shows a 38.2% relative gain on Robot, benefiting from NS-Nodes that consolidate domain-specific procedural patterns from episodic observations. This validates that the logic layer effectively abstracts reusable knowledge from concrete experiences. Exp-3: Performance under different query types. To understand where neuro-symbolic memory provides the most value, we break down the results by query type in Table 2. We observe that procedural and constrained queries benefit most from the neuro-symbolic layer, with relative improvements of 50.0% for both. This is because the Procedural DAG explicitly encodes step-by-step logic, enabling deterministic constraint satisfaction via symbolic functions. In contrast, factual queries show 1.8% improvement, as they primarily require direct recall rather than structured reasoning. 5.2. Efficiency Evaluation Exp-4: Efficiency. In this experiment, we evaluate the efficiency of NS-Mem in terms of retrieval rounds and time. The results are summarized in Table 3. The results show that NS-Mem significantly reduces the number of query rounds from 4.01 to 3.38 on Robot. This is because symbolic functions like queryStepSequence() can return complete procedural sequences in a single call, eliminating the iterative cycles required by pure neural memory. This reduction leads to concrete time savings of 7.4% on Robot and 4.1% on Web, even with the minimal overhead of symbolic execution. 5.3. Ablation Study Exp-5: Ablation study. 
We evaluate the individual contributions of key components in Table 4. We can see that symbolic reasoning is the most critical component, providing 2.5× larger gains on Web compared to retrieval enhancement alone. Furthermore, we observe a synergistic interaction. For example, on the Web dataset, the combined gain of the full model (+4.7) exceeds the sum of the individual gains (+1.1 and +2.8, totaling +3.9). This is because the neural component (Index Vectors) improves retrieval precision, providing a more relevant substrate for symbolic reasoning to operate on.

Table 4. Ablation Study on M3-Bench Datasets
Configuration  Description              Web   Robot
Baseline       M3-Agent                 48.9  30.7
w/o Symbolic   +Neuro +DAG              50.0  31.7
w/o Neuro      +Symbolic +DAG           51.7  33.1
Full NS-Mem    +Neuro +Symbolic +DAG    53.6  34.7

5.4. Maintenance Evaluation Exp-6: Incremental update performance. In this experiment, we evaluate the capacity of NS-Mem to maintain NS-Nodes as new observations arrive incrementally. We compare the performance of static graphs with dynamic graphs. The results are summarized in Table 5. The results show that dynamic graphs consistently outperform static graphs in both accuracy and efficiency. Specifically, for accuracy, the dynamic approach achieves 34.7% on Robot and 53.6% on Web, representing absolute improvements of +0.9% and +0.6% respectively. Regarding efficiency, the dynamic graph reduces the average number of rounds from 3.52 to 3.38 on Robot and from 2.92 to 2.84 on Web. This is because the EMA-based refinement effectively incorporates new procedural variations while preserving historical knowledge, preventing semantic staleness without requiring costly full reconstruction.

Table 5. Incremental Update Evaluation.
Metric        Dataset  Static  Dynamic      Δ
Accuracy (%)  Robot     33.8    34.7     +0.9
Accuracy (%)  Web       53.0    53.6     +0.6
Avg. Rounds   Robot     3.52    3.38    -0.14
Avg. Rounds   Web       2.92    2.84    -0.08

5.5. Hyper-parameter Analysis Exp-7: Verification threshold (τ). We analyze the impact of τ on accuracy in Figure 4(a).
We observe that procedural and constraint queries are highly sensitive to τ, peaking at τ = 0.25. This is because setting τ too low admits spurious patterns that break symbolic logic, while setting it too high overly filters valid procedural knowledge. Exp-8: Gating threshold (δ). As shown in Figure 4(b), the gating threshold δ controls the balance between knowledge consolidation and noise prevention. The results show that merge error rates drop sharply until δ = 0.5, where they reach an elbow. This validates our choice of δ = 0.5 as the trade-off for incremental maintenance. Exp-9: Retrieval weight (α). We evaluate the impact of α in Figure 4(c). We observe that overall accuracy peaks at α = 0.3. This is because a moderate weight balances high-level goal intent with specific experiential grounding, ensuring that Memory Prototypes remain both discoverable and contextually accurate. 6. Conclusion In this paper, we presented NS-Mem, a long-term neuro-symbolic memory framework that bridges the gap between intuitive neural retrieval and deterministic symbolic reasoning for multimodal agents. By integrating a hierarchical three-layer architecture with explicit logic rules and procedural DAGs, NS-Mem enables agents to handle complex decision-making tasks that require constraint satisfaction and dependency reasoning. Our proposed SK-Gen mechanism further ensures that this memory can be automatically constructed and incrementally maintained from continuous multimodal observations. Extensive experiments on real-world benchmarks demonstrate that NS-Mem significantly outperforms the state-of-the-art approach, particularly in constrained reasoning scenarios where symbolic structures provide rigorous logical grounding. References Y. Bao, C. Wu, P. Zhang, C. Shan, Y. Qi, and X. Ben (2024) Boosting micro-expression recognition via self-expression reconstruction and memory contrastive learning. IEEE Transactions on Affective Computing 15 (4), p. 2083–2096. Cited by: §2.
M. Binz and E. Schulz (2022) Using cognitive psychology to understand GPT-3. CoRR abs/2206.14576. External Links: Link, Document, 2206.14576 Cited by: §1. Y. Chen, S. Zheng, H. Wang, L. Cheng, Q. Chen, and J. Qi (2023) An enhanced res2net with local and global feature fusion for speaker verification. In Interspeech, p. 3032–3036. Cited by: §4.2.1. A. d’Avila Garcez and L. C. Lamb (2023) Neurosymbolic AI: the 3rd wave. Artif. Intell. Rev. 56 (11), p. 12387–12406. External Links: Link, Document Cited by: §1. J. Deng, J. Guo, N. Xue, and S. Zafeiriou (2019) ArcFace: additive angular margin loss for deep face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, p. 4690–4699. Cited by: §4.2.1. A. Diko, T. Wang, W. Swaileh, S. Sun, and I. Patras (2025) ReWind: understanding long videos with instructed learnable memory. In Proceedings of the Computer Vision and Pattern Recognition Conference, p. 13734–13743. Cited by: §2. R. Evans and E. Grefenstette (2018) Learning explanatory rules from noisy data. In JAIR, Cited by: §2. L. Gao et al. (2023) PAL: program-aided language models. In ICML, Cited by: §2. A. d. Garcez and L. C. Lamb (2019) Neural-symbolic computing: an effective methodology for principled integration of machine learning and reasoning. Journal of Applied Logics 6 (4), p. 611–632. Cited by: §2. A. d. Garcez and L. C. Lamb (2023) Neurosymbolic ai: the 3rd wave. Artificial Intelligence Review. Cited by: §2. B. He, H. Li, Y. K. Jang, M. Jia, X. Cao, A. Shah, A. Shrivastava, and S. Lim (2024) Ma-lmm: memory-augmented large multimodal model for long-term video understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, p. 13504–13514. Cited by: §2. B. He et al. (2024) MA-lmm: memory-augmented large multimodal model for long-term video understanding. In CVPR, Cited by: §1, §2, 2nd item. P. Hitzler and Md. K. Sarker (Eds.) 
(2021) Neuro-symbolic artificial intelligence: the state of the art. Frontiers in Artificial Intelligence and Applications, Vol. 342, IOS Press. External Links: Link, Document, ISBN 978-1-64368-244-0 Cited by: §1. M. Hu, T. Chen, Q. Chen, Y. Mu, W. Shao, and P. Luo (2025) Hiagent: hierarchical working memory management for solving long-horizon agent tasks with large language model. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), p. 32779–32798. Cited by: §2. D. Kahneman (2011) Thinking, fast and slow. macmillan. Cited by: §1. P. Lewis et al. (2020) Retrieval-augmented generation for knowledge-intensive nlp tasks. In NeurIPS, Cited by: §1, §2. K. Lin, F. Ahmed, L. Li, C. Lin, E. Azarnasab, Z. Yang, J. Wang, L. Liang, Z. Liu, Y. Lu, et al. (2023) Mm-vid: advancing video understanding with gpt-4v (ision). arXiv preprint arXiv:2310.19773. Cited by: §2, §2. N. Liu, L. Chen, X. Tian, W. Zou, K. Chen, and M. Cui (2024a) From llm to conversational agent: a memory enhanced architecture with fine-tuning of large language models. arXiv preprint arXiv:2401.02777. Cited by: §2. W. Liu, Z. Tang, J. Li, K. Chen, and M. Zhang (2024b) Memlong: memory-augmented retrieval for long text modeling. arXiv preprint arXiv:2408.16967. Cited by: §2. Z. Liu, W. Yao, J. Zhang, L. Yang, Z. Liu, J. Tan, P. K. Choubey, T. Lan, J. Wu, H. Wang, et al. (2024c) Agentlite: a lightweight library for building and advancing task-oriented llm agent system. arXiv preprint arXiv:2402.15538. Cited by: §2. L. Long, Y. He, W. Ye, Y. Pan, Y. Lin, H. Li, J. Zhao, and W. Li (2025a) Seeing, listening, remembering, and reasoning: a multimodal agent with long-term memory. arXiv preprint arXiv:2508.09736. Cited by: §2. L. Long, Y. He, W. Ye, Y. Pan, Y. Lin, H. Li, J. Zhao, and W. Li (2025b) Seeing, listening, remembering, and reasoning: a multimodal agent with long-term memory. arXiv preprint arXiv:2508.09736. Cited by: §1, §2, 3rd item, §5, §5. K. 
Mei, X. Zhu, W. Xu, W. Hua, M. Jin, Z. Li, S. Xu, R. Ye, Y. Ge, and Y. Zhang (2024) Aios: llm agent operating system. arXiv preprint arXiv:2403.16971. Cited by: §2. C. Packer et al. (2023) MemGPT: towards llms as operating systems. In NeurIPS, Cited by: §1, §2. J. Pei et al. (2001) PrefixSpan: mining sequential patterns efficiently by prefix-projected pattern growth. In ICDE, Cited by: §4.2.1. F. Peiyuan, Y. He, G. Huang, Y. Lin, H. Zhang, Y. Zhang, and H. Li (2024) Agile: a novel reinforcement learning framework of llm agents. Advances in Neural Information Processing Systems 37, p. 5244–5284. Cited by: §2. T. Rocktäschel and S. Riedel (2017) End-to-end differentiable proving. In NeurIPS, Cited by: §2. S. Russell and P. Norvig (2010) Artificial intelligence: a modern approach. 3rd edition, Prentice Hall. Cited by: §1. A. Saparov and H. He (2023) Language models are greedy reasoners: A systematic formal analysis of chain-of-thought. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023, External Links: Link Cited by: §1. G. Sarch, Y. Wu, M. Tarr, and K. Fragkiadaki (2023) Open-ended instructable embodied agents with memory-augmented large language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, p. 3468–3500. Cited by: §2. Y. Shang, Y. Li, K. Zhao, L. Ma, J. Liu, F. Xu, and Y. Li (2024) Agentsquare: automatic llm agent search in modular design space. arXiv preprint arXiv:2410.06153. Cited by: §2. M. Shridhar et al. (2020) ALFRED: a benchmark for interpreting grounded instructions for everyday tasks. In CVPR, Cited by: §1. E. Song et al. (2024) MovieChat: from dense token to sparse memory for long video understanding. In CVPR, Cited by: §1, §2, §2, 2nd item. D. Surís, S. Menon, and C. Vondrick (2023) ViperGPT: visual inference via python execution for reasoning. In ICCV, Cited by: §2. Y. Tang et al. 
(2019) COIN: a large-scale dataset for comprehensive instructional video analysis. In CVPR, Cited by: §1. E. Tulving (1972) Episodic and semantic memory. Organization of memory 1, p. 381–403. Cited by: §1. K. Valmeekam, M. Marquez, S. Sreedharan, and S. Kambhampati (2023) On the planning abilities of large language models - A critical investigation. CoRR abs/2305.15771. External Links: Link, Document, 2305.15771 Cited by: §1. B. Wang, X. Liang, J. Yang, H. Huang, S. Wu, P. Wu, L. Lu, Z. Ma, and Z. Li (2023) Enhancing large language model with self-controlled memory framework. arXiv preprint arXiv:2304.13343. Cited by: §2. G. Wang et al. (2023) Voyager: an open-ended embodied agent with large language models. In NeurIPS, Cited by: §2. L. Wang, C. Ma, X. Feng, Z. Zhang, H. Yang, J. Zhang, Z. Chen, J. Tang, X. Chen, Y. Lin, W. X. Zhao, Z. Wei, and J. Wen (2024) A survey on large language model based autonomous agents. Frontiers Comput. Sci. 18 (6), p. 186345. External Links: Link, Document Cited by: §1. X. Wang et al. (2024) VideoAgent: long-form video understanding with large language model as agent. In ECCV, Cited by: §1. T. W. Webb, K. J. Holyoak, and H. Lu (2022) Emergent analogical reasoning in large language models. CoRR abs/2212.09196. External Links: Link, Document, 2212.09196 Cited by: §1. M. Wooldridge and N. R. Jennings (1995) Intelligent agents: theory and practice. The Knowledge Engineering Review 10 (2), p. 115–152. Cited by: §1. F. Yang et al. (2017) Differentiable learning of logical rules for knowledge base reasoning. In NeurIPS, Cited by: §2. K. Yi et al. (2018) Neural-symbolic vqa: disentangling reasoning from vision and language understanding. In NeurIPS, Cited by: §2. C. Zhang, K. Lin, Z. Yang, J. Wang, L. Li, C. Lin, Z. Liu, and L. Wang (2024a) Mm-narrator: narrating long-form videos with multimodal in-context learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, p. 13647–13657. Cited by: §2. H. Zhang, Y. 
Wang, Y. Tang, Y. Liu, J. Feng, J. Dai, and X. Jin (2024b) Flash-vstream: memory-based real-time understanding for long video streams. arXiv preprint arXiv:2406.08085. Cited by: §2, §2, 2nd item. P. Zhang, X. Dong, Y. Cao, Y. Zang, R. Qian, X. Wei, L. Chen, Y. Li, J. Niu, S. Ding, et al. (2024c) Internlm-xcomposer2.5-omnilive: a comprehensive multimodal system for long-term streaming video and audio interactions. arXiv preprint arXiv:2412.09596. Cited by: §2. W. Zhong, L. Guo, Q. Gao, H. Ye, and Y. Wang (2024) Memorybank: enhancing large language models with long-term memory. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 38, p. 19724–19731. Cited by: §2.

Appendix A
A.1. Notation Table
Table 6. Key notations used throughout this paper.
Notation                      Description
M                             Memory system M = (L_epi, L_sem, L_logic, E)
L_epi, L_sem, L_logic         Episodic, Semantic, and Logic layers
O = {o_1, o_2, …}             Multimodal observation stream
o_t = (o_t^v, o_t^a, o_t^s)   Observation at time t (visual, audio, text)
Q, A                          Query space and answer space
q ∈ Q                         A user query in natural language
e = (t, d, v_e)               Episodic node: timestamp, content, episodic embedding
s = (type, attrs, v_s)        Semantic node: entity type, attributes, semantic embedding
N = (id, c, I, G, F)          Logic Node: id, goal description, index vectors, DAG, functions
c                             Natural language goal description
I = {i_goal, i_step}          Index Vectors: goal-level and step-level embeddings
i_goal, i_step                Goal index φ(c) and step index (1/|S|) Σ_{s∈S} φ(s)
G = (V, E, A)                 Procedural DAG: nodes, edges, attribute function
F                             Set of deterministic symbolic query functions
φ : Text → R^d                Text embedding function
d                             Embedding dimension

A.2. Complexity Analysis NS-Mem operates continuously through two decoupled processes: incremental maintenance and query-driven reasoning.
In the incremental maintenance phase, for each incoming observation at time step t, the system performs pattern mining and prototype updates sequentially. The SK-Gen mechanism processes a sliding window of size w to update the episodic buffer, with a time complexity of O(w × d) for embedding computation, where d is the vector dimension. Regarding structural refinement, updating the transition statistics and edges in the Procedural DAG G = (V, E) takes O(1) via hash-based lookups. The prototype update via EMA requires element-wise operations with a complexity of O(d). Crucially, maintaining the vector index for L_logic involves incremental insertions. Using graph-based indexing structures, this requires O(d log|N|), where |N| is the number of Logic Nodes. Therefore, the total maintenance complexity per time step is dominated by the index update O(d log|N|), which is highly efficient compared to batch retraining. Regarding the retrieval and reasoning phase, the system first identifies relevant Logic Nodes using index-based retrieval. For a specific query q, the search complexity is O(d log|N|). Once a relevant node is identified, symbolic reasoning is executed on the associated Procedural DAG G. The traversal for finding a path or checking constraints takes O(|V| + |E|). Thus, supposing that NS-Mem runs for T time steps and handles Q queries, the overall time complexity is O(T · d log|N| + Q · (d log|N| + |V| + |E|)). A.3. Proof of Theorem 2 (Posterior Consistency) Proof. We prove consistency for both parameter types. Node success rates (Beta-Binomial): The posterior mean is: (6) R̂(v) = (α + s) / (α + β + n), where s is the number of successes out of n trials. As n → ∞: (7) R̂(v) = (α + s) / (α + β + n) = (α/n + s/n) / (α/n + β/n + 1) → s/n → R*(v) by the strong law of large numbers (s/n → R*(v) a.s.).
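The convergence in Eqs. (6)-(7) can be checked numerically. Below is a minimal sketch with a uniform Beta(1, 1) prior and simulated Bernoulli observations; the function name and the true rate p_true = 0.7 are illustrative assumptions.

```python
import random

def beta_posterior_mean(s, n, alpha=1.0, beta=1.0):
    """Posterior mean (alpha + s) / (alpha + beta + n), as in Eq. (6)."""
    return (alpha + s) / (alpha + beta + n)

random.seed(0)
p_true = 0.7  # illustrative true success rate R*(v)

# Simulate Bernoulli observations of a node's success rate at two sample sizes.
est = {}
for n in (100, 10_000):
    s = sum(random.random() < p_true for _ in range(n))
    est[n] = beta_posterior_mean(s, n)
# By the strong law of large numbers, the posterior mean approaches p_true as n grows.
```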
Edge probabilities (Dirichlet-Multinomial): The posterior mean for edge (u, v_j) is: (8) p̂_{u,v_j} = (γ_j + c_j) / Σ_i (γ_i + c_i), where c_j is the count of transitions to v_j. As the total number of observations n = Σ_i c_i → ∞: (9) p̂_{u,v_j} → c_j / n → p*_{u,v_j} almost surely, again by the strong law of large numbers. Both results follow from Doob's posterior consistency theorem for exponential families with compact parameter spaces. ∎ A.4. Proof of Theorem 3 (Fusion Consistency) Proof. We show that the fusion algorithm preserves correctness under the assumption that input DAGs are valid observations of the same true procedure. Node alignment correctness: Using semantic embeddings with cosine similarity and threshold τ = 0.8, nodes representing the same action (with potentially different surface forms) are correctly matched with high probability. The Hungarian algorithm guarantees optimal bipartite matching. Parameter fusion correctness: The Bayesian fusion rule (10) α_fused = α_1 + α_2 − 1, (11) β_fused = β_1 + β_2 − 1 is equivalent to pooling observations: under the shared uniform prior α_0 = β_0 = 1, subtracting 1 removes the double-counted prior, so if DAG 1 observed (s_1, n_1) successes/trials and DAG 2 observed (s_2, n_2), the fused posterior is: (12) Beta(α_0 + s_1 + s_2, β_0 + (n_1 − s_1) + (n_2 − s_2)), which is the correct posterior for the combined observations. Structure preservation: New edges (alternative paths) discovered in one video but not another are added to the fused DAG with appropriate Laplace-smoothed initial probabilities. This ensures no valid alternatives are lost. By Theorem 2, as more videos are fused, parameters converge to true values, and the structure asymptotically captures all valid paths. ∎ A.5. Proof of Theorem 4 (Determinism) Proof. We prove that each of the three symbolic query functions produces deterministic outputs.
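A worked instance of the pooling rule in Eqs. (10)-(12), assuming the shared uniform prior Beta(1, 1); `fuse_beta` is an illustrative helper name.

```python
def fuse_beta(params1, params2, alpha0=1.0, beta0=1.0):
    """Pool two Beta posteriors over the same aligned node (Eqs. 10-12).

    Assumes both posteriors share the prior Beta(alpha0, beta0); fusion
    subtracts one copy of the prior so it is not counted twice.
    """
    a1, b1 = params1
    a2, b2 = params2
    return (a1 + a2 - alpha0, b1 + b2 - beta0)

# DAG 1 saw 3 successes in 4 trials; DAG 2 saw 5 successes in 6 trials.
post1 = (1.0 + 3, 1.0 + 1)       # Beta(4, 2)
post2 = (1.0 + 5, 1.0 + 1)       # Beta(6, 2)
fused = fuse_beta(post1, post2)  # Beta(9, 3) = Beta(1 + 3 + 5, 1 + 1 + 1)
```

The fused parameters match Eq. (12) applied to the union of observations, which is the consistency property Theorem 3 asserts.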
Function 1 (getProcedureWithEvidence): Given a goal description, this function retrieves the corresponding Logic Node and returns its Procedural DAG G together with the associated episodic_links. Since the Logic Node is identified through a fixed similarity computation on static Index Vectors, and both G and episodic_links are stored data structures, the output is a deterministic lookup with no stochastic component. Function 2 (queryStepSequence): Given a goal and constraint set C, this function enumerates all paths Π(v_0, v_{n+1}) from START to GOAL in the fixed DAG G via depth-first traversal, then filters by checking A(v) ⊧ C for every node v along each path. Graph traversal on a static structure is deterministic, and attribute-constraint satisfaction is a Boolean predicate evaluated through set/arithmetic operations. Hence the filtered path set Π_C is uniquely determined by (G, C). Function 3 (aggregateCharacterBehaviors): Given a person entity identifier, this function scans the Logic Layer L_logic and collects all Logic Nodes whose episodic_links reference episodic nodes associated with the specified entity anchor. Both the entity-anchor association and the episodic_links are fixed stored references, making the aggregation a deterministic filtering operation over a static set. Since none of the three functions involves random sampling, stochastic models, or LLM inference during computation, all outputs are reproducible given identical inputs. ∎
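The determinism argument for Function 2 can be exercised directly: DFS over a fixed adjacency dict yields the same path list on every invocation. A toy self-contained check, with illustrative names:

```python
def enumerate_paths(dag, start="START", goal="GOAL"):
    """Deterministic DFS path enumeration over a static adjacency dict."""
    out = []

    def dfs(node, path):
        path = path + [node]
        if node == goal:
            out.append(path)
            return
        for nxt in dag.get(node, []):
            dfs(nxt, path)

    dfs(start, [])
    return out

dag = {"START": ["a", "b"], "a": ["GOAL"], "b": ["GOAL"]}
# Repeated invocations on the same static structure yield identical output,
# since Python dicts preserve insertion order and DFS visits successors in order.
runs = [enumerate_paths(dag) for _ in range(3)]
```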