We describe the DSHM (Dynamically Structured Holographic Memory) model of human memory, which uses high-dimensional vectors to represent items in memory. The complexity and intelligence of human behavior can be attributed, in part, to our ability to utilize vast knowledge acquired over a lifetime of experience with our environment. Thus, models of memory, particularly models that can scale up to lifetime learning, are critical to modeling human intelligence. DSHM is based on the BEAGLE model of language acquisition (Jones and Mewhort, 2007) and extends this type of model to general memory phenomena. We demonstrate that DSHM can model a wide variety of human memory effects. Specifically, we model the fan effect, the problem-size effect (from math cognition), dynamic game playing (detecting sequential dependencies from memories of past moves), and time-delay learning (using an instance-based approach). This work suggests that DSHM is suitable as a basis for learning both over the short term and over the lifetime of the agent, and as a basis for both procedural and declarative memory. We argue that cognition needs to be understood at both the symbolic and sub-symbolic levels, and demonstrate that DSHM intrinsically operates at both of these levels of description. To situate DSHM in a familiar context, we discuss the relationship between DSHM and ACT-R.

Dynamically Structured Holographic Memory

Cognitive science, as a discipline, provides explanations for why cognitive phenomena occur and how they occur. The explanation for how a phenomenon occurs often involves a description of the processes that underlie it. This need for process-level accounts makes modeling, which is explicit in mechanical detail, a particularly useful tool for generating explanations of cognitive phenomena. To achieve a full theoretical understanding of a cognitive process, explanations need to be provided at both the symbolic (i.e., representational) and sub-symbolic levels of description. Classic symbolic approaches to modeling do not account for how the symbol manipulations described in the model could arise from neural tissue, nor do they account for how the symbols themselves come into existence. Classic connectionist approaches are more concerned with neural plausibility, but are notoriously opaque, doing little to aid our understanding of the cognitive processes modeled. By contrast, the vector-symbolic approach to modeling explicitly provides an account at both levels of description.

Vector Symbolic Architectures (VSAs), a term coined by Gayler (2003), are a set of techniques for instantiating and manipulating symbolic structures in distributed representations. Research into VSAs has been motivated by limitations in the ability of traditional connectionist models (i.e., non-recurrent models with one or two layers of connections) to represent knowledge with complicated structure (Plate, 1995). Like human memory, VSAs can store complicated and recursive relations between ideas. VSAs use vectors with hundreds of dimensions, but the number of dimensions does not grow with either the quantity or the complexity of the experiences stored within the vectors. For a formal analysis of VSAs, including time-complexity considerations, see Kelly, Blostein, and Mewhort (2013) as well as Plate (1995).
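To make the fixed-dimensionality point concrete, the following is a minimal Python sketch of binding and superposition in a Holographic Reduced Representation (Plate, 1995), the style of VSA that BEAGLE and DSHM build on. The dimensionality, the variable names, and the role-filler example are illustrative assumptions, not details taken from the model itself.

    # Minimal HRR sketch (after Plate, 1995): binding by circular
    # convolution, storage by superposition. All names are illustrative.
    import numpy as np

    D = 1024  # dimensionality is fixed; it does not grow with stored content
    rng = np.random.default_rng(0)

    def random_vector():
        # i.i.d. Gaussian components with variance 1/D, as in Plate (1995)
        return rng.normal(0.0, 1.0 / np.sqrt(D), D)

    def bind(a, b):
        # circular convolution: associates two vectors without adding dimensions
        return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

    def unbind(trace, cue):
        # approximate inverse: convolve with the involution of the cue
        inv = np.concatenate(([cue[0]], cue[:0:-1]))
        return bind(trace, inv)

    # Store two role-filler pairs in a single trace by superposition
    agent, verb, dog, chase = (random_vector() for _ in range(4))
    trace = bind(agent, dog) + bind(verb, chase)

    # Probing with a role retrieves a noisy version of its filler
    retrieved = unbind(trace, agent)
    print(np.dot(retrieved, dog), np.dot(retrieved, chase))  # high vs. near zero

Note that the trace has the same dimensionality as its constituents regardless of how many pairs are superposed, and bound pairs can themselves be bound into larger structures; recall simply degrades gracefully with load rather than demanding more dimensions. This is the sense in which VSAs store complicated and recursive relations in vectors of fixed size.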