Musings of a cognitive scientist

connections between different subsystems of units. To my knowledge, the most recent PDP model was proposed by Plaut and Booth (2000); it uses a fully recurrent network for the meaning subsystem, that is to say, a subnetwork in which all units are connected to each other. Recurrent networks are dynamical systems which exhibit attractor dynamics: the meaning of a word corresponds to an attractor of the system, i.e. a state towards which the system will naturally converge if it starts close enough. The central disk in the picture is an attractor in the state space of a system evolving under discrete time steps: each fan corresponds to many starting states being mapped to a single successor, but only one point (the center of the disk) maps to itself; there is no escaping from it, and it attracts all other states.
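To make the attractor idea concrete, here is a minimal sketch of a Hopfield-style recurrent network with Hebbian weights. It is not Plaut and Booth's actual model; the network size, the single stored pattern and the synchronous update rule are illustrative assumptions. It only demonstrates the two properties described above: a state that starts close enough to a stored "meaning" converges to it, and the attractor itself maps to itself.

```python
import numpy as np

# Minimal sketch of attractor dynamics in a fully recurrent network.
# Not Plaut and Booth's (2000) model: just an illustration of the idea
# that a stored pattern (a "word meaning") acts as an attractor.

rng = np.random.default_rng(0)
n_units = 100

# One stored binary (+1/-1) pattern, standing in for a word's semantic code.
meaning = rng.choice([-1.0, 1.0], size=n_units)

# Hebbian weight matrix (outer product), no self-connections.
W = np.outer(meaning, meaning) / n_units
np.fill_diagonal(W, 0.0)

def update(state, steps=20):
    """Synchronously update all units until the state stops changing."""
    for _ in range(steps):
        new_state = np.sign(W @ state)
        new_state[new_state == 0] = 1.0   # break ties arbitrarily
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

# Start "close enough" to the attractor: flip 15% of the units.
noisy = meaning.copy()
flip = rng.choice(n_units, size=15, replace=False)
noisy[flip] *= -1

settled = update(noisy)
print("overlap with stored meaning:", np.mean(settled == meaning))
print("attractor maps to itself:", np.array_equal(update(meaning), meaning))
```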
between orthographic, semantic and phonological codes, but here codes of each type are supported by dedicated subsystems called self-organising maps (see right image), which communicate with one another through Hebbian connections. Examples of this approach include Miikkulainen (1993), Li et al. (2004) and Mayor and Plunkett (2008). These models take a little from both worlds. Indeed, access to meaning corresponds to the emergence of a pattern of activation in the semantic map, an activation pattern which is distributed across all units as in the PDP approach, but which is centered on a so-called "best-matching" unit as in the localist approach. These models usually put the emphasis on developmental and neural plausibility, and they are not yet ready (or have not been adapted) for modeling adult lexical decision or semantic categorisation tasks.
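The best-matching-unit idea can be shown with a small sketch. The map size, input dimensionality and neighbourhood width below are arbitrary assumptions, and the map weights are random rather than trained, so this is not any of the cited models; it only illustrates how an input elicits an activation pattern that is spread over the whole map yet centered on one unit.

```python
import numpy as np

# Toy sketch of the self-organising map idea: an input (e.g. a semantic code)
# activates every unit of the map, but the activation is centred on the unit
# whose weight vector best matches the input (the "best-matching unit").
# Map size, input dimensionality and neighbourhood width are arbitrary.

rng = np.random.default_rng(1)
map_rows, map_cols, input_dim = 10, 10, 50

# Each map unit has a weight vector in input space (random here; in a real
# model these are learned so that similar inputs activate nearby units).
weights = rng.normal(size=(map_rows, map_cols, input_dim))

def map_activation(x, sigma=1.5):
    """Return the best-matching unit and the distributed activation for input x."""
    # Distance from the input to every unit's weight vector.
    dists = np.linalg.norm(weights - x, axis=-1)
    # Best-matching unit: the map unit closest to the input.
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Activation: a Gaussian bump over map coordinates, centred on the BMU,
    # so every unit is somewhat active (distributed code) but the pattern
    # peaks at one location (localist flavour).
    rows, cols = np.indices((map_rows, map_cols))
    grid_dist2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    return bmu, np.exp(-grid_dist2 / (2 * sigma ** 2))

x = rng.normal(size=input_dim)          # a stand-in semantic code
bmu, activation = map_activation(x)
print("best-matching unit:", bmu)
print("peak activation:", activation[bmu], "mean activation:", activation.mean())
```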
described. Moreover, these models say nothing about the interaction between orthography, semantics and phonology, and hence they should not be seen as lexical access models, but rather as models of how humans build and represent lexical semantic information. The common hypothesis made by proponents of this approach is that semantic codes are large vectors (they live in a high-dimensional semantic space), computed from more or less sophisticated co-occurrence statistics over the language environment. Examples include Landauer and Dumais's (1997) LSA model and, more recently, Jones and Mewhort's (2007) Beagle model (left picture). The focus of interest here lies in the internal organisation of the model, i.e. the topology of the semantic space: how related word meanings come to be represented in similar patches of that space (the picture shows a 2D projection of a semantic space in which related words are clustered together). Because they make no commitment as to how lexical access occurs, semantic spaces could theoretically be combined with any of the previously mentioned models. However, being distributed, semantic space vectors are more easily interfaced with the PDP or self-organising approaches.
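As a rough illustration of the semantic-space idea, here is a toy, LSA-flavoured sketch: it counts word co-occurrences within sentences of a tiny invented corpus, compresses the counts with an SVD, and compares words by cosine similarity. The corpus, the sentence-level window and the dimensionality are illustrative assumptions; actual LSA operates on word-by-document counts from large corpora with entropy weighting, and Beagle uses a different (holographic) encoding.

```python
import numpy as np
from itertools import combinations

# Toy LSA-flavoured semantic space: count word co-occurrences within the
# sentences of a tiny made-up corpus, reduce the matrix with SVD, and
# compare words by cosine similarity. All choices here are illustrative.

corpus = [
    "the doctor treated the patient in the hospital",
    "the nurse helped the doctor at the hospital",
    "the dog chased the cat in the garden",
    "the cat slept in the garden near the dog",
]

sentences = [s.split() for s in corpus]
vocab = sorted({w for s in sentences for w in s})
index = {w: i for i, w in enumerate(vocab)}

# Symmetric word-by-word co-occurrence counts (sentence-level window).
counts = np.zeros((len(vocab), len(vocab)))
for s in sentences:
    for w1, w2 in combinations(set(s), 2):
        counts[index[w1], index[w2]] += 1
        counts[index[w2], index[w1]] += 1

# SVD compresses the counts into a low-dimensional "semantic space",
# in which related words should tend to end up with similar vectors.
U, S, _ = np.linalg.svd(counts)
k = 3
vectors = U[:, :k] * S[:k]          # one k-dimensional vector per word

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

doctor, nurse, garden = (vectors[index[w]] for w in ("doctor", "nurse", "garden"))
print("doctor ~ nurse :", round(cosine(doctor, nurse), 2))
print("doctor ~ garden:", round(cosine(doctor, garden), 2))
```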
