Lexical semantics (1/4)

What we cognitive scientists usually mean by lexical semantics is the system responsible for storing the meaning of words. How is this system organised? How does it develop? How does it work? And is it located anywhere in the brain in particular? How can we know? Let's start with the last question.

How can we know?
Well, cognitive science uses three main classes of tools to probe the human mind: behavioral experiments, brain imaging, and computational models. There is of course a continual conversation going on between the three. Behavioral experiments ask each subject in a large group to perform a simple task under controlled and reproducible laboratory conditions. The responses (usually motor responses) are then analyzed statistically so that significant effects can stand out, and hypotheses are supported or disproved.

One of the most common behavioral experiments in lexical semantics is semantic categorization, in which, upon presentation of a word on the screen, subjects are asked whether it belongs to a given category (e.g., "is this bigger than a brick?"). What we look at: reaction times and error rates. You can see that the idea is to plug our measuring apparatus into the simplest and, hopefully, the most objective signal possible.
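To make the two dependent measures concrete, here is a minimal sketch of how one might summarize data from such a task. All the numbers, condition names, and example words below are made up for illustration; real analyses would of course involve many more subjects, trials, and proper inferential statistics.

```python
import statistics

# Hypothetical reaction times (in ms) from a semantic categorization task:
# subjects decide whether each named object is bigger than a brick.
rt_concrete = [512, 487, 530, 501, 495, 520, 478, 505]   # e.g. "horse", "stamp"
rt_abstract = [598, 612, 575, 603, 588, 620, 595, 607]   # e.g. "justice", "idea"

def summarize(rts, n_errors, n_trials):
    """Mean reaction time and error rate: the two standard dependent measures."""
    return statistics.mean(rts), n_errors / n_trials

mean_concrete, err_concrete = summarize(rt_concrete, n_errors=2, n_trials=40)
mean_abstract, err_abstract = summarize(rt_abstract, n_errors=6, n_trials=40)

print(f"concrete words: {mean_concrete:.0f} ms, {err_concrete:.0%} errors")
print(f"abstract words: {mean_abstract:.0f} ms, {err_abstract:.0%} errors")
```

A pattern like this (slower, more error-prone responses in one condition) is the kind of significant effect one would then test statistically across the whole subject group.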

Brain imaging simply pushes this logic further: we plug our measuring apparatus directly into the brain (non-invasively, of course). The brain emits all kinds of signals strong enough to be picked up at the surface of the skull, and there are several techniques for doing so: EEG, MEG, PET, and fMRI, which is probably the most widely used brain imaging technique to date. All techniques have their strengths and weaknesses; in particular, they differ in the spatial and temporal resolution of the signal they record.

Once behavioral and brain imaging studies are available, computational models try to make sense of the data. Computational models can be more biologically or more psychologically oriented. A good balance in this respect seems to have been found early on by connectionism, which I for one define as the minimal concession to the hardware that one should make when modeling the software: using a large number of simple units, connected to one another, each of which can be more or less active.
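That definition can be sketched in a few lines of code. The sketch below is the bare skeleton of a connectionist layer: each unit computes a weighted sum of its inputs and passes it through a squashing function, so that its activation varies continuously between 0 (inactive) and 1 (fully active). The weights and inputs are arbitrary illustrative values, not fitted to any data or model from the literature.

```python
import math

def sigmoid(x):
    """Squashing function: maps any weighted sum onto a graded activation in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    """Activation of each output unit, given input activations and connection weights."""
    return [sigmoid(sum(w * a for w, a in zip(row, inputs))) for row in weights]

# Three input units feeding two output units.
inputs = [1.0, 0.5, 0.0]
weights = [[ 0.8, -0.3,  0.1],   # connections into output unit 1
           [-0.5,  0.9,  0.4]]   # connections into output unit 2

activations = layer(inputs, weights)
print(activations)
```

Everything else in a connectionist model (learning rules, layered architectures, recurrence) is built on top of this basic picture of many simple, interconnected, graded units.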

The explanatory power of a computational model is, as usual, given by the number of facts it can explain relative to the number of hypotheses it makes. But explanatory power is not the whole story: one would also like a model to make predictions that can be further tested with behavioral or brain imaging studies; we want our models to be falsifiable.
