ISCID News Editor
Member # 1417
posted 27. September 2004 21:48
PLoS Biology Volume 2 | Issue 9 | September 2004
Bridging Psychology and Mathematics: Can the Brain Understand the Brain?
Mariano Sigman is in the Cognitive Neuroimaging Research Unit of l'Institut National de la Santé et de la Recherche Médicale, Orsay, France, and a fellow of the Human Frontiers Science Program. E-mail: firstname.lastname@example.org
Published September 14, 2004
Copyright: © 2004 Mariano Sigman. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Citation: Sigman M (2004) Bridging Psychology and Mathematics: Can the Brain Understand the Brain?. PLoS Biol 2(9): e297.
Paragraphs 3 and 4
In the middle of the last century, working on the problems of communication (languages, codes, and channels), Claude Shannon proposed a very elegant theory that formalized intuitive ideas about the essence (and the limits) of communication (Weaver and Shannon 1949). One of its key aspects was that, depending on the code and on the input, channels are not used optimally and thus do not carry all the information they potentially could. When we compress (zip) a file, we are actually rewriting it in a more efficient (though not necessarily more intelligible) language, one in which the same amount of information fits in less space. This compression has, of course, a limit (we cannot convey the universe in a single point), and the mere existence of an optimal code is central to Shannon's idea. Attneave was probably the first to see that these ideas could help unravel how the brain works, and a long series of studies relating them to experimental data showed that the retina's main business is to strip redundancies from the visual world.
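The compression idea in the paragraph above can be sketched numerically: Shannon entropy bounds how few bits per symbol an optimal code needs, and a general-purpose compressor such as Python's zlib exploits redundancy to approach (and, for structured input, beat) that per-symbol bound. The message below is an illustration of mine, not an example from the article.

```python
import math
import zlib
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Average information content per symbol, in bits."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A highly redundant message: it occupies 8 bits per byte on disk, but
# with only two distinct symbols its entropy is 1 bit per symbol, so an
# optimal symbol code could shrink it roughly eightfold -- and a
# compressor does even better by exploiting the repeating structure.
redundant = b"ab" * 1000
print(f"entropy  : {shannon_entropy(redundant):.2f} bits/symbol")  # 1.00
print(f"raw size : {len(redundant)} bytes")
print(f"zip size : {len(zlib.compress(redundant))} bytes")
```

Incompressible (random-looking) data is the other extreme: its entropy is near 8 bits per symbol, and zipping it buys almost nothing.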
About four years ago, Jacob Feldman published a paper, similar in spirit, proposing a simple explanation for a long-standing problem in cognitive science: why some concepts are inherently more difficult to learn than others (Feldman 2000). An article whose first reference is to work carried out 50 years earlier makes us suspect that an important gap is being filled. As in Attneave's case, Feldman borrowed a theory (he did not invent it) to explain long-standing and previously unexplained findings in this area. His idea was based on a theory developed by Kolmogorov that established formal mathematical grounds for defining and measuring complexity. The theory focuses on categories, which are just subsets of a universe: a bunch of exemplars that constitute a group. "Dogs" is a category in the universe of animals. Different statements can define the same category; for example, "big dogs" and "dogs that are not small" pick out the same group, and some of the information in a statement may be irrelevant because it does not help determine which elements belong to the category and which do not. In the same way that Shannon spoke of a non-redundant code, Kolmogorov showed that a category can be defined by an optimal (non-redundant) statement. The length of this statement defines a measure of complexity termed Kolmogorov complexity.
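The "shortest defining statement" idea can be made concrete in the Boolean setting Feldman studied: a category is a subset of a small universe of feature vectors, and its complexity is the number of literals in the shortest formula whose extension is exactly that category. The feature names and the brute-force search below are my own illustration (a toy stand-in, not Feldman's procedure); it relies on the fact that negations can always be pushed down to literals, so searching AND/OR combinations of plain and negated features by size suffices.

```python
from itertools import product

# Toy universe: every object is a tuple of three Boolean features.
FEATURES = ["big", "furry", "barks"]
UNIVERSE = list(product([False, True], repeat=len(FEATURES)))

def boolean_complexity(category, max_size=8):
    """Minimal number of literals (feature mentions) in a formula whose
    extension equals `category`, found by enumerating negation-normal-form
    formulas in order of size -- a brute-force stand-in for the length of
    the optimal (non-redundant) defining statement."""
    category = frozenset(category)
    best = {}                      # extension -> minimal literal count
    by_size = {1: set()}
    for i in range(len(FEATURES)):  # size-1 formulas: x_i and (not x_i)
        for neg in (False, True):
            ext = frozenset(o for o in UNIVERSE if o[i] != neg)
            if ext not in best:
                best[ext] = 1
                by_size[1].add(ext)
    for size in range(2, max_size + 1):
        by_size[size] = set()
        for a in range(1, size // 2 + 1):  # combine smaller formulas
            for e1 in by_size[a]:
                for e2 in by_size[size - a]:
                    for ext in (e1 & e2, e1 | e2):  # AND, OR
                        if ext not in best:
                            best[ext] = size
                            by_size[size].add(ext)
        if category in best:
            break
    return best.get(category)

# "big" and "not (not big)" define the same set, so they share one
# complexity value: the redundant wording does not change the category.
big = frozenset(o for o in UNIVERSE if o[0])
print(boolean_complexity(big))   # -> 1

# An exclusive-or concept ("big or furry, but not both") needs four
# literals -- such parity-like concepts are the hardest to learn.
xor = frozenset(o for o in UNIVERSE if o[0] != o[1])
print(boolean_complexity(xor))   # -> 4
```

Note the "redundant statement" point drops out for free: any two statements with the same extension get the same complexity, because complexity is a property of the category, not of the wording.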
[ 03. December 2004, 21:31: Message edited by: ISCID News Editor ]