A lot of computational neuroscientists use something called information theory to try to understand how the parts of the brain communicate with each other. Information theory is a relatively young field, dating back to work by Claude Shannon in the mid-1900s, and it mathematically formalizes ideas about how much one could learn from a signal.
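To give a flavor of what "formalizes" means here, here is a tiny Python sketch (my own toy example, not anything from the work discussed below) of Shannon's central quantity, entropy: the average surprise per symbol, which measures how much you learn from each symbol of a signal.

```python
import math
from collections import Counter

# Toy illustration (my own example): entropy measures how much you learn per
# symbol. A fair coin gives about 1 bit per flip; a heavily biased coin,
# being mostly predictable, tells you much less.
def entropy_bits(symbols):
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(entropy_bits("HTHTTHHT"))  # 1.0 bit per flip (half heads, half tails)
print(entropy_bits("HHHHHHHT"))  # ~0.54 bits per flip (mostly predictable)
```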
The goal of this blog post is to explain a beautiful experimental result published in 1996 by Yang Dan and colleagues. To get there, we first need to understand how redundancy affects the efficiency of information transfer.
Let's imagine that you and I are in a conversation, and I choose to repeat every word twice (so it starts as "Hi Hi how how are are you you doing doing today today??"). Clearly that is not an efficient use of my speech, because there is a simple way I could have said the same thing in half as many words. One way to formalize that notion is to observe that, the way I spoke, you could predict every even-numbered word once you knew the odd-numbered words, so half the words are redundant.
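To make that concrete, here is a small Python sketch (my own toy example, using compression as a rough stand-in for information content): saying every word twice doubles the raw length of the message, but barely changes its compressed size, because the second copy of each word carries no new information.

```python
import zlib

# Toy illustration (my own example): redundancy shows up as compressibility.
# Doubling every word doubles the raw message length, but a compressor
# squeezes almost all of that back out.
message = (
    "Hi how are you doing today? I was hoping we could talk about the "
    "weather, the news, and whatever else happens to be on your mind."
)
doubled = " ".join(word + " " + word for word in message.split())

for label, text in [("original", message), ("doubled", doubled)]:
    raw = len(text.encode())
    compressed = len(zlib.compress(text.encode(), 9))
    print(f"{label:>8}: {raw:4d} bytes raw -> {compressed:4d} bytes compressed")
```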
What Yang Dan and colleagues showed is that, when presented with naturalistic movies (they showed their subjects clips from Casablanca), the outputs of neurons in the lateral geniculate nucleus (LGN) have the minimum possible amount of redundancy, as in the case where I just say "Hi how are you doing today?" instead of repeating myself.
Now, on its own, that might seem unimpressive: maybe the LGN is just set up so that it always has non-redundant outputs. Well, they did a great control experiment to show that that's not true: they presented their subjects with white-noise stimuli (like the static you might see on old-timey televisions when the cable is out), and found that, in that case, the LGN outputs were highly redundant! What gives?
Well, it turns out that movies (and images) of real-world stuff (forests, cities, animals, etc.) all have very similar statistical properties. This means that, if you were to build a system for communicating those signals, you could set it up to remove the redundancies that occur in those movies (for example, nearby parts of an image tend to have similar brightness, as the sketch below illustrates). But if you took that carefully tuned system and applied it to movies with different statistics, it wouldn't work quite right: applied to a stimulus with no redundancy at all, like white noise, it can actually introduce redundancy into its output.
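To see the "nearby parts of an image tend to have similar brightness" claim in numbers, here is a short Python sketch (my own toy example; the blurred-noise image is just a stand-in for a real photograph) comparing the correlation between neighboring pixels in a smooth, natural-like image versus pure white noise.

```python
import numpy as np

# Rough numerical sketch (my own example): neighboring pixels in a natural
# image are strongly correlated, while neighboring pixels in white noise are
# not. To keep this self-contained, the "natural-like" image is white noise
# blurred with an 8x8 box filter, mimicking the smoothness of real scenes
# without needing an actual image file.
rng = np.random.default_rng(0)
noise = rng.standard_normal((256, 256))  # white-noise "image"

# Blur by averaging each pixel over an 8x8 neighborhood (circular edges).
natural_like = np.zeros_like(noise)
for dy in range(8):
    for dx in range(8):
        natural_like += np.roll(np.roll(noise, dy, axis=0), dx, axis=1)
natural_like /= 64.0

def neighbor_correlation(img):
    """Correlation between each pixel and the pixel one step to its right."""
    return np.corrcoef(img[:, :-1].ravel(), img[:, 1:].ravel())[0, 1]

print("white noise :", round(neighbor_correlation(noise), 3))
print("natural-like:", round(neighbor_correlation(natural_like), 3))
```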
The result of Yang Dan's experiment suggests that, by adapting to the natural environment (possibly over evolutionary time scales), our brains are set up to do the most efficient possible job of encoding typical real-world movies!
This remains, to me, one of the best success stories of systems neuroscience, in which a combination of mathematics (understanding information theory) and experimentation led us to a better understanding of how our brains work.