Monday, November 22, 2010
This post will be about the application of prior knowledge in decision making. I'll start with a very simple example involving coin flips, then move on to the results of a very cool cognitive science experiment.
So, here's our example. Let's imagine that you and I are betting on coin flips. We each wager $1, and you get to flip the coin. If it lands on heads, I win your dollar (and get mine back), while if it lands on tails, you win my dollar (and get yours back). Simple, right? Good. Now, I pull a US 25-cent coin out of my pocket, and we start flipping. It lands on heads on each of the first 3 flips, giving me a profit of $3.
Now, before we go any further, you want to estimate the probability that any given flip will land on heads versus tails. From the data available in our (3-flip) experiment, it looks like the coin always lands on heads. So, based purely on that information, you should stop betting, and you should call me a cheat! Is that really the best course of action?
Intuitively, you know that it's not so simple. We all know that the exact sequence of flips we saw would happen 12.5% of the time (0.5³) with a fair coin. And we've seen enough coins in our lives to expect that they are fair (landing heads or tails roughly equally often). That's the crux of the issue: you have some prior knowledge about coins that tells you not to jump to hasty conclusions.
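To make that concrete, here's a minimal Bayesian sketch of the reasoning. The prior strength (the "50"s below) is my own illustrative choice, not a measured quantity:

```python
from scipy.stats import beta

# Prior: a lifetime of fair-ish coins, encoded as Beta(50, 50).
# The "50" is an assumption for illustration: it's as if you'd already
# seen ~100 flips of typical coins, split evenly between heads and tails.
prior_heads, prior_tails = 50, 50
heads_seen, tails_seen = 3, 0  # our 3-flip experiment

# Conjugate update: just add the observed counts to the prior's pseudo-counts.
posterior = beta(prior_heads + heads_seen, prior_tails + tails_seen)

print(posterior.mean())    # ~0.515: three heads barely budge the estimate
print(posterior.sf(0.75))  # probability the coin is heavily heads-biased: ~0
```

For comparison, a flat prior (Beta(1, 1)) updated on the same 3 heads gives a posterior mean of 0.8, which is exactly the hasty conclusion the prior protects you from.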
Now, I promised you a very cool cognitive science experiment, and I'm going to deliver just that. Here's the experiment. Tom Griffiths (now a Berkeley prof) asked a bunch of human subjects (randomly selected undergrads) questions such as (quoted from Griffiths and Tenenbaum's paper):
"Imagine you hear about a movie that has taken in 10 million dollars at the box office, but don’t know how long it has been running. What would you predict for the total amount of box office intake for that movie?"
or
"If your friend read you her favorite line of poetry, and told you it was line 5 of a poem, what would you predict for the total length of the poem?"
In statistics, if you know the distribution of, say, lengths of poems, it's a fairly straightforward (Bayesian inference) problem to calculate the answers to these questions. But Griffiths' subjects were not stats wizards, they didn't have time to calculate, and they were not provided with the distributions. Furthermore, they were explicitly instructed to make intuitive guesses, not calculations.
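For the curious, here's roughly what that "stats wizard" calculation looks like: a numerical sketch in which the power-law prior and its exponent are my assumptions for illustration (the actual paper fits priors to real-world data):

```python
import numpy as np

def predict_total(t_observed, prior_pdf, grid):
    """Posterior-median prediction of a total quantity, assuming we
    observed it at a uniformly random point: p(t | total) = 1/total."""
    posterior = np.where(grid >= t_observed, prior_pdf(grid) / grid, 0.0)
    posterior /= posterior.sum()
    cdf = np.cumsum(posterior)
    return grid[np.searchsorted(cdf, 0.5)]  # median of the posterior

grid = np.linspace(1, 10_000, 1_000_000)
power_law = lambda x: x ** -1.5  # assumed prior over box-office totals ($M)

# A movie has taken in $10M so far; predict its final total.
print(predict_total(10, power_law, grid))  # ~15.9 ($M): about 2^(1/1.5) * 10
```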
Shockingly (to me, anyway), the subjects' answers (on average) matched the statistically optimal predictions!
So, somehow, your brain automatically "knows" all these statistical distributions from your everyday experience. And, when you make seemingly random intuitive guesses about stuff, your brain draws on that information to make (statistically) the best possible decision.
Not too bad for a giant lump of fat.
Monday, November 15, 2010
Are you looking for a job?
This post contains a fun (imho) anecdote from the (30,000-person) neuroscience meeting I am currently attending.
I was presenting a poster yesterday afternoon, in the neuroethology section of this meeting. This was, on its own, a very new experience for me: mine was poster MMM35 (they start at A1), and the poster session was in this gigantic warehouse-like space at the San Diego convention center. There could have easily been 10,000 people in this room.
About halfway through the poster session, a scientist from [a prestigious university] approached me, listened to me explain my poster, and asked, "Are you looking for a job? How would you feel about joining our new group in [a relatively new area of research]?"
Now, one of the things I struggle with is being confident that the work I am doing is of interest to the general scientific community. This sort of feedback (and that of other people I spoke with) reaffirms that I am doing something at least mildly worthwhile (in addition to fun!).
Thus, I declare SFN to be a success, with several more days of conferencing to go!
Wednesday, November 10, 2010
SoCal
I'm at SFO now, on my way to Los Angeles for the annual Fulbright science retreat. These things are always lots of fun. More importantly, they are a great excuse to have informal conversations with scientists from many different fields. This is always really interesting, and often sparks new ideas that wouldn't come up otherwise. In particular, it was at one of these retreats that I cemented my decision to quit doing "traditional" physics, and start doing research in theoretical neuroscience.
By contrast, most conferences are single-discipline.
From LA, I'm off to San Diego for the annual Society for Neuroscience meeting. This is a huge meeting (over 30,000 scientists!), and is single-disciplinary (although neuroscience is a pretty multidisciplinary field, so this is a bit of a misnomer).
Hopefully I'll have a chance to hit the beach (and maybe do some dinghy sailing in the warm SoCal waters... a nice contrast to the frigidity of the ocean in the SF area), in addition to some serious sciencing.
That's all for this (decidedly low-content) post. I'll write something more serious about the role of statistical priors on decision-making when I get a chance.
Sunday, November 7, 2010
the thermodynamics of mid-term elections
As most of you know, the US held its mid-term elections last week. Their electoral system is a bit intricate, so I'll (briefly) summarize the key feature before I move on:
Every 4 years, they elect the president, and a large fraction of their congress. Also every 4 years, but halfway through the presidential term (the elections are staggered so there's one election every 2 years), they elect the other (large) fraction of congress, along with state governors, etc. While the president is the face of the government, and has a lot of power, he can't actually institute much change without the support of congress.
Okay, now that we know the lay of the land, let's imagine that you are a newly elected president who promised sweeping changes. Indeed, it's hard to imagine someone getting elected unless they make such promises (regardless of the nature of the promises: cutting spending, or building new social programs, or whatever): there's always lots of stuff wrong, and the voters want to elect someone to fix that stuff.
Well, if you promised to cut things, and remove existing social programs (or institutions, or whatever), you are in luck: it's pretty easy, and pretty fast, to do that. 2 years into your term, when the mid-term elections come up, you can say to the voters, "Look at the stuff I promised to get rid of that I, indeed, got rid of. Give me more power in congress, and I will do more of this stuff." Consequently, you are likely to get that power, and to have increased power in the next 2 years of your term.
Now, let's imagine that, instead of promising to get rid of stuff, you promised to build new things (health care, or whatever).
Well, an important lesson from physics is that it's much harder to build things than to tear them down (this is the spirit of the second law of thermodynamics, which says that the entropy, or disorder, of a closed system tends to increase over time). Imagine, for example, how long it takes to build a house, compared to how long it takes that house to fall down once you set off an explosive, or start a fire, in it.
Okay, so it takes a long time to build new things, and so it's pretty likely that, come the mid-term election, you won't yet have succeeded in getting your new programs running, or at least not running very effectively.
Now, at the mid-term election, the opposition can correctly say, "See, the president promised all this stuff, but it's not working. Give us more power!" The result is that the voters give more power to the opposition. Consequently, in the last half of your term as president, you have even less power in congress, making it very hard to ever get all of those programs working (the ones you promised to get working in order to be elected president in the first place).
So, we see an interesting effect: the relative slowness of building new programs (versus cutting them), coupled with the existence of mid-term elections that can change the balance of power in congress, means that administrations that cut existing programs meet with much more success than those that institute new ones. This is all a consequence of well-understood physics, but I haven't yet seen anyone spell out the consequences of the second law when it comes to elections.
Is this a good thing? I have my opinions, which I've tried to keep to myself. I'll let you decide.
Disclaimer: I am not a political analyst, nor do I have any training in political science. But, I know physics, and I'm willing to take a shot at applying that knowledge to any domain in which I think it is appropriate.
Thursday, November 4, 2010
computing at a Van Halen concert
So... one of the key features of the brain is that it is, in some sense, noisy.
I mean this in the sense of electronics, or communications "noise" (like the static when you listen to a radio station and your radio isn't exactly tuned right). This noise makes it hard to pick out the underlying "signal": the thing you are actually interested in.
Well, neurons in your brain are also pretty noisy: when presented with the same stimulus over and over again, they don't always respond the same way. Furthermore, the environment we live in is intrinsically noisy: very chaotic things like winds, cloud cover etc. mean that even the same tree will look slightly different each time you look at it.
Somehow, the noisy operation of your brain, functioning in this noisy world, still allows it to do things (like recognize that tree) that even super-advanced computers have trouble with. Those computers have none of this randomness associated with their operation. One possibility is that the noise in your brain is, somehow, crucial for it to function properly (as opposed to being a distraction that stops it from working).
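A classic toy illustration of this idea is "stochastic resonance": a weak signal that never crosses a neuron's firing threshold on its own becomes detectable once you add a moderate amount of noise. Here's a sketch (the threshold unit and all the numbers are mine, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 10_000)
signal = 0.8 * np.sin(2 * np.pi * t)  # subthreshold: never reaches 1.0
threshold = 1.0                        # crude "neuron": fires when input > 1

for noise_level in (0.0, 0.3, 3.0):
    noisy = signal + noise_level * rng.standard_normal(t.size)
    spikes = noisy > threshold
    # With no noise there are no spikes at all; with moderate noise, spikes
    # cluster around the signal's peaks (high correlation with the signal);
    # with huge noise, spikes happen everywhere and the signal drowns again.
    corr = np.corrcoef(spikes, signal)[0, 1] if spikes.any() else 0.0
    print(f"noise={noise_level}: {spikes.sum()} spikes, corr={corr:.2f}")
```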
Yesterday's Redwood Center talk was from some Bay Area entrepreneurs who are trying to make computing machines that have some of this randomness built in as part of their core functionality. Basically, they're trying to build a machine that works more like the brain.
They showed some pretty impressive results, although a fully functioning fake brain still lies in the very distant future.
Monday, November 1, 2010
is Eminem a closet neuroscientist?
"I can't tell you what it really is, I can only tell you what it feels like"
- Marshall Mathers
This quote, from a rap duet by Rihanna and Eminem, explains, in a nutshell, the key challenge in cognitive neuroscience: missing information.
You see, your brain is constantly trying to build up a good representation of your surroundings, which in turn allows you to make "sensible" behavioral decisions. However, the information that you can gather from the outside world is insufficient to know for sure the state of your environment ("what it really is", so to speak).
Now, your brain is pretty good at making educated guesses (inference) about the environment, but, at the end of the day, those guesses ("what it feels like") are all that you have available to guide your behavior.
As an example, consider the problem of vision. You have two eyes (probably), each of which collects light on a 2-D array of photoreceptors that are each sensitive to one of 3 colors.
But the world you are trying to understand has objects spread out in three dimensions, and with a near-infinite number of colors. So, clearly there is some information you are missing.
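One way to see the information loss: the eye projects an entire wavelength spectrum down to just 3 cone responses, so many physically different spectra look identical (so-called "metamers"). A toy sketch, with made-up Gaussian cone sensitivities standing in for the real curves:

```python
import numpy as np

wavelengths = np.linspace(400, 700, 31)  # visible range, nm

def cone(peak_nm):
    # Hypothetical Gaussian sensitivity curve; the real cone curves differ,
    # but the dimensionality argument is the same.
    return np.exp(-((wavelengths - peak_nm) ** 2) / (2 * 40.0 ** 2))

S = np.stack([cone(p) for p in (440, 540, 570)])  # 3 x 31 projection matrix

spectrum_a = np.ones_like(wavelengths)  # flat "white" spectrum
# Any perturbation in the null space of S is invisible to the cones.
null_direction = np.linalg.svd(S)[2][3]  # a direction S is blind to
spectrum_b = spectrum_a + 0.3 * null_direction

# True: physically different spectra, identical cone responses,
# so the brain literally cannot tell them apart.
print(np.allclose(S @ spectrum_a, S @ spectrum_b))
```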
Your brain's ability to fill in the pieces, and make good guesses is absolutely remarkable. However, the fact that it's constantly making these insane leaps of inference also makes your brain very susceptible to being tricked.
This leaves us with an interesting dichotomy: the same computational inference ability that makes the brain such a powerful tool is also one of its main sources of weakness.