I've been reading a fair bit lately about agent-based modeling. Basically, these are models of interactions between agents, each of whom decides for themselves how they will behave.
One interesting question that comes up is "why should I be nice to people, when that niceness has an associated cost?" For example, imagine that I share my lunch with someone. In that case, I end up with less lunch. So, it would seem that the most successful agents would not engage in such sharing activities.
But, by and large, people are kind to each other, which raises the question "why do people sacrifice in order to help others?"
In a very cold economic sense, the answer is that it is beneficial to sacrifice some resources, in order to help others, because those people will remember your kindness and repay you in future with kindness of their own (or you get a reputation as a good person, and other people are kind to you in future). And that payoff makes it worthwhile to be nice to other people.
Lately, I have spent a fair bit of time interacting with an elderly faculty member at UC Berkeley. These conversations typically start with me reminding this individual of who I am (which is not surprising, given his age, and the fact that I am by no means an "important" person in the Berkeley physics scene).
This got me to thinking about how altruism might play out in a world where people do not remember your good deeds, and thus there is no chance of you being repaid for your kindness.
In such a world, the economic value of kindness is greatly diminished. If people made purely economic decisions, they would likely not engage in altruistic behavior.
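For the curious, here is a toy Python sketch of that argument (my own minimal illustration, not any published agent-based model; the cost, benefit, and population numbers are arbitrary). Half the agents share unconditionally, everyone returns favors they remember, and then we switch the memory off.

```python
# Toy sketch (not any published model): agents are paired at random each round;
# a "sharer" pays a cost to give a larger benefit to the partner.  With memory,
# agents return favors they remember receiving; without memory, nothing is repaid.
import random

COST, BENEFIT, ROUNDS, N = 1.0, 3.0, 2000, 40

def run(memory=True):
    payoff = [0.0] * N
    kind = [i < N // 2 for i in range(N)]        # half the agents share unconditionally
    remembered = [set() for _ in range(N)]       # who has shared with me before
    for _ in range(ROUNDS):
        a, b = random.sample(range(N), 2)
        for giver, receiver in ((a, b), (b, a)):
            # share if you are a sharer, or if the receiver has shared with you before
            shares = kind[giver] or (memory and receiver in remembered[giver])
            if shares:
                payoff[giver] -= COST
                payoff[receiver] += BENEFIT
                if memory:
                    remembered[receiver].add(giver)
    nice = sum(p for p, k in zip(payoff, kind) if k) / (N // 2)
    selfish = sum(p for p, k in zip(payoff, kind) if not k) / (N // 2)
    return round(nice, 1), round(selfish, 1)

print("with memory (nice, selfish):", run(memory=True))
print("memoryless  (nice, selfish):", run(memory=False))
```

With memory on, the habitual sharers come out ahead, because their gifts get returned; with memory off, the free-riders do best and kindness is just a cost.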
I wonder how far such thinking would go in helping us to understand the workings of communities in which individuals are highly anonymous (if you are completely anonymous, you effectively interact with a memoryless populace, since no one knows who you are and thus cannot link your actions to some identity). These might include on-line communities, as well as large cities.
What's my point? Well, it might be a good idea to introduce yourself to your neighbors, and maybe to smile when you do so.
Wednesday, December 22, 2010
Saturday, December 18, 2010
gold rush!
So... the Discovery Channel loves to run TV shows about roughnecks doing roughneck things. And they do a fine job of it.
Deadliest Catch was an old favorite of mine; it's a documentary-style series about crab fishermen in the Arctic, and it is very addictive to watch.
Last night, I watched a new show of theirs (in a rare moment of not sciencing) called Gold Rush. It's awesome. Basically, a bunch of unemployed men from Oregon got sick of sitting around being unemployed. So they sold all their stuff to raise $100K, bought a couple of old backhoes, drove up to Alaska, and started digging for gold. Of course, none of them know anything about mining.
Anyhow, great show, highly recommended for those non-sciencing times.
Monday, December 13, 2010
predictions, FTW!
In grade school, we were all taught the scientific method, right?
The idea is that you observe something, and that makes you have some thought about how it works, and that thought gives you ideas about what other things might be true that you could observe, and then you go and look for those things, thus making more observations, and the cycle continues.
All too often, though, it becomes very hard to "have the thought about how it works" (come up with a compelling and parsimonious theory), and then the next stage of predicting observations that would be true, if your theory is correct, falls by the wayside.
In a really compelling paper, Peter Lipton explains why it's not okay to just bypass this seemingly hard step (theorizing).
His basic argument is that, once you have all the facts, you can contort a theory in any way you want to get it to fit all the data. So the fact that the theory agrees with data is not necessarily impressive.
However, if you are predicting the results of experiments that haven't been done yet, you don't have that luxury, which forces the predictions to be justified by firm logic rather than "it fits the data" (because the data hasn't been collected yet!).
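To make the point concrete, here's a little Python toy (my analogy, not Lipton's): a flexible "theory" can be contorted to fit the data it has already seen, yet do badly on the data that hadn't been collected when it was fit.

```python
# Fit a simple model and a very flexible one to the data already in hand,
# then score both on data that was withheld (the "experiments not yet done").
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 12)
y = 2.0 * x + rng.normal(0, 0.1, x.size)      # the "truth" is a simple line plus noise

x_seen, x_future = x[:8], x[8:]               # last 4 points play the role of future data
y_seen, y_future = y[:8], y[8:]

for degree in (1, 6):                         # a simple theory vs. a contorted one
    coeffs = np.polyfit(x_seen, y_seen, degree)
    fit_err = np.mean((np.polyval(coeffs, x_seen) - y_seen) ** 2)
    pred_err = np.mean((np.polyval(coeffs, x_future) - y_future) ** 2)
    print(f"degree {degree}: error on seen data {fit_err:.4f}, on future data {pred_err:.4f}")
```

The wiggly high-degree fit "agrees with the data" beautifully; the withheld points are where it gets embarrassed, while the simple theory keeps predicting just fine.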
After months of hard work, I have finally succeeded in making some concrete predictions for doable experiments. I will discuss this a bit more when the paper (eventually) comes out, although those who were at my talk today in the Redwood Center already have some idea.
Thursday, December 9, 2010
The play-off beard
Apologies to regular followers of my blog for the relative dearth of content lately, and especially the shortage of pop-science posts.
Today's post will continue in that vein, and be about superstition, hockey, and undergrads.
Most hockey fans are familiar with the concept of a play-off beard. Basically, if you are a hockey player, and your team is in the championship rounds of play, you grow a beard. Typically, this is seen as some sort of team bonding (and maybe superstition) thing.
Has anyone noticed that undergrads do the same thing around exam time? I mean, wandering around Berkeley this week, you would swear that it was Stanley Cup season, and that the San Jose Sharks (our "local" NHL team) had just signed thousands of scrawny, glasses-wearing forwards.
This probably has less to do with team bonding (since Berkeley's "official" policy is that undergrad exams are not a team exercise, although I have seen some students attempting to make them more collaborative, often with poor results for all involved), and more to do with sheer laziness.
Whatever the reason, I propose the term "play-off beard" to refer to the general air of unkemptness one sees in undergrads around exam time. Any takers?
Thursday, December 2, 2010
what's in a name?
At my qualifying exam yesterday (long oral exam you need to pass in order to become a PhD candidate), it became apparent that there is lots I don't yet know about the anatomy of the brain, although I know enough about my subfield that I passed my exam: yay!
This experience reminded me of the value of factual knowledge, in addition to technical skill.
Now, physicists often make disparaging comments about fields that require lots of factual knowledge. The famous nuclear physicist Ernest Rutherford, for example, once remarked that "all science is either physics or stamp collecting". In some sense, this is a valid criticism of the way biology was approached in the past: our goal as scientists is not simply to identify and name phenomena in the natural world, but rather to seek out elegant, simple explanations for why things are the way they are. For that reason, physicists value theories with simple premises and broad predictive power above all else.
However, at some point, knowledge of the names and properties of all the stamps out there can help to inform theories that simplify that list: maybe there are fundamental properties that are common to all stamps, or at least certain classes of stamps that would be missed if you just ignored all the "stampiness" in the world. And, in such a situation, an arrogant attitude of "if I can't derive it from first principles it must not be important" is no longer productive.
So far in my work, none of these fine details have been very important, which is probably why I haven't bothered to learn them yet. But, as I move forward, and attempt to make more and more realistic models, there will come a time when I will need to know these things.
So, that is my vow for the next 6 months, which I am putting in writing, so as to force myself to do it. I will learn the functional anatomy of the brain. In particular, I will be able to:
1) Identify, on a diagram of the brain, the major regions (medial temporal lobe, auditory cortex, etc.), and describe (at least briefly) our current knowledge of their functions
2) Describe the different types of cells present in primate cortex, along with how they are identified (i.e., the differences in their appearance under a microscope), how their physiology differs, and how they are connected.
Monday, November 22, 2010
prior prior pants on fior (sp.?)
This post will be about the application of prior knowledge in decision making. I'll start with a very simple example involving coin flips, then move on to discussing the results of a very cool cognitive science experiment.
So, here's our example. Let's imagine that you and I are betting on coin flips. We each wager $1, and you get to flip the coin. If it lands on heads, I win your dollar (and get mine back), while if it lands on tails, you get my dollar (and you get yours back). Simple, right? Good. Now, I pull out from my pocket a US 25-cent coin, and we start flipping coins. It lands on heads on each of the first 3 flips, giving me a profit of $3.
Now, before we go any further, you want to estimate the probability that any given flip will land on heads versus tails. From the data available in our (3-flip) experiment, it looks like the coin always lands on heads. So, based purely on that information, you should stop betting, and you should call me a cheat! Is that really the best course of action?
Intuitively, you know that it's not so simple. We all know that the exact sequence of flips we saw would happen 12.5% of the time if we were using a fair coin. And we've seen enough coins before in our lives to expect that they are fair (land heads or tails roughly equally often). And that's the crux of the issue: you have some prior knowledge about coins that tells you not to rush to hasty decisions.
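If you like equations, here is the two-line Bayesian version of that intuition (a hedged sketch; the prior strengths below are made-up numbers, just to show the effect): put a Beta prior on the coin's probability of heads and update it with the three observed heads.

```python
# Beta(a, a) prior on P(heads), updated with the observed flips:
# posterior is Beta(a + heads, a + tails), whose mean is easy to compute by hand.
heads, tails = 3, 0

for label, a in (("weak prior (seen few coins)", 1.0),
                 ("strong prior (lifetime of fair coins)", 50.0)):
    post_mean = (a + heads) / (2 * a + heads + tails)
    print(f"{label:40s} posterior P(heads) = {post_mean:.3f}")
```

A lifetime's worth of fair coins barely budges after three heads in a row; the flips only look damning if you pretend you've never seen a coin before.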
Now, I promised you a very cool cognitive science experiment, and I'm going to deliver just that. Here's the experiment. Tom Griffiths (now a Berkeley prof) asked a bunch of human subjects (randomly selected undergrads) questions, such as (quoted from Griffiths and Tenenbaum's paper)
"Imagine you hear about a movie that has taken in 10 million dollars at the box office, but don’t know how long it has been running. What would you predict for the total amount of box office intake for that movie?"
or
"If your friend read you her favorite line of poetry, and told you it was line 5 of a poem, what would you predict for the total length of the poem?"
In statistics, if you know the distributions of, say, lengths of poems, it's a fairly straightforward (Bayesian inference) problem to calculate the answers to these questions. But Griffiths' subjects were not stats wizards, and they didn't have time to calculate, and they were not provided with the distributions. Furthermore, they were explicitly instructed to make intuitive guesses, not calculations.
Shockingly (to me, anyway), the subjects' answers (on average) match the statistically optimal predictions!
So, somehow, your brain automatically "knows" all these statistical distributions from your everyday experience. And, when you make seemingly random intuitive guesses about stuff, your brain draws on that information to make (statistically) the best possible decision.
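For the statistically inclined, here's roughly the calculation the brain seems to be approximating (my sketch of the setup in Griffiths and Tenenbaum's paper; the power-law prior and its exponent are illustrative assumptions, not the values fitted in the paper).

```python
# Observe a quantity t assumed to fall uniformly at random within an unknown
# total t_total, put a prior on t_total, and report the posterior median.
import numpy as np

def predict_total(t_observed, prior, grid):
    """Posterior median of t_total given one observation t ~ Uniform(0, t_total)."""
    likelihood = np.where(grid >= t_observed, 1.0 / grid, 0.0)
    posterior = likelihood * prior(grid)
    posterior /= posterior.sum()
    cdf = np.cumsum(posterior)
    return grid[np.searchsorted(cdf, 0.5)]

grid = np.linspace(1, 1000, 200_000)
power_law = lambda x: x ** -1.5     # heavy-tailed prior, e.g. for movie grosses (assumed shape)
print("movie at $10M so far -> predicted total:", round(float(predict_total(10, power_law, grid)), 1), "million")
```

Swap in a different prior (poem lengths, human lifespans, whatever) and the same few lines give the corresponding "intuitive" prediction.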
Not too bad for a giant lump of fat.
Monday, November 15, 2010
Are you looking for a job?
This post contains a fun (imho) anecdote from the (30,000 person) neuroscience meeting I am currently attending.
I was presenting a poster yesterday afternoon, in the neuroethology section of this meeting. This was, on its own, a very new experience for me: mine was poster MMM35 (they start at A1), and the poster session was in this gigantic warehouse-like space at the San Diego convention center. There could have easily been 10,000 people in this room.
About half-way through the poster session, a scientist from [a prestigious university] approached me after listening to me explain my poster, and asked "Are you looking for a job? How would you feel about joining our new group in [a relatively new area of research]?"
Now, one of the things I struggle with is being confident that the work I am doing is of interest to the general scientific community. This sort of feedback (and that of other people I spoke with) reaffirms that I am doing something at least mildly worthwhile (in addition to fun!).
Thus, I declare SFN to be a success, with several days more conferencing to go!
Wednesday, November 10, 2010
SoCal
I'm at SFO now, on my way to Los Angeles for the annual Fulbright science retreat. These things are always lots of fun. More importantly, they are a great excuse to have informal conversations with scientists from many different fields. This is always really interesting, and often sparks new ideas that wouldn't come up otherwise. In particular, it was at one of these retreats that I cemented my decision to quit doing "traditional" physics, and start doing research in theoretical neuroscience.
By contrast, most conferences are single-discipline.
From LA, I'm off to San Diego for the annual Society for Neuroscience meeting. This is a huge meeting (over 30,000 scientists!), and is single-disciplinary (although neuroscience is a pretty multidisciplinary field, so this is a bit of a misnomer).
Hopefully I'll have a chance to hit the beach (and maybe do some dinghy sailing in the warm SoCal waters... a nice contrast from the frigidity of the ocean in the SF area), in addition to some serious sciencing.
That's all for this (decidedly low-content) post. I'll write something more serious about the role of statistical priors on decision-making when I get a chance.
Sunday, November 7, 2010
the thermodynamics of mid-term elections
As most of you know, last week was the US mid-term elections. Their electoral system is a bit intricate, and I'll (briefly) summarize the key feature before I move on:
Every four years, they elect the president. Congressional elections happen every two years: the entire House of Representatives and roughly a third of the Senate are on the ballot, along with many state governors. The election that falls half-way through a presidential term is the "mid-term". While the president is the face of the government, and has a lot of power, he can't actually institute much change without the support of congress.
Okay, now that we know the lay of the land, let's imagine that you are a newly elected president, who promised sweeping changes. Indeed, it's hard to imagine someone getting elected unless they make such promises (regardless of the nature of the promises: cutting spending, or building new social programs, or whatever): there's always lots of stuff wrong, and the voters want to elect someone to fix that stuff.
Well, if you promised to cut things, and remove existing social programs (or institutions, or whatever), you are in luck: it's pretty easy, and pretty fast, to do that. Two years into your term, when the mid-term elections come up, you can say to the voters "look at the stuff I promised to get rid of that I, indeed, got rid of. Give me more power in congress, and I will do more of this stuff." Consequently, you are likely to get that power, and to have increased power in the next two years of your term.
Now, let's imagine that, instead of promising to get rid of stuff, you promised to build new things (health care, or whatever).
Well, an important lesson from physics is that it's much harder to build things than to tear them down (this is the spirit of the second law of thermodynamics, which says that disorder tends to increase over time). Imagine, for example, how long it takes to build a house, compared to how long it takes that house to fall down once you set off an explosive, or start a fire, in it.
Okay, so it takes a long time to build new things, and so it's pretty likely that, come the mid-term election, you won't yet have succeeded in getting your new programs running, or at least not running very effectively.
Now, at the mid-term election, the opposition can correctly say "see, the president promised all this stuff, but it's not working. Give us more power!" The result is that the voters give more power to the opposition. Consequently, in the last half of your term as president, you have even less power in congress, making it very hard to ever get all of those programs working (the ones you promised to get working, in order to be elected as president in the first place).
So, we see an interesting effect: the relative slowness of building new programs (versus cutting them), coupled with the existence of mid-term elections that can change the balance of power in congress, means that administrations that cut existing programs meet with much more success than those that institute new ones. This is all a consequence of well-understood physics, but I haven't yet seen anyone spell out the consequences of the second law when it comes to elections.
Is this a good thing? I have my opinions, which I've tried to keep to myself. I'll let you decide.
Disclaimer: I am not a political analyst, nor do I have any training in political science. But, I know physics, and I'm willing to take a shot at applying that knowledge to any domain in which I think it is appropriate.
Thursday, November 4, 2010
computing at a Van Halen concert
So... one of the key features of the brain is that it is, in some sense, noisy.
I mean this in the sense of electronics, or communications "noise" (like the static when you listen to a radio station and your radio isn't exactly tuned right). This noise makes it hard to pick out the underlying "signal": the thing you are actually interested in.
Well, neurons in your brain are also pretty noisy: when presented with the same stimulus over and over again, they don't always respond the same way. Furthermore, the environment we live in is intrinsically noisy: very chaotic things like winds, cloud cover etc. mean that even the same tree will look slightly different each time you look at it.
Somehow, the noisy operation of your brain, functioning in this noisy world, still allows it to do things (like recognize that tree) that even super-advanced computers have trouble with. Those computers have none of this randomness associated with their operation. One possibility is that the noise in your brain is, somehow, crucial for it to function properly (as opposed to being a distraction that stops it from working).
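Here's a toy Python illustration of that possibility (my own example of a "stochastic resonance"-flavored effect, not anything from the talk): a weak signal that never reaches a neuron's firing threshold on its own starts to get through once you add a little noise.

```python
# A weak, sub-threshold signal plus a simple threshold "neuron": with no noise
# the neuron never fires; with moderate noise, its firing tracks the signal.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 5000)
signal = 0.8 * np.sin(2 * np.pi * t)          # sub-threshold input
threshold = 1.0

for noise_level in (0.0, 0.3):
    noisy_input = signal + rng.normal(0, 1, t.size) * noise_level
    spikes = noisy_input > threshold
    corr = np.corrcoef(spikes.astype(float), signal)[0, 1] if spikes.any() else 0.0
    print(f"noise {noise_level}: {spikes.sum():4d} threshold crossings, "
          f"signal correlation {corr:.2f}")
```

No noise, no spikes at all; a moderate dose of noise, and the spikes start to carry information about the underlying signal.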
Yesterday's Redwood Center talk was from some Bay Area entrepreneurs who are trying to make computing machines that have some of this randomness built in as part of their core functionality. Basically, they are building a machine that works more like the brain.
They showed some pretty impressive results, although a fully functioning fake brain still lies in the very distant future.
Monday, November 1, 2010
is Eminem a closet neuroscientist?
"I can't tell you what it really is, I can only tell you what it feels like"
- Marshall Mathers
This quote, from a rap duet by Rihanna and Eminem, explains, in a nutshell, the key challenge in cognitive neuroscience: missing information.
You see, your brain is constantly trying to build up a good representation of your surroundings, which in turn allows you to make "sensible" behavioral decisions. However, the information that you can gather from the outside world is insufficient to know for sure the state of your environment ("what it really is", so to speak).
Now, your brain is pretty good at making educated guesses (inference) about the environment, but, at the end of the day, those guesses ("what it feels like") are all that you have available to guide your behavior.
As an example, consider the problem of vision. You have two eyes (probably), each of which collects light on a 2-D array of photoreceptors that are each sensitive to one of 3 colors.
But the world you are trying to understand has objects spread out in three dimensions, and with a near-infinite number of colors. So, clearly there is some information you are missing.
Your brain's ability to fill in the pieces, and make good guesses is absolutely remarkable. However, the fact that it's constantly making these insane leaps of inference also makes your brain very susceptible to being tricked.
This leaves us with an interesting dichotomy: the same computational inference ability that makes the brain such a powerful tool is also one of its main sources of weakness.
Thursday, October 28, 2010
pop-science and the science of getting popped
So.... the blogosphere is alight this morning with reports that more intelligent children grow up to consume more alcohol as adults than their less-clever peers.
Some data is presented, and that seems fairly compelling. In particular, this data comes from longitudinal studies that survey people and collect data on them repeatedly from childhood until adulthood.
This result has led a lot of people to speculate wildly about "why smart people drink more", but I think the issue is not quite as cut-and-dried as it is made to seem.
For example, one of the results of the same longitudinal (UNC) study is that alcohol consumption is negatively correlated with academic achievement in high school. So, it's not like drinking makes you smart, or anything. It's also not true that being smart makes you want to drink. If those things were true, you would expect to see high GPA and alcohol consumption positively correlated.
But, smarter kids drink more as adults. And, given the high-school study, it appears to be a late-onset effect (the smart kids don't drink more than everyone else in high-school, they wait until they are older).
What's my point? Well, for one thing, a lot of science reporting picks the flashiest headline it can ("OMG! drinking makes you smart", for example), to get people to read the story. And a lot of people browsing the news just skim through headlines to get a quick sense of the relevant information. But this whole process ignores the inherent messiness of scientific results, and can be very misleading.
So, before you rush out to put your kids in beer-chugging lessons, take a deep breath, and let the hype die down a bit.
the sound of settling
I've been running a lot of computer simulations lately.
These start in random initial conditions and (eventually) learn image features. When the simulation has figured it all out, its dictionary of features stops changing: we say that the simulation has "converged", or "settled".
This can be, and often is, a long process (several days, up to a few weeks), which is frustrating if you just want to know the answer!
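For the curious, the "has it settled?" test is nothing fancy; here is a schematic of the loop (a sketch, not my actual simulation code; the relaxation update in the usage lines is just a stand-in).

```python
# Schematic convergence check: keep applying the learning update until the
# feature dictionary stops changing appreciably.
import numpy as np

def train_until_settled(update_step, dictionary, tol=1e-4, max_iters=100_000):
    for i in range(max_iters):
        new_dictionary = update_step(dictionary)
        # relative change in the dictionary on this iteration
        change = np.linalg.norm(new_dictionary - dictionary) / np.linalg.norm(dictionary)
        dictionary = new_dictionary
        if change < tol:
            print(f"settled after {i + 1} iterations")
            return dictionary
    print("still restless after the maximum number of iterations")
    return dictionary

# toy usage: an "update" that just relaxes the dictionary toward a fixed target
target = np.ones((8, 8))
settled = train_until_settled(lambda D: D + 0.1 * (target - D),
                              np.random.default_rng(0).normal(size=(8, 8)))
```

In the real thing, each update is a pass of learning on image data, which is why those iterations add up to days of fan hum.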
So, what is the sound of a simulation as it settles? Well, it sounds like hope and despair, all set to the gentle hum of the computer's cooling fan. Ben Gibbard would be proud.
Sunday, October 24, 2010
who will watch the watchmen?
Thanks to my dad for sending me an excellent article.
For concreteness, the article in question is a popularized discussion of a paper published in PLoS Medicine (a high-profile medical journal) entitled "Why Most Published Research Findings Are False".
I think that Ioannidis (the author of the PLoS paper) makes some excellent points, but I am more confident in the quality of scientific publications than is he. Should you agree with me? I'll let you judge for yourself, once you understand the idea behind the argument.
So here's the basic idea. As a scientist, you impress funding agencies and hiring committees (and secure yourself a career), at least in part, by publishing in highly selective journals. Those journals only want to publish results that are "surprising" in some way. Now, on their own, both of these things are completely reasonable.
However, combining these properties, you get the result that surprising work is more often published, and has more impact in the scientific community, than does less surprising work. Here's a quick example (from Ioannidis, quoted by Freedman in the Atlantic article) to show how this works, which I modify slightly for my purposes.
The results of most experiments have some intrinsic randomness associated with them. So, if you repeat the experiment a few times, you expect to get a different, but (probably) similar, result each time. If you repeat the experiment enough times, you eventually get an answer that is very different from the norm. If you are repeating the experiment yourself, you know this, and identify the unusual result as being a statistical fluke. When you report your result, you include all of the trials (or even omit the outlier), and the reader has a good knowledge of the typical result, and the variation they can expect. This is good science, and is not a problem.
Now imagine that the experiment is very long and costly to perform, so you only do it once. With high probability, you get the typical (maybe boring) result, and either publish it in a low-ranking journal (where not many people read it), or not at all. However, there is some (maybe small) chance that you will discover something exciting, and will not know that the result is atypical, inasmuch as the result would not occur often if the experiment were repeated many times. If you do get the "exciting" (surprising) result, you publish it in a high-profile journal. Here, as in the first example, you are still not doing anything "wrong" as a scientist. Since the experiment can't be repeated, you can't say if the result is typical or not, but that's how it goes. You just report thoroughly what you observed, and how you did the measurement, and any relevant interpretations you made, and leave it to your readers to make responsible use of your results.
But, to save time in wading through the mountains of work being published, most scientists (myself included) start by reading the "important" journals, and don't spend as much time digging through the lesser ones.
Interestingly, the end result for the community seems to be that statistically atypical results have more prominence than do more typical ones. And no one has to do anything overtly "wrong" for it to happen: it's a natural consequence of giving more exposure to more "surprising" research.
So that's Ioannidis' argument, and it's pretty compelling.
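If you like toy numbers, here's a cartoon of the effect in a few lines of Python (my own made-up effect size and "surprisingness" cutoff, not Ioannidis's): every lab runs the same noisy experiment once, and only the surprising outcomes make the big journals.

```python
# Each lab measures the same true effect once, with noise; only measurements
# that clear a "surprisingness" threshold get the flashy, high-profile paper.
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.1
measurements = true_effect + rng.normal(0, 1.0, 10_000)   # one noisy experiment per lab

surprising = measurements[np.abs(measurements) > 2.0]     # what gets the big headline
print(f"true effect:                 {true_effect:.2f}")
print(f"average of all measurements: {measurements.mean():.2f}")
print(f"average of published ones:   {surprising.mean():.2f}  "
      f"({surprising.size} of {measurements.size} experiments)")
```

Nobody cheated, but the average published effect comes out several times larger than the true one, simply because the boring measurements never reach the journals everyone reads.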
As a theorist, I like to imagine that I am immune from such things (since I don't really do experiments, these "randomness" effects from experimentation don't affect my work in the same way). However, when I sit down to formulate new theories, they are often heavily guided by the observations of experimenters. And I, too, spend more time reading high-profile papers than low-profile ones. So, in some sense, I am as vulnerable as anyone else.
What to do about this? Well, I think we, as a scientific community, should be more willing to publish negative results (i.e., "I didn't see anything interesting happen"), as well as positive ones (i.e., "OMG! It totally turned blue!", or whatever). We should probably also not put such a premium on papers from high-profile journals, especially in terms of what we read to direct our research.
So, this is my mid-October resolution: I will spend more time reading results from low-profile journals, and give those results the same amount of thought that I put into higher-profile ones.
Thursday, October 21, 2010
it pays to be sparse
Today's post will be about sparseness.
The basic idea is that, if you look in my brain while it's processing an image, there will only be a small number of nerve cells active at any time. So, while the input image comes in as millions of numbers (the activity values of all the photoreceptors on my retina), my visual cortex is representing that image in terms of a much smaller number of variables.
This is good for a lot of reasons: it reduces the amount of energy I need to spend on image processing (small number of active neurons means less energy, and my brain takes up a lot of my body's energy budget), reduces the number of values that need to be passed on to the next stage of sensory processing, and it makes the input "simpler".
What do I mean by simpler? Well, on some level, my brain is seeking to "explain" the input image, in terms of a (usually small) number of relevant "causes". As an example, my desk right now contains a laptop, a coffee cup, and a picture of my girlfriend. If I want to make behavioral decisions, that's probably enough information for me: I don't need to actively consider all of the messy details of each of those objects, although I can figure them out if I want to.
So, by maintaining a sparse representation, my brain is forcing itself to find the relevant information, while filtering away a lot of the unnecessary details. For this reason, sparseness is one of the most important ideas in all of unsupervised learning.
Indeed, almost every paper published in the last 15 years about coding of sensory inputs boils down to seeking sparse representations of naturalistic stimuli.
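For the computationally inclined, here is a minimal sketch of what "seeking a sparse representation" means in practice: reconstruct an input from a dictionary of features while penalizing the number (well, the L1 norm) of active features. This is my own generic illustration with a random dictionary, not the model from down the hall, solved with a few soft-thresholded gradient steps.

```python
# Minimize ||x - D a||^2 + lam * ||a||_1 by iterative soft-thresholding (ISTA).
import numpy as np

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))                         # dictionary: 256 features for 64-pixel patches
D /= np.linalg.norm(D, axis=0)                         # unit-norm features
x = D[:, [3, 100, 200]] @ np.array([1.0, -0.5, 2.0])   # an input actually built from 3 features

lam = 0.1
step = 1.0 / np.linalg.norm(D, 2) ** 2                 # safe gradient step size
a = np.zeros(D.shape[1])                               # activations, initially all silent
for _ in range(200):
    a = a + step * (D.T @ (x - D @ a))                 # gradient step on reconstruction error
    a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)   # soft-threshold: most units go to zero

print("active features:", np.count_nonzero(a), "out of", a.size)
print("reconstruction error:", round(float(np.linalg.norm(x - D @ a)), 3))
```

The L1 penalty is what does the work: it pushes most of the 256 activations to exactly zero, leaving a handful of "causes" to explain the input.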
The cool thing is that the guys who invented this notion work just down the hall from me. Berkeley FTW!
Monday, October 18, 2010
the mating game
I am back from the neuroscience retreat in Lake Tahoe. I had a lot of fun, and have some new ideas for science. These involve semi-autonomous sensorimotor control systems, and will not be discussed in this blog post.
Both nights of the retreat, the neuro grad students threw a big party for all of us. It was a great opportunity to drink a few beers, and do some networking.
At one of said parties, I was discussing some recent work I did on escape decisions for prey animals with imperfect information, and my colleague inquired about whether or not I had considered the issue of mating opportunities with imperfect information.
That question is the topic of this blog post.
Imagine that you are a lady-deer (doe), and that it's mating season. You will be in heat for 10 days, after which it's too late for you (you have to wait until next year to mate).
Imagine that you get to mate once and only once this mating season and that, each day, you get the chance to inspect one randomly selected man-deer (buck), and choose whether or not to mate with him. Also imagine that you can assess the quality of the man-deer from your interaction, and that not all men-deer are equal (some are better potential mates). What selection strategy can you use to mate with the best possible male, and how does that strategy change as the season progresses?
I think the answer is pretty simple, and we can figure it out by working backwards from the last day. On the last day, you should mate with whatever male you see, because it is your last chance to mate (and even a poor quality mating opportunity is better than none at all, right?!).
On the second-to-last-day, you should mate with the male if he is better than average (in other words, better than the expectation value of quality of the male you will see the next day).
On the third-to-last day, you should mate with the male if he is better than 2/3 of the population. To be more rigorous, I would say "mate if the male is better than the expectation value of the maximum quality of two randomly selected males", but the 2/3 rule is fine for our current purposes.
Clearly, with more time left in the mating season, we can afford to be more selective.
Formally, I think the optimal strategy is "mate with the male if he is better than the maximum quality in a group of n randomly selected males, where n is the number of days left in the mating season."
I suspect that this result is both easy to prove, and that it has probably already been done by someone (although I am too lazy to find out by whom).
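Since it's only a few lines, here is the backward-induction version in Python (assuming, for illustration, that mate quality is uniform between 0 and 1 and you meet one male per day). The exact rule is "accept him if he beats the expected value of continuing to search optimally", which agrees with the "better than average" answer on the second-to-last day and comes out just a bit below the expected-max-of-n heuristic further from the deadline.

```python
# Backward induction for the doe's problem with quality ~ Uniform(0, 1).
def thresholds(days_in_season=10):
    v = 0.0                          # value of continuing with no days left: no mate at all
    out = []
    for days_left in range(1, days_in_season + 1):
        out.append(v)                # accept today's male if his quality beats v
        v = 0.5 * (1.0 + v * v)      # E[max(quality, v)] for quality ~ Uniform(0, 1)
    return out

for days_left, threshold in enumerate(thresholds(), start=1):
    print(f"{days_left:2d} day(s) left: accept if quality > {threshold:.3f}")
```

Either way, the qualitative answer stands: the more days you have left, the pickier you should be.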
Anyhow, next time you are people-watching at a club, and you see people pairing up with strangers, look at the clock, calculate how long until "last call", and consider the subtle mathematics behind "the mating game."
Both nights of the retreat, the neuro grad students threw a big party for all of us. It was a great opportunity to drink a few beers, and do some networking.
At one of said parties, I was discussing some recent work I did on escape decisions for prey animals with imperfect information, and my colleague inquired about whether or not I had considered the issue of mating opportunities with imperfect information.
That question is the topic of this blog post.
Imagine that you are a lady-deer (doe), and that it's mating season. You will be in heat for 10 days, after which it's too late for you (you have to wait until next year to mate).
Imagine that you get to mate once and only once this mating season and that, each day, you get the chance to inspect one randomly selected man-deer (buck), and choose whether or not to mate with him. Also imagine that you can assess the quality of the man-deer from your interaction, and that not all men-deer are equal (some are better potential mates). What selection strategy can you use to mate with the best possible male, and how does that strategy change as the season progresses?
I think the answer is pretty simple, and we can figure it out by working backwards from the last day. On the last day, you should mate with whatever male you see, because it is your last chance to mate (and even a poor quality mating opportunity is better than none at all, right?!).
On the second-to-last-day, you should mate with the male if he is better than average (in other words, better than the expectation value of quality of the male you will see the next day).
On the third-to-last day, you should mate with the male if he is better than 2/3 of the population. To be more rigorous, I would say "mate if the male is better than the expectation value of the maximum quality of two randomly selected males", but the 2/3 rule is fine for our current purposes.
Clearly, with more time left in the mating season, we can afford to be more selective.
Formally, I think the optimal strategy is "mate with the male if he is better than what you would expect to end up with by waiting out the remaining n days and playing this same strategy," where n is the number of days left in the mating season. That works out to roughly, though not exactly, "better than the best of a group of n randomly selected males."
I suspect that this result is both easy to prove, and that it has probably already been done by someone (although I am too lazy to find out by whom).
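For the code-inclined, here is a minimal sketch of that working-backwards idea in Python. It assumes mate quality is uniform between 0 and 1 and that ending the season unmated counts as quality 0; both are made-up assumptions, purely for illustration, not anything from a real deer study. The thresholds it computes are the exact "how picky should I be today" numbers under those assumptions.

```python
import random

def thresholds(n_days):
    """Backward induction for the doe's problem, assuming quality ~ Uniform(0, 1)
    and that never mating counts as quality 0 (illustrative assumptions only).

    v[k] = expected quality you end up with when k days remain and you play
    optimally.  On a day with k days remaining, accept the current male iff
    his quality beats v[k - 1], the value of waiting.
    """
    v = [0.0]                                  # 0 days left: season over
    for k in range(1, n_days + 1):
        t = v[k - 1]
        v.append((1.0 + t * t) / 2.0)          # E[max(X, t)] for X ~ U(0, 1)
    return v

def simulate_season(n_days, v, rng):
    """Play one season with the threshold strategy; return the chosen quality."""
    for k in range(n_days, 0, -1):             # k = days remaining, today included
        quality = rng.random()
        if quality > v[k - 1]:                 # better than the value of waiting?
            return quality
    return 0.0                                  # season ended with no mate

if __name__ == "__main__":
    v = thresholds(10)
    print("thresholds, last day first:", [round(x, 3) for x in v[:10]])
    rng = random.Random(0)
    outcomes = [simulate_season(10, v, rng) for _ in range(100_000)]
    print("average quality over many seasons:",
          round(sum(outcomes) / len(outcomes), 3))
    # the average should land near v[10], the value of a full 10-day season
```

Under these assumptions, the threshold with 10 days to go comes out around 0.85, drops to 0.5 on the second-to-last day (the "better than average" rule above), and to "take whatever shows up" on the last day. It also sits a little below the "best of n random males" benchmark, because you can't actually count on getting the best of the remaining days.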
Anyhow, next time you are people-watching at a club, and you see people pairing up with strangers, look at the clock, calculate how long until "last call", and consider the subtle mathematics behind "the mating game."
Thursday, October 14, 2010
Retreat!
Today will be a rare two-post day.
Tomorrow, all the neuroscientists (myself included) are off to Lake Tahoe for our annual neuro retreat.
What, you might ask, does one do at a neuroscience retreat? Well, it's kind of like what you would get if a frat party mated with a science conference and their children lived in the woods.
We'll present the stuff we've been working on to the other Berkeley neuro people, drink a few beers, maybe go swimming or canoeing (or whatever).
Should be good times. Expect a more thorough post next week.
It pays to be submissive
It pays to be submissive... of fellowship and scholarship applications, that is.
Most students apply for some form of scholarship, bursary, or fellowship at some point in their lives. Often, they apply because: a) someone told them they would be a good candidate, or b) they read the qualities for which the award is given and thought they'd be a good match.
Those are excellent reasons to apply for stuff, but simply using a) and b) as your criteria for what to apply for results in a lot of missed opportunities.
When I was an undergrad, I was pretty shameless in applying for every scrap of money for which I wasn't explicitly ineligible (as a white male, some of the "for visible minorities", etc. awards were just not gonna go to me). In fact, the University Women's club of Vancouver gives out an annual scholarship (several thousand dollars), for which the criteria don't explicitly state that the recipient must be female. I applied, and subsequently won the award!
Here is my point, and some A+ advice for academic success: often the cost of applying for stuff (in terms of time and effort) is very low compared to the benefit that you get if you win (i.e., 30 minutes of work to apply for a $5,000 scholarship is a pretty good hourly rate!). On those grounds alone, you should apply for anything you have even a remote chance of winning.
But there's another, potentially more important, effect that I like to call the "cash snowball". You see, most awards you apply for ask you to list the other awards you've won. And most committees look at that list and use it to decide how "good" you are. So, if you've won lots of stuff, you will tend to be more successful in winning future stuff.
I suspect that this trend still holds, even if the committee has never heard of the awards on your list. So, they don't know how prestigious (or not) the award was: they just know that someone else thought you were a winner.
So, applying for lots of (even un-prestigious) awards early on in your academic career can be a solid way to set yourself up for future success. It's not a guaranteed strategy for success, but it sure can help.
I suspect the same is true for non-academics: since the cost of finding a better job is low compared to the value of having a better job, it is probably a good idea to always keep your eyes open for new opportunities and to be shameless in pursuing them.
I will refrain from giving relationship advice.
Tuesday, October 12, 2010
you gotta know when to fold 'em
I'm a scientist.
By definition that means that I am always trying to do things that have never been done before, and may not be possible. Sometimes, that impossibility is bound to creep up on me.
In fact, the more interesting the research question is, the more likely it is that it's not solvable (because, if it's interesting and possible to solve, it's likely that someone will already have solved it).
The problem is that it's very very rarely obvious that a problem is actually not solvable. There's always the chance that, if I only had some new insight, or was a little smarter, I could figure out whatever it is that I'm toiling over. And, once I've sunk months into some question, it gets tough to just jump ship and move on.
For some good advice on this issue, I turn to country music singer Kenny Rogers. The real question is, how do you know when to walk away, and when to run? Unfortunately, Kenny can't answer that question, and neither can I.
Friday, October 8, 2010
computer codes killed the analytical math star
I'm an awkward code writer and I ain't gonna lie, but I'll be damned if that means that I ain't gonna try
When I started university, I had no idea how to write code, and I was sure that I didn't really want to. But, the SFU physics department required me to take a programming class in order to get my degree (and many years later, I'm glad they did!).
When I was first taught to write code, I understood how to do it, but saw it as something that was probably not necessary for my career. It was just a hoop to jump through en route to getting a degree.
My first summer research job was in a materials chemistry lab. I spent my days mixing chemicals, etc. That experience strengthened my conviction that computer programming wasn't necessary.
My next summer research job was in a nuclear physics lab. Most of what I actually accomplished that summer was writing a computer program to simulate reactions in the apparatus. I was glad that I knew how to program computers, but was still pretty sure that this was a one-time hassle.
Since then, I've worked in particle physics, astrophysics, and now theoretical neuroscience. In all of these fields, most of my day-to-day activities have revolved around writing code to analyze data, or to simulate complicated math problems.
I'm still not great at coding, and I don't love writing code (although I like it more than I used to!), but I do love the power of being able to solve mathematical problems that are so complex I'd have no hope of solving them by hand.
I guess I'm in this code writing thing for life.
To the young kids out there eager to be physicists, I suggest that you learn to be an expert computer programmer. In fact, learn to love programming. It'll make things much easier for you down the road.
Wednesday, October 6, 2010
nice guys finish last, sort of
In this paper, the authors considered the problem of a group of bacteria living together. The bacteria can make proteins that are needed for metabolizing sugars ("cooperators"), but which cost energy to make, or they can simply use the proteins around them while producing none ("cheats"). They then did experiments to work out which mix of cooperators and cheats allowed the population to grow the fastest.
The result is quite surprising: adding some cheaters makes the population grow faster than a population of all cooperators.
Essentially, what happens is that, when there is plenty of protein around, the cooperators have plenty of sugar, so they slow down protein production. When there's a shortage of sugar, however, the cooperators produce more of the proteins.
Adding some cheats to the population reduces the sugar supply, driving the cooperators to produce more of the proteins, allowing the population to get more sugar, and thus to grow faster.
This is a very interesting result, and may tell us a lot about group dynamics in competitive-cooperative environments.
Tuesday, October 5, 2010
a dirty free-for-all
This past weekend was the annual Sonoma county harvest fair.
H. and I drove up Sunday morning for a relaxing day of rural pursuits, including tastings of the winning wines from the harvest fair wine competition (btw. the Stryker cab. was brilliant!), a sheep-herding contest, "llamas of wine country" (I kid you not), and the world championship grape stomp competition. That competition will be the focus of this blog post.
The contest itself is pretty simple. Each team consists of one stomper (who stands in a barrel full of grapes, and mashes them with their bare feet), and one person whose job is to collect the juice (with their bare hands). Each team has 30 lbs of grapes, and 3 minutes to collect the most juice. When you enter, you first compete in a qualifying round, and the winning team from each qualifier moves on to the final. The winner of the final gets $1000, and some plane tickets. Pretty straight-forward, right?
Well, the hole through which one attempts to extract the juice is several inches above the bottom of the barrel, so the juice collection is a bit tricky. It turns out that it's quite straightforward to mash all the grapes (and that takes about 30 seconds), so the efficient collecting of juice is what really determines the winner.
Heather and I had the misfortune of competing against the defending champions (from 2004,2006,2008, and 2009) in the qualifier, and thusly did not advance to the finals. However, our experience in the contest (and watching a few of the rounds after ours) gave me some ideas on how to improve our juice-collecting.
In essence, the stomper needs to create a standing wave inside the barrel, with a maximum located right at the hole. That way, there's always juice pushing through the hole. The collector, then, just needs to keep the hole from getting clogged with peels (and possibly use their hands to help maintain the wave).
This may require some practice, but we've still got 364 days until next year's championship. Now, back to science.
Thursday, September 30, 2010
how good is optimality?
Evolution is, undoubtedly, the key principle of theoretical biology. Here's an example to illustrate why it is such a powerful idea.
Imagine that I start off with a whole bunch of animals, 1/2 of which are red, 1/2 of which are blue. Every year, every red animal has 2 babies, and then dies. Every year, each blue animal has 1 baby, and then dies. Well, in this (very simple!) scenario, the number of blue animals stays constant, while the number of red animals increases very fast (it doubles every year!). It's not hard to see that, if I wait a long time, and then look at the population, it will consist of mostly red animals: the population "evolved" to be more red.
This example illustrates the basic idea: over time, populations change to resemble those animals that have the most babies.
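Here's a quick numerical version of that red/blue example, with nothing in it beyond the doubling arithmetic stated above:

```python
def red_fraction(years, red0=100, blue0=100):
    """Red animals have 2 babies then die (population doubles each year);
    blue animals have 1 baby then die (population stays constant)."""
    red, blue = red0, blue0
    for _ in range(years):
        red, blue = 2 * red, 1 * blue
    return red / (red + blue)

for y in (0, 1, 5, 10, 20):
    print(y, "years:", round(red_fraction(y), 6))
# 0 years: 0.5 ... 10 years: ~0.999 ... 20 years: ~0.999999 -- mostly red
```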
If you are constructing a theoretical model of how animals look, or behave (or whatever), then, you have a seemingly easy task: for any property (say, size, for example) of the animal, estimate how many babies an animal with any value for that property (100lbs vs 50 lbs, etc.) will have, then choose the value that maximizes the number of babies.
The problem is that it's often not very clear how to estimate the number of babies based on one particular property. In fact, often different properties will be in conflict. For example, it would be, in principle, good for me to have a much bigger brain. However, then I would require more food (brains consume a lot of energy), so I would be more prone to starvation. How does nature balance these conflicting goals? And, how does brain size relate to number of babies?
Most of the time, theoretical biologists ignore these complications.
Instead of thinking about the number of babies an animal produces, they just postulate some "goal" for the system they are studying, and then figure out the best way to meet that goal. They usually also ignore the fact that different aspects of the animal might have competing interests.
For example, one line of research might go something like this: "The goal of vision is to allow the brain to form an accurate model of the external environment. So, I theorize that the visual system should look like the best possible camera (or whatever) for making high-fidelity images of the world."
So far, I have probably come off as being very critical of this optimality approach. However, it's an approach that I use quite often (and the "line of research" in quotations above is one that I am currently pursuing), because it is relatively straightforward, and often gives useful insights into the workings of complicated biological systems.
The whole point of theoretical biology is to make (educated) guesses about how stuff might work, and how it might all fit together. These guesses will (hopefully) inform new experiments that will let us make better models, and the cycle continues. In that sense, a theory that's "wrong" is still useful, so long as it leads people to ask questions that generate new insights.
So what's my point here? Well, for one thing, it's actually pretty tough to do good work in theoretical biology. Also, while it may be a fine starting point to consider parts of the animal in isolation, we eventually need to assemble all the pieces, and consider the way evolution acts on individual animals, and on populations of animals.
Tuesday, September 28, 2010
hot, hot, hot!
So... it's right around 95 Fahrenheit right now in Berkeley (that's something like 35 Celsius, for all the Canucks who read this). Fortunately, I'm not in L.A. right now (it was 113 Fahrenheit = 45 Celsius there yesterday, although there are other reasons I'm glad to not be in L.A!).
Anyhow, it is Hot out (capital H intentional), and that's got me thinking a few things:
1) Man, I wish my office had air conditioning
2) Yo quiero una cerveza fria
3) Why is it that the heat makes people so lethargic?
Now, I'm not really an expert on this last point, but I'm gonna take a wild stab at this one (that's what theorists do, right?). Here goes:
When you do stuff (any stuff), your metabolic rate increases, which generates some heat, since your body is not 100% efficient at using its energy for the stuff you are doing.
The heat generated warms you up. Of course, if you get too hot, things go pear shaped faster than you can say "Allo gov'nah." (now imagine saying this with a strong Cockney accent).
So, we may have evolved this heat-triggered lethargy as a way of avoiding overheating when it's hot out. Seems pretty obvious, right? Well, it's too hot for any deeper insight.
Thursday, September 23, 2010
learning, unsupervised
This post is about image processing in the brain.
If you look at a digital image, the input is just a bunch of numbers (the red, green, and blue values for each pixel). The same is (sort of) true for the data your eyes collect from the world.
But, how does your brain go from this long list of numbers to the more abstract (and useful) representation "I am looking at my desk, with a laptop and a cup of coffee on it" (or whatever you happen to be looking at)?
There's a lot of stuff going on here that is just not yet known. This is also, incidentally, more-or-less what my PhD research is about.
What a lot of people (myself included) suspect is that the first few stages of image processing in the brain are just there to find common patterns, in a way that reduces redundancy. As an analogy, consider this line of text: thisisabunchofwordswithnospacesbutyoucanstillfigureitout
When your brain sees this, it "knows" what the common features are (words), and it picks them out of the slop. Then, the next stages of image processing (that do the abstractions, etc.) get these nice neat "words" to process instead of the (more complicated) raw input.
This process of finding the common patterns in a bunch of data is called unsupervised learning because there's no "teacher" signal saying "look for the red blob" (or whatever): you really just look around and find patterns that occur the most often.
If the early visual system does this sort of thing, then people should be able to write computer programs to find the common patterns in natural scenes, and use those to predict some of the properties of the visual center(s) of the brain. Indeed, several of the guys in our theory center built their careers on doing just that, with great success.
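To make that concrete, here's a minimal sketch of the simplest version of the idea: grab lots of small patches from an image, and use principal component analysis (plain numpy here; PCA is just my stand-in, not necessarily what anyone in our theory center actually uses) to pull out the patterns that account for most of the variation across patches. It's unsupervised in exactly the sense above: no one tells the code what to look for.

```python
import numpy as np

def patch_components(image, patch=8, n_patches=5000, n_components=16, seed=0):
    """Extract random patch x patch windows from a grayscale image (2-D array)
    and return the top principal components -- the 'common patterns' that
    this unsupervised procedure discovers, with no teacher signal.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    rows = rng.integers(0, h - patch, size=n_patches)
    cols = rng.integers(0, w - patch, size=n_patches)
    X = np.stack([image[r:r + patch, c:c + patch].ravel()
                  for r, c in zip(rows, cols)])
    X = X - X.mean(axis=0)                  # center the patches
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_components].reshape(n_components, patch, patch)

# usage sketch: components = patch_components(some_grayscale_image_array)
# each components[i] is an 8x8 pattern; for patches from natural images the
# leading ones tend to look like smooth gradients and oriented gratings
```

PCA is the simplest possible stand-in here; the real models in this line of work are fancier, but the flavor, finding the patterns that recur most across the input, is the same.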
These same techniques are useful in other fields that seek to find patterns in data, like finance (looking for stocks that are likely to behave similarly, for example).
Saturday, September 18, 2010
I can haz fellowship?
While I intended to do a lot of research this past week, I ended up spending a lot of time working on a funding application for next year (this is a slow process: you apply now for next fall's grants). Funding is very important (see below), and, unlike a lot of people, I actually kind of enjoy this process.
For those of you who have never written a scientific funding application, you pretty much write a lot about what you are planning to do, why you think it will work, and, most importantly, why that project (once you succeed) will matter in the grand scheme of things.
While day-to-day sciencing consists of a lot of frustrating details (why won't my code compile?!, for example), the fellowship game gives you an excuse to think about your work in a bigger context. And, after all, isn't this "big picture" the reason we do science in the first place?
That being said, I am antsy to make some new discoveries, and that means getting back to some details!
As promised, here is why it's important to win the fellowship game. The results here are stated for a typical physics graduate student at Berkeley. Results vary by department and by school.
If you do not win the fellowship game,
You will spend 20 hours/week teaching undergrads, in exchange for which the department will pay your tuition, and give you a salary that is just barely enough to pay for food and rent. Not too bad: you won't starve, or have to live on the street, and you get about 1/2 of your "work-time" (40 hours/week, right?) for research towards your thesis.
If you win the fellowship game,
You will not have to do any teaching. You may still choose to teach (and reap the financial rewards, in addition to your fellowship), at either 10 or 20 hours/week. Even if you don't teach (teaching yields extra $$ on top), you will be paid 25-75% more than your non-fellowship colleagues. You will also have around twice as many hours/week to work on your thesis project, meaning you will likely graduate sooner than they will. Graduating earlier is good because people hiring scientists will think you are smarter (when, really, you just happened to win this fellowship game).
Thursday, September 16, 2010
of mice and men
Apologies to a fellow northern Californian (Steinbeck) for the title of this post, which is about motivation and reward structures.
For the uninitiated, let me first give you a quick run down of a typical day at the office for a grad student:
8-9 am: check email, scan the contents of my favorite journals for any new papers of interest
9am-noon: look over the results from the previous day's experiments or simulations (often, these run overnight). Usually, this is when you realize that your experiment failed (or your simulation crashed, or whatever).
12-1: Lunch! Read some of the papers that I found in my quick morning scan. Be amazed by how smart the paper-writers seem to be.
1-4 pm: set up more experiments (or simulations). Most of this time is spent debugging, figuring out why the thing isn't working.
4-5 pm: go to a lecture by a visiting scientist. Be impressed by how smart (s)he is.
5-7 pm: commute home, make dinner, eat dinner, make conversation with housemate(s)
7pm-midnight: think about science, either actively or passively (maybe brainstorming in a quiet room, or watching TV).
Now, you'll notice that nowhere in this typical day is there "Eureka! I understand the brain now!" You'll also notice that the typical day doesn't contain "win a prize for being awesome" or "get compliments on how smart you are" or anything resembling a "reward" that would motivate getting out of bed and putting forth your best scientific efforts.
To understand why I (and my colleagues!) keep getting up to go to work in the morning, let's consider an old experiment by a guy named B. F. Skinner. In his experiments, he put a mouse in a box with a lever. When the mouse pushed the lever, he (let's assume it's a male mouse for now) may or may not get a food pellet as a reward.
If you give him a pellet with every lever press (consistent reward), he learns that the food is there waiting for him, and he presses the lever sometimes. No surprises here.
If you never give him a pellet, he learns to not bother pressing the lever. Also unsurprising.
So, what happens if you sometimes give him pellets for lever presses? You might guess that the result would be somewhere in the middle: he presses it less often than when the reward is consistent, but still sometimes. If you did make that guess, you would be wrong. Very, very wrong!
Here's the interesting part: if you give the mouse pellets for some, but not all lever presses, he learns that pressing the lever is good, but that he can't just rely on the lever giving him food. The result? The mouse frantically presses the lever, over and over again.
These experiments give a lot of insight into motivation. For the scientist, even though most days are pretty frustrating, the rare days (maybe one in 100 if you're really successful) when you win a grant (or fellowship, or whatever), or discover something new and exciting, are just frequent enough to make you keep doing it in the interim. To complete the analogy, scientists are mice, their labs are Skinner's boxes, and their lab equipment is the lever.
Tuesday, September 14, 2010
learning causal connections
"Correlation does not imply causality." Makes sense, right? Well, your brain doesn't think so.
Imagine that there are a bunch of neurons (nerve cells that process information in the brain), labeled A,B,C, and so on, and that there are connections between them. If neuron A emits a "spike" of activity, and then (shortly afterwards), neuron B spikes, the connection from A->B is strengthened, and the reverse connection (from B->A) is weakened.
What does that mean? Well, the next time neuron A spikes, it is more likely that it will cause B to spike (because the A->B connection is strengthened), but the next time B spikes, it is less likely that it will cause A to spike. So your brain is learning the causal structure of the world ("A causes B"), in some sense. And, as explained above, the "signal" that it uses to find this structure is the temporal correlation between activities: which one comes first.
This effect is called "spike timing dependent plasticity" (STDP) and it remains one of the most significant discoveries in neuroscience.
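For the coders out there, here's a minimal sketch of the pairwise STDP rule described above. The time constants and amplitudes are made-up illustrative numbers, not anyone's measured values; the point is just that "pre fires shortly before post" strengthens the connection, the reverse order weakens it, and the effect shrinks as the time gap grows.

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Return the new weight for the connection from neuron A (pre) to B (post),
    given one spike time from each (in ms).

    dt > 0 : A fired before B -> strengthen A->B (potentiation)
    dt < 0 : B fired before A -> weaken A->B (depression)
    The size of the change falls off exponentially with the time gap.
    """
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)
    elif dt < 0:
        w -= a_minus * math.exp(dt / tau)
    return w

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=15.0)   # A then B: weight goes up
w = stdp_update(w, t_pre=15.0, t_post=10.0)   # B then A: weight goes down
print(round(w, 4))
```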
Maybe that's why people are so quick to assume causality when they see correlations. Could it be that we are hard-wired to make logical fallacies? I dunno, but I sure would like to find out.
Monday, September 13, 2010
who is danger cat?
Well, I can't tell you who danger cat is, but I, for one, am back from Reno.
I left work a bit early on Friday (yay for grad school!) to drive up to Reno with some friends. Friday night involved $0.80 shots at the CalNeva in Reno (how is that even possible?!), and taking note of how depressing the Reno casinos are. I don't recall seeing a single smiling face, aside from those of Heather, Kati, Will, and myself.
After 2 hours of sleep, we scraped ourselves out of bed to go to the Reno hot air balloon festival. It was, in a word, awesome. A few minutes before sunrise, the "dawn patrol" of 5 balloons rose up over the desert. After the sun came up, we spent a couple hours wandering around the field, where 100 other hot air balloons were being unfurled and inflated for take-off. Watching all 100 of them take to the skies within the next hour was pretty much awesome.
A post-ballooning nap was in order, followed by lunch, and a short drive up to Kati's cabin in Graeagle.
The afternoon and evening were filled with a short hike in the mountains followed by a BBQ, many beers, and some much-needed sleep.
I am excited to be back in the Bay Area where (surprise!) it's cold and cloudy. Today should be a fun day of sciencing: first up, install developer tools on my Mac so I can compile some C libraries. Not as glamorous as you might hope, but that's how it goes sometimes.
Expect me to discuss some actual science (or academic stuff) in my next post.
Friday, September 10, 2010
Stay in School
I am a doctoral student in the physics department at UC Berkeley, working in theoretical neuroscience.
This blog will contain anecdotes about things I am learning (incidentally, I am currently learning about mechanisms underlying the learning process, both in machines, and in biological systems), the process of doing science, and the sorts of things that scientists do when they aren't sciencing (yes, "science" can be a verb!).
Being a grad student is awesome. I get to work on fun things, with a very flexible schedule, while meeting incredibly interesting (and smart!) people. I would strongly recommend it.
That's it for my first (somewhat boring) post. I'm off to Reno with some friends this weekend for the hot air balloon festival.