discussing topics in neuroscience, the process of doing science, and the everyday ennui associated with being a grad student

Thursday, September 30, 2010

Evolution is, undoubtedly, the key principle of theoretical biology. Here's an example to illustrate why it is such a powerful idea.
Imagine that I start off with a whole bunch of animals, 1/2 of which are red, 1/2 of which are blue. Every year, every red animal has 2 babies, and then dies. Every year, each blue animal has 1 baby, and then dies. Well, in this (very simple!) scenario, the number of blue animals stays constant, while the number of red animals increases very fast (it doubles every year!). It's not hard to see that, if I wait a long time, and then look at the population, it will consist mostly of red animals: the population "evolved" to be more red.
This example illustrates the basic idea: over time, populations change to resemble those animals that have the most babies.
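To make this concrete, here's a tiny simulation of the toy scenario above (a sketch of the arithmetic only; real population genetics is much messier):

```python
def simulate(years, reds=100, blues=100):
    """Toy model: every red animal has 2 babies and dies; every blue has 1."""
    for year in range(1, years + 1):
        reds *= 2    # red population doubles each year
        blues *= 1   # blue population stays constant
        frac_red = reds / (reds + blues)
        print(f"year {year:2d}: {frac_red:.4f} of the population is red")

simulate(10)  # by year 10, more than 99.9% of the animals are red
```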
If you are constructing a theoretical model of how animals look, or behave (or whatever), then, you have a seemingly easy task: for any property of the animal (size, for example), estimate how many babies an animal with any given value of that property (100 lbs vs. 50 lbs, etc.) will have, then choose the value that maximizes the number of babies.
The problem is that it's often not very clear how to estimate the number of babies based on one particular property. In fact, often different properties will be in conflict. For example, it would be, in principle, good for me to have a much bigger brain. However, then I would require more food (brains consume a lot of energy), so I would be more prone to starvation. How does nature balance these conflicting goals? And, how does brain size relate to number of babies?
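For concreteness, here's what "balancing" those goals could look like as a toy calculation: treat the number of babies as a benefit minus a cost, and maximize. Every curve below is invented for illustration; the point is just that the trade-off can have a well-defined optimum:

```python
import numpy as np

# All numbers here are made up: a bigger brain helps you find food
# (with diminishing returns), but it also costs energy to run.
brain_size = np.linspace(0.1, 10.0, 1000)   # arbitrary units
benefit = np.log(1.0 + brain_size)          # diminishing returns on smarts
cost = 0.15 * brain_size                    # energy cost grows with size
babies = benefit - cost                     # crude proxy for reproductive success

best = brain_size[np.argmax(babies)]
print(f"'optimal' brain size under these toy curves: {best:.2f}")
```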
Most of the time, theoretical biologists ignore these complications.
Instead of thinking about the number of babies an animal produces, they just postulate some "goal" for the system they are studying, and then figure out the best way to meet that goal. They usually also ignore the fact that different aspects of the animal might have competing interests.
For example, one line of research might go something like this: "The goal of vision is to allow the brain to form an accurate model of the external environment. So, I theorize that the visual system should look like the best possible camera (or whatever) for making high-fidelity images of the world."
So far, I have probably come off as being very critical of this optimality approach. However, it's an approach that I use quite often (and the "line of research" in quotations above is one that I am currently pursuing), because it is relatively straightforward, and often gives useful insights into the workings of complicated biological systems.
The whole point of theoretical biology is to make (educated) guesses about how stuff might work, and how it might all fit together. These guesses will (hopefully) inform new experiments that will let us make better models, and the cycle continues. In that sense, a theory that's "wrong" is still useful, so long as it leads people to ask questions that generate new insights.
So what's my point here? Well, for one thing, it's actually pretty tough to do good work in theoretical biology. Also, while it may be a fine starting point to consider parts of the animal in isolation, we eventually need to assemble all the pieces, and consider the way evolution acts on individual animals, and on populations of animals.
Tuesday, September 28, 2010
hot, hot, hot!
So... it's right around 95 Fahrenheit right now in Berkeley (that's something like 35 Celsius, for all the Canucks who read this). Fortunately, I'm not in L.A. right now (it was 113 Fahrenheit = 45 Celsius there yesterday, although there are other reasons I'm glad not to be in L.A.!).
Anyhow, it is Hot out (capital H intentional), and that's got me thinking a few things:
1) Man, I wish my office had air conditioning
2) Yo quiero una cerveza fria
3) Why is it that the heat makes people so lethargic?
Now, I'm not really an expert on this last point, but I'm gonna take a wild stab at this one (that's what theorists do, right?). Here goes:
When you do stuff (any stuff), your metabolic rate increases, which generates some heat, since your body is not 100% efficient at using its energy for the stuff you are doing.
The heat generated warms you up. Of course, if you get too hot, things go pear-shaped faster than you can say "'Allo, gov'nah" (now imagine saying this with a strong Cockney accent).
So, we may have evolved this heat-triggered lethargy as a way of avoiding overheating when it's hot out. Seems pretty obvious, right? Well, it's too hot for any deeper insight.
Thursday, September 23, 2010
learning, unsupervised
This post is about image processing in the brain.
If you look at a digital image, the input is just a bunch of numbers (the red, green, and blue values for each pixel). The same is (sort of) true for the data your eyes collect from the world.
But, how does your brain go from this long list of numbers to the more abstract (and useful) representation "I am looking at my desk, with a laptop and a cup of coffee on it" (or whatever you happen to be looking at)?
There's a lot of stuff going on here that is just not yet known. This is also, incidentally, more-or-less what my PhD research is about.
What a lot of people (myself included) suspect is that the first few stages of image processing in the brain are just there to find common patterns, in a way that reduces redundancy. As an analogy, consider this line of text: thisisabunchofwordswithnospacesbutyoucanstillfigureitout
When your brain sees this, it "knows" what the common features are (words), and it picks them out of the slop. Then, the next stages of image processing (that do the abstractions, etc.) get these nice neat "words" to process instead of the (more complicated) raw input.
This process of finding the common patterns in a bunch of data is called unsupervised learning because there's no "teacher" signal saying "look for the red blob" (or whatever): you really just look around and find patterns that occur the most often.
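Here's a cartoon of that idea in code, using the spaceless-text analogy. This is not a model of the brain; it just shows that frequent chunks ("words") pop out of raw input with no teacher, purely by counting:

```python
import random
from collections import Counter

random.seed(0)
words = ["the", "cat", "sat", "on", "a", "mat", "and", "dog", "ran"]
text = "".join(random.choice(words) for _ in range(300))  # spaceless "input"

# Count every short substring: real words recur far more often than
# accidental letter combinations, so they rise to the top of the counts.
counts = Counter(
    text[i:i + n]
    for n in range(2, 5)              # substring lengths to consider
    for i in range(len(text) - n + 1)
)
print(counts.most_common(10))
```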
If the early visual system does this sort of thing, then people should be able to write computer programs to find the common patterns in natural scenes, and use those to predict some of the properties of the visual center(s) of the brain. Indeed, several of the guys in our theory center built their careers on doing just that, with great success.
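For flavor, here's a heavily simplified stand-in for that kind of program. The actual research used techniques like sparse coding and ICA on real natural images; this sketch just runs PCA on patches of a smoothed-noise "image", but the shape of the pipeline (grab lots of patches, find their recurring structure) is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic "image": white noise smoothed so that nearby pixels are
# correlated, standing in for a natural scene (the real work used photos).
img = rng.standard_normal((128, 128))
for _ in range(3):
    for axis in (0, 1):
        img = (np.roll(img, 1, axis) + img + np.roll(img, -1, axis)) / 3.0

# Extract 8x8 patches and find the principal components of their covariance;
# the leading components are the "common patterns" across patches.
patches = np.array([
    img[i:i + 8, j:j + 8].ravel()
    for i in range(0, 120, 4)
    for j in range(0, 120, 4)
])
patches -= patches.mean(axis=0)
cov = patches.T @ patches / len(patches)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
print("five biggest patch-pattern variances:", eigvals[-5:][::-1])
```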
These same techniques are useful in other fields that seek to find patterns in data, like finance (looking for stocks that are likely to behave similarly, for example).
Saturday, September 18, 2010
I can haz fellowship?
While I intended to do a lot of research this past week, I ended up spending a lot of time working on a funding application for next year (this is a slow process: you apply now for next fall's grants). Funding is very important (see below), and, unlike a lot of people, I actually kind of enjoy this process.
For those of you who have never written a scientific funding application, you pretty much write a lot about what you are planning to do, why you think it will work, and, most importantly, why that project (once you succeed) will matter in the grand scheme of things.
While day-to-day sciencing consists of a lot of frustrating details (why won't my code compile?!, for example), the fellowship game gives you an excuse to think about your work in a bigger context. And, after all, isn't this "big picture" the reason we do science in the first place?
That being said, I am antsy to make some new discoveries, and that means getting back to some details!
As promised, here is why it's important to win the fellowship game. The results here are stated for a typical physics graduate student at Berkeley. Results vary by department and by school.
If you do not win the fellowship game:
You will spend 20 hours/week teaching undergrads, in exchange for which the department will pay your tuition, and give you a salary that is just barely enough to pay for food and rent. Not too bad: you won't starve, or have to live on the street, and you get about 1/2 of your "work-time" (40 hours/week, right?) for research towards your thesis.
If you win the fellowship game:
You will not have to do any teaching. You may still choose to teach (10 or 20 hours/week, your pick), and reap the financial rewards in addition to your fellowship. Even if you don't teach (teaching is what yields the extra $$), you will be paid 25-75% more than your non-fellowship colleagues. You will also have around twice as many hours/week to work on your thesis project, meaning you will likely graduate sooner than they will. Graduating earlier is good because people hiring scientists will think you are smarter (when, really, you just happened to win the fellowship game).
Thursday, September 16, 2010
of mice and men
Apologies to a fellow northern Californian (Steinbeck) for the title of this post, which is about motivation and reward structures.
For the uninitiated, let me first give you a quick run down of a typical day at the office for a grad student:
8-9 am: check email, scan the contents of my favorite journals for any new papers of interest
9am-noon: look over the results from the previous day's experiments or simulations (often, these run overnight). Usually, this is when you realize that your experiment failed (or your simulation crashed, or whatever).
noon-1 pm: Lunch! Read some of the papers that I found in my quick morning scan. Be amazed by how smart the paper-writers seem to be.
1-4 pm: set up more experiments (or simulations). Most of this time is spent debugging, figuring out why the thing isn't working.
4-5 pm: go to a lecture by a visiting scientist. Be impressed by how smart (s)he is.
5-7 pm: commute home, make dinner, eat dinner, make conversation with housemate(s)
7pm-midnight: think about science, either actively or passively (maybe brainstorming in a quiet room, or watching TV).
Now, you'll notice that nowhere in this typical day is there "Eureka! I understand the brain now!" You'll also notice that the typical day also doesn't contain "win a prize for being awesome" or "get compliments on how smart you are" or anything resembling a "reward" that would motivate getting out of bed and putting forth your best scientific efforts.
To understand why I (and my colleagues!) keep getting up to go to work in the morning, let's consider some old experiments by a guy named B. F. Skinner. In these experiments, he put a mouse (Skinner mostly used rats and pigeons, but let's say it's a mouse, and a male one at that) in a box with a lever. When the mouse pushed the lever, he may or may not get a food pellet as a reward.
If you give him a pellet with every lever press (consistent reward), he learns that the food is there waiting for him, and he presses the lever sometimes. No surprises here.
If you never give him a pellet, he learns to not bother pressing the lever. Also unsurprising.
So, what happens if you sometimes give him pellets for lever presses? You might guess that the result would be somewhere in the middle: he presses it less often than when the reward is consistent, but still sometimes. If you did make that guess, you would be wrong. Very, very wrong!
Here's the interesting part: if you give the mouse pellets for some, but not all lever presses, he learns that pressing the lever is good, but that he can't just rely on the lever giving him food. The result? The mouse frantically presses the lever, over and over again.
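One way to see why intermittent rewards breed such persistence: under partial reinforcement, a run of unrewarded presses is exactly what the mouse should expect now and then, so it takes much longer for "the pellets have stopped" to become obvious. A back-of-the-envelope calculation (my toy numbers, nothing to do with Skinner's actual data):

```python
import math

# After k straight misses, the chance of that run under the old schedule is
# (1 - p)**k. Solve for the k where it drops below 1%, i.e. where the mouse
# "should" conclude the lever is dead. Smaller reward probability -> far
# longer persistence. (Toy statistics, not a model of actual mouse learning.)
for p in (0.9, 0.5, 0.2, 0.05):
    k = math.log(0.01) / math.log(1.0 - p)
    print(f"reward probability {p:.2f}: ~{math.ceil(k):3d} misses before 99% sure")
```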
These experiments give a lot of insight into motivation. For the scientist, even though most days are pretty frustrating, the rare days (maybe one in 100, if you're really successful) when you win a grant (or fellowship, or whatever), or discover something new and exciting, are just frequent enough to keep you doing it in the interim. To complete the analogy, scientists are mice, their labs are Skinner's boxes, and their lab equipment is the lever.
Tuesday, September 14, 2010
learning causal connections
"Correlation does not imply causality." Makes sense, right? Well, your brain doesn't think so.
Imagine that there are a bunch of neurons (nerve cells that process information in the brain), labeled A,B,C, and so on, and that there are connections between them. If neuron A emits a "spike" of activity, and then (shortly afterwards), neuron B spikes, the connection from A->B is strengthened, and the reverse connection (from B->A) is weakened.
What does that mean? Well, the next time neuron A spikes, it is more likely to cause B to spike (because the A->B connection is strengthened), but the next time B spikes, it is less likely to cause A to spike. So your brain is learning the causal structure of the world ("A causes B"), in some sense. And, as explained above, the "signal" it uses to find this structure is the temporal correlation between the neurons' activities: which one spikes first.
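Here's a minimal sketch of such a learning rule in code. This is the generic textbook pair-based form with made-up parameters, not a model of any particular circuit:

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:      # A (pre) fired before B (post): strengthen A->B
        return a_plus * math.exp(-dt / tau)
    if dt < 0:      # B fired before A: weaken A->B
        return -a_minus * math.exp(dt / tau)
    return 0.0

w = 0.5  # initial A->B connection strength
for t_a, t_b in [(10, 15), (40, 43), (70, 68)]:   # spike times in ms
    w += stdp_dw(t_a, t_b)
    print(f"A at {t_a} ms, B at {t_b} ms -> w = {w:.4f}")
```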
This effect is called "spike timing dependent plasticity" (STDP) and it remains one of the most significant discoveries in neuroscience.
Maybe that's why people are so quick to assume causality when they see correlations. Could it be that we are hard-wired to make logical fallacies? I dunno, but I sure would like to find out.
Monday, September 13, 2010
who is danger cat?
Well, I can't tell you who danger cat is, but I, for one, am back from Reno.
I left work a bit early on Friday (yay for grad school!) to drive up to Reno with some friends. Friday night involved $0.80 shots at the CalNeva in Reno (how is that even possible?!), and taking note of how depressing the Reno casinos are. I don't recall seeing a single smiling face, aside from those of Heather, Kati, Will, and me.
After 2 hours of sleep, we scraped ourselves out of bed to go to the Reno hot air balloon festival. It was, in a word, awesome. A few minutes before sunrise, the "dawn patrol" of 5 balloons rose up over the desert. After the sun came up, we spent a couple hours wandering around the field, where 100 other hot air balloons were being unfurled and inflated for take-off. Watching all 100 of them take to the skies over the following hour was, well, also awesome.
A post-ballooning nap was in order, followed by lunch, and a short drive up to Kati's cabin in Graeagle.
The afternoon and evening were filled with a short hike in the mountains followed by a BBQ, many beers, and some much-needed sleep.
I am excited to be back in the Bay Area where (surprise!) it's cold and cloudy. Today should be a fun day of sciencing: first up, install developer tools on my Mac so I can compile some C libraries. Not as glamorous as you might hope, but that's how it goes sometimes.
Expect me to discuss some actual science (or academic stuff) in my next post.
Friday, September 10, 2010
Stay in School
I am a doctoral student in the physics department at UC Berkeley, working in theoretical neuroscience.
This blog will contain anecdotes about things I am learning (incidentally, I am currently learning about mechanisms underlying the learning process, both in machines, and in biological systems), the process of doing science, and the sorts of things that scientists do when they aren't sciencing (yes, "science" can be a verb!).
Being a grad student is awesome. I get to work on fun things, with a very flexible schedule, while meeting incredibly interesting (and smart!) people. I would strongly recommend it.
That's it for my first (somewhat boring) post. I'm off to Reno with some friends this weekend for the hot air balloon festival.