Thanks to the regular readers of this blog!
Since I started this blog, I've become a lot busier, and readership has dropped a bit, so blogging can't effectively compete for my time right now.
So, I am sorry to say that I will no longer be posting here.
I look forward to a time when I can dedicate more energy towards blogging.
In the meantime, here are some great science blogs, of which I am a fan:
Bad Science
Cosmic Variance
Not Exactly Rocket Science
Monday, October 10, 2011
Thursday, September 8, 2011
how many people even read scientific papers, anyway?
Turns out this can be a hard question to answer. Most journals don't publish that information. But, at the end of the day, I think it's a pretty important question to answer, in terms of assessing how much (if any) impact my work has had on other people, so I'm going to try and address it here.
Now, for basically any published paper, it's pretty easy (using Web of Science, Scopus, or Google Scholar) to find out how many other papers cited that one. So, what remains is to find a conversion from number of citations (C) to number of readers (R).
Fortunately, there are a few (rare!) journals that actually do publish the number of page views for the on-line versions of their articles. In particular, the PLoS (Public Library of Science) journals do just that. These are (pretty highly regarded) open-access journals, predominantly featuring biomedical research. So, using the PLoS data, we can take a stab at estimating how many readers one should associate with a given number of citations.
We'll do this by looking at randomly selected papers from PLoS Biology, from 2007 and 2008. The age (at least about a year old) is important because, for very recent papers, there hasn't been enough time for citing articles to be written and for those citations to be cataloged, so the apparent number of citations would be erroneously small. We'll also restrict ourselves to research articles. Most journals (PLoS included) publish a lot of different types of content (including reviews of various kinds), but I'm mainly interested in research articles for this post.
For each paper, we'll take the number of citations from Web of Science ('cuz we need to pick some source, and they're as good as any).
I ended up selecting 50 papers, mainly because I got bored of doing data entry after that long. I did check, however, and the numerical conclusions I draw (below) are the same as I found from an analysis of the first 30 papers I looked at, so they're probably reasonably robust to the sample size.
The results are actually pretty surprising.
First up, the average number of citations was 29.48, while the average number of page views (my proxy for readers) was (gasp) 6804.2.
That's way more readers than I might have naively thought. For a more in-depth analysis, we'll need to look at the relationship between number of citations and number of page views, on a paper-by-paper basis. That data is shown in the scatter plot below, on which each dot represents one paper. The red line is a best-fit line that is required to go through the origin (so a paper that's never been read can't be cited: seems reasonable, no?).
The first thing to notice is that the points don't really lie on that line. In particular, the correlation between the number of page views and the number of citations is actually pretty weak. We can quantify this with the linear correlation coefficient, which would be 1 if the dots fell perfectly on an upward-sloping line (so page views perfectly predicted citations), 0 if there were no relationship at all, and negative if more widely read papers were cited less often. For the data shown here, that correlation coefficient is 0.373, which confirms what we see visually: more highly read papers are more highly cited, but not by much.
Finally, the slope of the red line tells us (roughly) how many times a paper is read for each time it is cited. The fitted slope works out to 1/323, i.e. about 323 page views per citation. This agrees reasonably well with the ratio of the averages above, which suggests about 230.8 page views per citation.
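If you want to repeat this kind of exercise, here is a minimal sketch of the two calculations (the through-origin fit and the correlation coefficient). The citation and page-view numbers below are made up for illustration; the real data came from the PLoS article-level metrics and Web of Science.

```python
# Sketch of the analysis with made-up data: fit a line through the origin to
# (citations, page views) pairs and compute the linear correlation coefficient.
import numpy as np

citations = np.array([12, 45, 3, 60, 22, 8, 31, 17])        # hypothetical
page_views = np.array([4200, 9100, 1500, 15000, 5200, 2100, 7600, 3900])

# Least-squares slope of a line forced through the origin:
# minimize sum((views - m * citations)^2)  =>  m = sum(c * v) / sum(c^2)
m = np.sum(citations * page_views) / np.sum(citations ** 2)
print(f"~{m:.0f} page views per citation")

# Pearson (linear) correlation coefficient between the two quantities
r = np.corrcoef(citations, page_views)[0, 1]
print(f"correlation coefficient r = {r:.3f}")
```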
So, let's wrap this up by saying: for every time your paper gets cited, you can guess that several hundred people read it, but it could very well be way more (or less) than that number, because of the relatively weak correlation between readers and citations.
To wrap up this post, I'll point out a few potential flaws in my analysis:
1) I didn't consider any journals besides PLoS Biology, so I don't know how well my conclusions generalize.
2) I analyzed a pretty small number of articles.
3) There is at least one data point on that plot that looks like an "outlier", which probably could be excluded from this analysis.
Tuesday, August 30, 2011
We're all (sort of) blind, even if we can "see"
When your brain processes visual inputs, some information is ignored or discarded. This is pretty well known, and most of us have had experiences where we've failed to notice something that was right in front of us (for example, the "Where's Waldo?" books).
As a (slightly weak, but really cool) example, consider the pictures on this website. The first picture shows a man standing in front of a shelf in a supermarket aisle. It's not hard to imagine that, if you didn't know he was there, and looked pretty quickly at the scene, you might miss him.
Now, the question that I want to know the answer to (and that Freeman and Simoncelli have helped answer), is "what information is used, and what information is discarded?".
To help answer this question, they hypothesized a certain set of statistics that might characterize an image. The details of these statistics are technical, and are based on a model of visual cortex.
Then they took real images, computed those statistics for each one, and generated synthetic images that matched the statistics exactly but were otherwise as random as possible.
They then had human subjects perform a discrimination task, where they were shown one picture (a real one), then another one shortly after, and were asked whether the two images were the same or not.
What they found was that there were certain (pretty severe!) image manipulations for which subjects couldn't tell the difference between synthetic and real images, performing at chance levels (50% correct) on the discrimination task.
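As a side note, here is a minimal sketch (my own, not from the paper) of how one can check whether a subject is really "at chance": a binomial test of the number of correct answers against 50%. The trial counts are hypothetical.

```python
# Two-sided binomial test of a subject's accuracy against chance (50%).
from scipy.stats import binomtest

n_trials = 200     # hypothetical number of same/different trials
n_correct = 104    # hypothetical number of correct answers

result = binomtest(n_correct, n_trials, p=0.5)
print(f"accuracy = {n_correct / n_trials:.2f}, p-value = {result.pvalue:.3f}")
# A large p-value means the performance is indistinguishable from guessing,
# i.e. the subject could not tell the synthetic images from the real ones.
```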
The structure of the unnoticeable manipulations let them infer several properties of the visual system, which agree well with what others have measured using invasive electrophysiology techniques.
So, next time you look out your window, and think you are seeing all the "stuff" that's out there, think again! You're actually only seeing a (very!) impoverished fraction of the available information.
Monday, August 29, 2011
"bad science" is really good
Frequent readers of this blog may remember an earlier post, in which I discussed the problem of publication bias in medical literature.
Recently, I came across an excellent blog called Bad Science that chronicles the issues of communicating statistical results (specifically about medical research) to the broader community, and especially the difficulties that arise when (oftentimes sensationalist) media are involved.
The posts are generally very accessible, and serve to highlight the (growing?) scale of this problem. Kudos to Goldacre!
Monday, August 8, 2011
Howard Hughes is my Patron (?)
For those who don't know, Howard Hughes was an eccentric American gazillionaire who founded the Howard Hughes Medical Institute (HHMI) and bequeathed his substantial fortune to HHMI upon his death.
HHMI currently funds huge amounts of biological and medical research, predominantly in the US.
Recently, HHMI executives decided to start offering PhD fellowships to foreign graduate students, to support them for the last 2 or 3 years of their doctoral studies. I was lucky enough to be chosen as one of the recipients of this new award.
I am pretty excited about this for a few reasons:
1) I know some of the other students who won (and a few who were turned down) for this award, and they are a pretty talented bunch, so it's an honor to be included in this group
2) I can finish my studies at Berkeley without worrying about how to pay my tuition and salary
3) Unlike most PhD fellowships (NSF, for example), this HHMI grant includes a (modest) budget for travel to professional meetings. With the current state of Berkeley economics, I probably wouldn't get to go to many neat conferences otherwise.
4) I think this recognition will help me get grants and/or jobs in the future (although I could always be mistaken).
Anyhow, many thanks to Howard Hughes for ponying up the cash to support my studies! If you are interested, the HHMI press release has more details about the fellowship.
Also, if you are a foreign graduate student, doing a PhD in the US in a biology-related field, definitely consider applying for next year's competition!
Treating Parkinson's with Math
So... I'm back in the USofA now, after a long-ish trip to Sweden for the CNS conference. Overall, the meeting was pretty good, and there was some great science presented! On top of that, Stockholm is a gorgeous city, and well worth a visit.
One of the keynote talks at this meeting was by a German physicist-turned-neuroscientist (much like myself), on a very exciting new treatment for Parkinson's Disease.
For those of you who don't know, Parkinson's is a debilitating condition often associated with uncontrolled shaking of the limbs, and difficulty in controlling movement.
The key to the treatment is the realization that Parkinson's arises from overly synchronized neural activity in the midbrain, often caused by a loss of dopamine-producing cells. Normally, neurons fire relatively asynchronously (not all at the same time), so that synchrony is clearly an atypical situation.
The question, then, is: can that synchrony be removed, and if so, will that restore function for the Parkinson's patient? Shockingly, the answer is yes!
This, on its own, is nothing really new. In particular, a technique called deep brain stimulation (DBS) has been around for a while, and amounts to implanting something akin to a pacemaker in the brain. While that is already a big advance in Parkinson's treatment, it's not really a cure: as soon as the pacemaker is turned off, the symptoms return, and its effectiveness often decreases over time.
What Tass and his colleagues did, however, is a bit more interesting. They started by modeling the diseased condition as a set of coupled oscillators (a standard physicsy thing to do), wherein the couplings were modified by the neural activity via STDP (spike-timing-dependent plasticity), a well-known form of neural plasticity that is thought to underlie learning and adaptation.
They then realized that, if they could co-activate subsets of these oscillators, the STDP adaptation would, over time, break those connections that were forcing the synchronous activity.
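To give a flavor of what "a set of coupled oscillators" means here, below is a minimal Kuramoto-style sketch of my own; it is not Tass's actual model (there is no STDP or stimulation in it). It just shows how a coupling strength K controls the degree of synchrony, summarized by the order parameter R (R near 1 means pathologically synchronized, R near 0 means desynchronized).

```python
# Kuramoto-style phase oscillators with mean-field coupling (illustration only).
import numpy as np

rng = np.random.default_rng(0)
N, K, dt, steps = 100, 2.0, 0.01, 5000   # K is the coupling strength
omega = rng.normal(1.0, 0.1, N)          # natural frequencies (rad/s)
theta = rng.uniform(0, 2 * np.pi, N)     # initial phases

for _ in range(steps):
    z = np.mean(np.exp(1j * theta))      # complex order parameter
    R, psi = np.abs(z), np.angle(z)
    # each oscillator is pulled toward the mean phase, more strongly when K*R is large
    theta += dt * (omega + K * R * np.sin(psi - theta))

R_final = np.abs(np.mean(np.exp(1j * theta)))
print(f"coupling K = {K}: order parameter R = {R_final:.2f}")
# Re-running with K = 0.1 gives R close to 0 (asynchronous, "healthy"-looking),
# while large K gives R close to 1 (the pathologically synchronized regime).
```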
So far, I think it's a fairly neat story, but not an unusual one: a physicist sees some real-world thing and says "ah... I think that's easy to model", and writes down some equations.
However, Tass took this a bit further, and invented a device to perform that neural co-activation, leading to a technique he calls Coordinated Reset stimulation. He got permission to implant it in some Parkinson's patients, and studied their outcomes.
The results were surprising: after only a short period of treatment, the Parkinson's symptoms were gone, and they did not return when the treatment ended (much unlike the standard DBS pacemaker treatments).
A summary of this talk is available online. I think it's a great reminder to physicists to keep tackling real-world problems, and not to stop once the equations are solved, but rather to keep pushing until the solution is implemented, or it becomes apparent that it is not implementable.
Wednesday, July 13, 2011
White is the color of... LGN?
A lot of computational neuroscientists use something called information theory to try to understand how the parts of the brain communicate with each other. Info theory is a relatively young field, dating back to Claude Shannon's work in the late 1940s, and it basically formalizes (mathematically) how much one could learn from a signal.
The goal of this blog post is to understand a beautiful experimental result published in 1996 by Yang Dan and colleagues. To understand this, we need to first understand how redundancy affects information transfer efficiency.
Let's imagine that you and I are in a conversation, and I choose to repeat every word twice (so it starts as "Hi Hi how how are are you you doing doing today today??"). Clearly that is not an efficient use of my speech, because I could have said the same thing in fewer (half as many) words. One way to formalize that notion is to observe that, the way I spoke, you could predict every second word once you knew the odd-numbered words, so half the words are redundant.
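Here is a tiny sketch (my own illustration, not anything from the paper) that makes the "half the words are redundant" statement concrete: in the doubled message, every even-position word is completely determined by the word before it, so those words carry no new information and the entropy rate is halved.

```python
# Quantify the redundancy of the doubled message: how predictable is each
# even-position word given the word immediately before it?
message = "Hi Hi how how are are you you doing doing today today".split()

pairs = list(zip(message[0::2], message[1::2]))
predictable = sum(first == second for first, second in pairs) / len(pairs)
print(f"even-position words predictable from the previous word: {predictable:.0%}")
# 100% predictable: knowing the odd-numbered words tells you everything,
# so half the message length is wasted.
```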
What Yang Dan and colleagues showed is that the outputs of LGN neurons (the lateral geniculate nucleus, the relay between the retina and visual cortex) have the minimum possible amount of redundancy (like the case where I only say "Hi how are you doing today?" instead of repeating myself) when presented with naturalistic movies; they showed Casablanca to their subjects.
Now, on its own, that might seem unimpressive: maybe the LGN is just set up so that it always has non-redundant outputs. Well, they did a great control experiment to show that that's not true: they presented their subjects with white-noise stimuli (like the static you might see on old-timey televisions when the cable is out), and found that, in that case, the LGN outputs were highly redundant! What gives?
Well, it turns out that movies (and images) of real-world stuff (forests, cities, animals, etc.) all have very similar statistical properties. This means that, if you were to make a system for communicating those signals, you could set it up in a way that removes all the redundancies that occur in those movies (like, for example, nearby parts of an image tend to be the same brightness). But, if you took that highly engineered system and applied it to movies with different redundancies, it wouldn't work quite right.
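As a quick illustration of the "nearby parts of an image tend to be similar" point, here is a small sketch of my own, with a blurred noise image standing in (crudely) for a natural image: neighboring pixels in the smooth image are strongly correlated, while neighboring pixels in white noise are not.

```python
# Compare the correlation between horizontally adjacent pixels in white noise
# and in a spatially smoothed ("natural-ish") image.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)
white = rng.normal(size=(256, 256))          # white-noise image
smooth = uniform_filter(white, size=9)       # crude stand-in for a natural image

def neighbor_corr(img):
    """Correlation between each pixel and its right-hand neighbor."""
    return np.corrcoef(img[:, :-1].ravel(), img[:, 1:].ravel())[0, 1]

print(f"white noise:        {neighbor_corr(white):+.2f}")
print(f"smoothed 'natural': {neighbor_corr(smooth):+.2f}")
# A code optimized to remove the strong neighbor correlations of natural
# scenes would be wasted on the white-noise input, and vice versa.
```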
The result of Yang Dan's experiment suggests that, by adapting to the natural environment (possibly over evolutionary time scales), our brains are set up so as to do the most efficient possible job for typical real-world movies!
This remains to me one of the best success stories of systems neuroscience, in which a combination of mathematics (understanding information theory) and experimentation led us to better understand how our brains work.
Where have I been?
So... not much blogging has happened in awhile, and that's a bit uncool on my part.
However, since late April, I have been on a tear, spending 5 days in SoCal (Death Valley: I guess it's really southeast Cal), 5 days in DC, 10 days in Canada, 3 days cruising around SF Bay on my boat, and 3 days in Sequoia National Park. Add in trying to get some research done, and not much blogging has happened. But, for those aspiring grad students out there, let this be informative: being in grad school and having fun traveling are totally not mutually exclusive!
Thursday, April 28, 2011
Canada STEM Award for Americans
This post is mainly intended for undergrads who are thinking about going to grad school.
I did my undergrad degree in Canada, and was subsequently very fortunate to receive one of the US Fulbright Science and Technology PhD fellowships to attend UC Berkeley. These are great fellowships, and if you are a non-American interested in coming to the US for PhD studies, I strongly encourage you to look into that program.
Recently, I became aware of a new program which is basically the inverse of the one I am currently a part of. This is a program run by Fulbright Canada to bring top US students to Canada's best universities to pursue PhD studies. The benefits are many, so I would encourage any potential PhD students to investigate more fully.
Even if you've never considered studying in Canada, I urge you to think about it. From my experiences in materials science, nuclear physics, astrophysics, and particle physics, the research facilities in Canada are top-notch, and Canada has some of the world's most liveable cities. Fortunately, our best universities also tend to be in our nicest cities!
Best of luck!
Tuesday, April 19, 2011
Uncertainty and decision making
So.... here is a post about my first biology publication: "how should prey animals respond to uncertain threats?".
I'll summarize very briefly some ideas about gambling, and the Kelly criterion, and then discuss what that has to do with prey animals.
Let's start our discussion by imagining that you and I are going to gamble on coin flips. We will flip a coin, and bet at even odds (so if it's heads, I pay you the amount of the bet, and if it's tails, you pay me that same amount). But, the coin is biased in your favor, so that it comes up heads 55% of the time, and tails 45% of the time. This means that you have a 10% edge on the bet: on average, you expect to get back 110% of the bet, for each bet you make.
If we only do one coin flip, and you want to maximize your expected profit, you would bet everything you have. You might lose, but your expected profit is positive.
Instead, let's consider the case where we keep flipping the coin over and over again, and you try to maximize your long-term profit. In that case, it would be silly to bet all of your money on the first coin flip because, if you lost, you would lose the ability to make money on future bets (you would be broke and unable to keep betting). Back in the 1950s, J. L. Kelly demonstrated that the best possible strategy in this case is to bet 10% of your money on each coin flip (for even-odds bets, the optimal fraction is 2p - 1, where p is your probability of winning). As your bankroll grows, you bet more. This strategy provides the best balance between betting big (since you expect to make money on each bet, and bigger bets mean more profit) and avoiding going bankrupt (which eliminates any chance of future profit).
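Here's a quick sanity check of that claim, as a sketch (not part of the original argument): simulate many coin flips and compare the long-run growth rate for a few fixed betting fractions. Working with the log of the bankroll avoids numerical underflow when a strategy goes (nearly) broke.

```python
# Simulate repeated even-odds betting on a 55/45 coin at fixed fractions of
# the bankroll, and report the average growth of log-bankroll per flip.
import numpy as np

rng = np.random.default_rng(42)
p_heads, n_flips = 0.55, 100_000
wins = rng.random(n_flips) < p_heads

for fraction in (0.05, 0.10, 0.25, 0.50):
    log_bankroll = np.where(wins, np.log(1 + fraction), np.log(1 - fraction)).sum()
    print(f"bet {fraction:.0%} per flip -> growth {log_bankroll / n_flips:+.5f} log-units/flip")
# Betting 10% (the Kelly fraction, 2p - 1) gives the largest long-run growth;
# betting 50% loses money in the long run despite the favorable odds.
```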
In my paper, I discuss a semi-related problem, which is as follows.
Imagine that you are a deer, in a forest. You spot movement out of the corner of your eye, but you don't know for sure what is causing it. If it's a wolf (or whatever predator), you should run away to avoid being killed, but if it's not a predator (say, just some leaves blowing in the wind), then running away would waste energy, and cost you whatever mating or foraging opportunities were presently available to you.
Now we want to figure out what the deer (you) should do in that situation.
Interestingly, much like in our gambling example, the "correct" decision (the one that would be favored by evolution; the one that allows the deer to have the most offspring in its lifetime) is very heavily influenced by the uncertainty of the outcome. So, even if it might be immediately (on average) advantageous to "risk it", and not flee, when you are uncertain about whether or not a predator is present, the fact that you lose all future mating chances if you are wrong makes the "correct" decision strategy more cautious.
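To make this concrete, here is a toy model of my own (not the model from the paper, and with entirely hypothetical numbers): a single stay-or-flee decision where staying risks losing all future reproduction, and we ask at what predator probability fleeing becomes the better bet. The more future value there is to lose, the lower that threshold gets, i.e. the more cautious the optimal strategy becomes.

```python
# Toy stay-or-flee decision (illustration only; all parameters hypothetical).
#   q : probability that the movement really is a predator
#   m : probability of being killed if you stay and it is a predator
#   g : immediate foraging/mating gain from staying put
#   c : energetic cost of fleeing
#   V : expected future reproductive success if you survive
# Expected payoff of staying:  (1 - q*m) * (g + V)
# Expected payoff of fleeing:  V - c

def flee_threshold(g, c, V, m):
    """Smallest predator probability q at which fleeing beats staying."""
    # Solve V - c = (1 - q*m) * (g + V) for q.
    return (g + c) / (m * (g + V))

for V in (1.0, 10.0, 100.0):
    q_star = flee_threshold(g=1.0, c=0.5, V=V, m=0.8)
    print(f"future value V = {V:6.1f} -> flee once q exceeds {q_star:.3f}")
# The larger the future reproductive value at stake, the smaller the predator
# probability needed to make fleeing the "correct" (fitness-maximizing) choice.
```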
This issue - the influence of uncertainty on prey escape decisions - was not previously understood in the behavioral ecology models, but I am hopeful that future work in this field will be influenced by my result.
Tuesday, April 5, 2011
criticality
After a long hiatus, I am back to blogging.
Yesterday's physics colloquium was given by Bill Bialek, physicist and theoretical biologist at Princeton (and the PhD thesis advisor of my PhD thesis advisor). His talk was based on a recent paper titled "Are biological systems poised at criticality?". In the context of neuroscience, Bialek's basic observation is that, yes, neural systems appear to have this special "critical" property.
In particular, the observed correlations between the activities of two neurons are such that, if they were any stronger, the brain would be epileptic (recall that, in epileptics, the activities of neurons are amplified such that you get huge cascades of activity, resulting in seizures), but if those correlations were any weaker, the brain would effectively be "dead" (there would be no significant collective behavior).
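One cartoon way (my own illustration, not Bialek's analysis) to see why sitting exactly between "dead" and "epileptic" is special is a branching process: each active neuron activates, on average, sigma others in the next time step. Below sigma = 1, activity fizzles out; above it, activity explodes; and sigma = 1 is the critical point in between.

```python
# Branching-process cartoon of subcritical / critical / supercritical activity.
import numpy as np

rng = np.random.default_rng(3)

def cascade_size(sigma, max_steps=200, cap=10_000):
    """Total activity triggered by one initial spike, capped to avoid blow-up."""
    active, total = 1, 1
    for _ in range(max_steps):
        active = rng.poisson(sigma * active)   # each active unit triggers ~sigma more
        total += active
        if active == 0 or total > cap:
            break
    return total

for sigma in (0.8, 1.0, 1.2):
    sizes = [cascade_size(sigma) for _ in range(2000)]
    print(f"sigma = {sigma}: mean cascade size ~ {np.mean(sizes):.0f}")
# sigma < 1: cascades die quickly ("dead"); sigma > 1: they run away ("epileptic");
# sigma = 1: cascades of all sizes, the hallmark of criticality.
```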
Now, Bialek's work also discusses criticality in protein sequences, and collective animal behavior, but my interest is mainly in the brain.
Now, from a purely functional standpoint, this "criticality" seems to be sensible, and I could imagine it arising as a product of evolution; animals with more strongly correlated neurons would be epileptic, and they would die off, but so would those with less strongly correlated neurons, as they might be unable to effectively process information.
However, the brain is not static over the lifetime of the animal. We learn and adapt, and as we do, the correlations between neurons in our brains change.
How, then, is this criticality maintained? In other words, is there some kind of homeostatic mechanism that adjusts the correlations (or synaptic connection strengths that, presumably, alter these correlations), to keep them at this critical point?
These are, admittedly, ill-formed ideas at present, but I may very well get back to them when I have a chance.
In other news, my first biology paper was just accepted for publication in Frontiers in Computational Neuroscience. I will post a link to the paper when it is copy edited and ready for public consumption.
Wednesday, February 23, 2011
Utah!
Off to Salt Lake City tomorrow for the annual Computational and Systems Neuroscience (Cosyne) meeting. I'm giving a talk on Friday, with an expected audience of many hundreds of people. It's definitely a great opportunity to spread the word about my latest results, but clearly also a nerve-wracking experience.
I'm also pretty excited to see what other people are up to, and to spend some time in the mountains!
Wednesday, February 2, 2011
networking for dummies (and scientists)
If you're anything like me, you've been told your whole life that it's crucial to "network". No one really knows what this means, but it is apparently critical to getting jobs in the future. Today's post is about what I think networking means for scientists.
The basic idea is that, if people know you, like you, and respect your abilities, they will want to work with you. So, when they have job openings, they may remember you and call on you.
Case in point, I was recently sailing with an old friend who works for (insert local tech firm's name here), and they are looking for staff. He mentioned that he might be able to set something up if I am interested. Now, I'm staying in grad school 'till I'm done this PhD thing, but clearly having potential job opportunities is great, and is (in some sense) the "goal" of what people mean when they say "networking".
So, how does this good thing (possible job offering) arise, and how do you get there?
The canonical advice is "meet people who can do things for you, and make them remember you". That's why science conferences (and other places, I'm sure) are full of eager young go-getters foisting their business cards on anyone who will take one. I posit that this is an ineffective strategy, because those interactions lack meaning.
My advice is instead to do fun things, and to make friends who have similar hobbies (ideally who work in a diverse set of businesses). That way, you make meaningful connections with people, based on something real (as opposed to the fake friendliness that arises when you want something from them). Forget about networking! Go have fun!
Much later, when you are looking for work, feel free to call up people you know, especially those with connections in the industry in which you want to work.
Now, about conferences: obviously, science conferences are great places to meet smart people who share your interests (and may be able to offer you jobs). Clearly, they have value in this whole "networking" world. Thus, you should indeed go to conferences eager to share your work with others, and to learn what they are working on.
But, instead of trying to play every angle to give out your business card, I suggest you focus instead on learning, having fun, and meeting people for the sake of making friends. Once you have friends, the networking game is basically solved.
Hopefully, a lot of this is obvious. But, I have been given a lot of advice in the past that is quite contrary to what I have written here, so I think it's worth putting on (virtual) paper.
Best of luck!
Friday, January 7, 2011
why your brain loves raves (even if you don't take ecstasy)
Does anyone remember this ridiculous thing called a "rave"?
Waaaayyyy back in the nether years of my youth, these parties were semi-popular excuses to take copious amounts of MDMA (ecstasy) and listen to really bad techno music.
Aside from the love of MDMA, why were these things so popular? As a more general question, why do people enjoy music, and music with lots of heavy bass in particular? Tony Bell turned me on to one interesting idea a few months back, that I will explain to you now.
Your brain is a big mass of nerve cells called neurons that emit pulses of electrical activity to communicate with each other. At first glance, this activity appears chaotic, but there is some underlying structure to it: there are waves of activity called neural oscillations (brain waves) that travel across your brain. There are different kinds (frequencies) of brain waves: alpha waves oscillate 8-12 times per second, delta waves oscillate 1-4 times per second, and so on.
Lots of research has shown that these waves help to synchronize the activity of neurons, enhancing cognitive processes like memory and attention.
What, you might ask, do neural oscillations have to do with raves?
Well, it has been demonstrated that your brain waves can synchronize with (lock on to) external stimuli, like music, when the stimulus has the right frequency. Listen closely to the bass component (like the bass drum) of your favorite bass-heavy song, and you should notice that it has a few beats per second. In other words, it falls right in the frequency range of the delta waves in your brain.
For example, if you google "rave music", you find this youtube clip. Try listening to the bass line and counting how many beats you get in a 5-second window. It's probably around 10-14, depending on where you are in the song: that's about 2-3 beats per second, right in the delta range, which would allow your attention to lock on to it.
If you pay close attention, you'll find beats that fall neatly into the alpha range as well (8-12 beats per second).
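Just to spell out the arithmetic of that beat-counting exercise, here is a tiny sketch (the band edges are the ones quoted above).

```python
# Convert a beat count in a 5-second window to a frequency in Hz, and check
# which brain-wave band (as defined above) it falls in. Illustration only.
bands = {"delta": (1, 4), "alpha": (8, 12)}   # Hz

for beats_in_5s in (10, 14, 50):
    freq = beats_in_5s / 5.0
    band = next((name for name, (lo, hi) in bands.items() if lo <= freq <= hi), "none")
    print(f"{beats_in_5s} beats / 5 s = {freq:.1f} Hz -> {band} band")
```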
So.. what's my point? Well, maybe these brain oscillations are one of the reasons why bass can have such a strong effect on people.
I hope this provides some food for thought when you're at burning man this fall, or your own favorite music scene.