I've been reading a fair bit lately about agent-based modeling. Basically, these are models of interactions between agents, each of whom decides for themselves how they will behave.
One interesting question that comes up is "why should I be nice to people, when that niceness has an associated cost?" For example, imagine that I share my lunch with someone. In that case, I end up with less lunch. So, it would seem that the most successful agents would not engage in such sharing activities.
But, by and large, people are kind to each other, which raises the question: why do people sacrifice in order to help others?
In a very cold economic sense, the answer is that it is beneficial to sacrifice some resources to help others, because those people will remember your kindness and repay you later with kindness of their own (or you gain a reputation as a good person, and other people are kind to you in the future). That payoff makes it worthwhile to be nice to other people.
Lately, I have spent a fair bit of time interacting with an elderly faculty member at UC Berkeley. These conversations typically start with me reminding this individual of who I am (which is not surprising, given his age, and the fact that I am by no means an "important" person in the Berkeley physics scene).
This got me to thinking about how altruism might play out in a world where people do not remember your good deeds, and thus there is no chance of you being repaid for your kindness.
In such a world, the economic value of kindness is greatly diminished. If people made purely economic decisions, they would likely not engage in altruistic behavior.
I wonder how far such thinking would go in helping us to understand the workings of communities in which individuals are highly anonymous (if you are completely anonymous, you effectively interact with a memoryless populace, since no one knows who you are and thus cannot link your actions to some identity). These might include on-line communities, as well as large cities.
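To make the intuition concrete, here is a minimal toy simulation of my own (not taken from any particular paper, and with entirely made-up costs, benefits, and population sizes): agents meet in random pairs, sharing costs the giver one unit and gives the receiver three, half of the agents are unconditionally kind while the other half are selfish, and we simply switch memory of past favors on or off.

```python
import random
from collections import Counter

# Illustrative, assumed values: sharing costs the giver 1 and benefits the receiver 3.
COST, BENEFIT = 1.0, 3.0

def simulate(n_agents=40, n_rounds=20000, memory=True, seed=0):
    rng = random.Random(seed)
    kind = [i < n_agents // 2 for i in range(n_agents)]   # half kind, half selfish
    wealth = [0.0] * n_agents
    debts = Counter()   # debts[(debtor, creditor)] = favors owed (used only if memory)

    for _ in range(n_rounds):
        giver, receiver = rng.sample(range(n_agents), 2)
        owes_favor = memory and debts[(giver, receiver)] > 0
        if kind[giver] or owes_favor:          # kind agents always share; others only repay
            wealth[giver] -= COST
            wealth[receiver] += BENEFIT
            if owes_favor:
                debts[(giver, receiver)] -= 1  # an old favor is repaid
            elif memory:
                debts[(receiver, giver)] += 1  # the receiver remembers this kindness

    def average(group):
        members = [w for w, k in zip(wealth, kind) if k == group]
        return sum(members) / len(members)

    return average(True), average(False)       # (kind average, selfish average)

for mem in (True, False):
    kind_avg, selfish_avg = simulate(memory=mem)
    print(f"memory={mem}: kind agents average {kind_avg:.1f}, selfish agents average {selfish_avg:.1f}")
```

With these made-up numbers, the selfish agents come out well ahead in the memoryless run, while the kind agents come out ahead once favors are remembered and repaid; the only point is that the payoff of kindness hinges on someone keeping track.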
What's my point? Well, it might be a good idea to introduce yourself to your neighbors, and maybe to smile when you do so.
Wednesday, December 22, 2010
Saturday, December 18, 2010
gold rush!
So... the Discovery Channel loves to run TV shows about roughnecks doing roughneck things. And they do a fine job of it.
Deadliest Catch was an old favorite of mine; it's a documentary-type series about crab fishermen in the Arctic, and is very addictive to watch.
Last night, I watched a new show of theirs (in a rare moment of not sciencing) called Gold Rush. It's awesome. Basically, a bunch of unemployed men from Oregon got sick of sitting around being unemployed, so they sold all their stuff to raise $100K, used it to buy a couple of old backhoes, drove up to Alaska, and started digging for gold. Of course, none of them know anything about mining.
Anyhow, great show, highly recommended for those non-sciencing times.
Monday, December 13, 2010
predictions, FTW!
In grade school, we were all taught the scientific method, right?
The idea is that you observe something, which leads you to a thought about how it works; that thought suggests other things that might be true, which you can then go and look for, making more observations; and the cycle continues.
All too often, though, it becomes very hard to "have the thought about how it works" (that is, to come up with a compelling and parsimonious theory), and then the next stage, predicting observations that should hold if your theory is correct, falls by the wayside.
In a really compelling paper, Peter Lipton explains why it's not okay to just bypass this seemingly hard step (theorizing).
His basic argument is that, once you have all the facts, you can contort a theory in any way you want to get it to fit all the data. So the fact that the theory agrees with data is not necessarily impressive.
However, if you are predicting the results of experiments that haven't been done yet, you don't have that luxury, which forces the predictions to be justified by firm logic rather than by "it fits the data" (because the data haven't been collected yet!).
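As a toy illustration of that asymmetry (my own, not an example from Lipton's paper, with an invented "true law" and noise level), compare a maximally flexible curve that has been contorted to fit every existing data point against a simple model, and then score both on data "collected" only afterwards:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_experiment(x):
    # An invented underlying law plus measurement noise, purely for illustration.
    return 2.0 * x + 1.0 + rng.normal(0.0, 0.3, size=x.shape)

x_old = np.linspace(0.0, 1.0, 10)            # data already in hand
y_old = run_experiment(x_old)

contorted = np.polyfit(x_old, y_old, deg=9)  # flexible enough to fit anything
simple = np.polyfit(x_old, y_old, deg=1)     # a parsimonious theory

x_new = np.linspace(1.1, 2.0, 10)            # experiments not yet done
y_new = run_experiment(x_new)

for name, coeffs in [("contorted fit", contorted), ("simple theory", simple)]:
    err_old = np.mean((np.polyval(coeffs, x_old) - y_old) ** 2)
    err_new = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"{name}: error on existing data {err_old:.3f}, on new data {err_new:.3f}")
```

The contorted fit matches the old data almost perfectly but falls apart on the new measurements, while the simple theory does about equally well on both, which is exactly why a successful prediction is more impressive than agreement arranged after the fact.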
After months of hard work, I have finally succeeded in making some concrete predictions for doable experiments. I will discuss this a bit more when the paper (eventually) comes out, although those who were at my talk today in the Redwood Center already have some idea.
Thursday, December 9, 2010
The play-off beard
Apologies to regular followers of my blog for the relative dearth of content lately, and especially the shortage of pop-science posts.
Today's post will, I'm afraid, continue in that vein: it's about superstition, hockey, and undergrads.
Most hockey fans are familiar with the concept of a play-off beard. Basically, if you are a hockey player, and your team is in the championship rounds of play, you grow a beard. Typically, this is seen as some sort of team bonding (and maybe superstition) thing.
Has anyone noticed that undergrads do the same thing around exam time? I mean, wandering around Berkeley this week, you would swear that it was Stanley Cup season, and that the San Jose Sharks (our "local" NHL team) had just signed thousands of scrawny, glasses-wearing forwards.
This probably has less to do with team bonding (since Berkeley's "official" policy is that undergrad exams are not a team exercise, although I have seen some students attempting to make them more collaborative, often with poor results for all involved), and more to do with sheer laziness.
Whatever the reason, I propose the term "play-off beard" to refer to the general air of unkemptness one sees in undergrads around exam time. Any takers?
Thursday, December 2, 2010
what's in a name?
At my qualifying exam yesterday (the long oral exam you need to pass in order to become a PhD candidate), it became apparent that there is a lot I don't yet know about the anatomy of the brain, although I know enough about my subfield that I passed my exam: yay!
This experience reminded me of the value of factual knowledge, in addition to technical skill.
Now, physicists often make disparaging comments about fields that require lots of factual knowledge. The famous nuclear physicist Ernest Rutherford, for example, once remarked that "all science is either physics or stamp collecting". In some sense, this is a valid criticism of the way biology was approached in the past: our goal as scientists is not simply to identify and name phenomena in the natural world, but rather to seek out elegant, simple explanations for why things are the way they are. For that reason, physicists value theories with simple premises and broad predictive power above all else.
However, at some point, knowledge of the names and properties of all the stamps out there can help to inform theories that simplify that list: maybe there are fundamental properties common to all stamps, or at least to certain classes of stamps, that would be missed if you just ignored all the "stampiness" in the world. And, in such a situation, an arrogant attitude of "if I can't derive it from first principles, it must not be important" is no longer productive.
So far in my work, none of these fine details have been very important, which is probably why I haven't bothered to learn them yet. But, as I move forward, and attempt to make more and more realistic models, there will come a time when I will need to know these things.
So, that is my vow for the next 6 months, which I am putting in writing, so as to force myself to do it. I will learn the functional anatomy of the brain. In particular, I will be able to:
1) Identify, on a diagram of the brain, the major regions (medial temporal lobe, auditory cortex, etc.), and describe (at least briefly) our current knowledge of their functions
2) Describe the different types of cells present in primate cortex, along with how they are identified (i.e., the differences in their appearance under a microscope), how their physiology differs, and how they are connected.