So.... the blogosphere is alight this morning with reports that more intelligent children grow up to consume more alcohol as adults than their less-clever peers.
Some data are presented, and they seem fairly compelling. In particular, the data come from longitudinal studies that follow the same people from childhood into adulthood, surveying them repeatedly along the way.
This result has led a lot of people to speculate wildly about "why smart people drink more", but I think the issue is not quite as cut-and-dried as it is made to seem.
For example, one of the results of the same longitudinal (UNC) study is that alcohol consumption is negatively correlated with academic achievement in high school. So, it's not like drinking makes you smart, or anything. It's also not true that being smart makes you want to drink. If those things were true, you would expect to see high GPA and alcohol consumption positively correlated.
But, smarter kids drink more as adults. And, given the high-school result, it appears to be a late-onset effect (the smart kids don't drink more than everyone else in high school; they wait until they are older).
What's my point? Well, for one thing, a lot of science outlets like to run the flashiest headline they can ("OMG! drinking makes you smart", for example) to get people to read their stuff. And a lot of people browsing the news just skim the headlines to get a quick sense of the relevant information. But this whole process ignores the inherent messiness of scientific results, and it can be very misleading.
So, before you rush out to put your kids in beer-chugging lessons, take a deep breath, and let the hype die down a bit.
Thursday, October 28, 2010
the sound of settling
I've been running a lot of computer simulations lately.
These start in random initial conditions and (eventually) learn image features. When the simulation has figured it all out, its dictionary of features stops changing: we say that the simulation has "converged", or "settled".
This can be, and often is, a long process (several days, up to a few weeks), which is frustrating if you just want to know the answer!
So, what is the sound of a simulation as it settles? Well, it sounds like hope and despair, all set to the gentle hum of the computer's cooling fan. Ben Gibbard would be proud.
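For the curious, here's roughly what that "settling" check looks like in practice. This is a minimal sketch (the function names and the tolerance are mine, not the actual simulation code): keep learning until the dictionary's change between updates drops below some small threshold.

```python
import numpy as np

def has_settled(prev_dict, curr_dict, tol=1e-4):
    """The dictionary has 'settled' when the average change between
    successive learning steps falls below tol."""
    return np.mean(np.abs(curr_dict - prev_dict)) < tol

def run_until_settled(dictionary, learn_step, max_steps=100_000):
    """Outer training loop: learn_step() stands in for whatever update
    rule the simulation applies to each batch of images."""
    for step in range(max_steps):
        new_dictionary = learn_step(dictionary)
        if has_settled(dictionary, new_dictionary):
            print(f"converged after {step + 1} steps")
            return new_dictionary
        dictionary = new_dictionary
    print("hit max_steps without settling (hope and despair, indeed)")
    return dictionary
```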
Sunday, October 24, 2010
who will watch the watchmen?
Thanks to my dad for sending me an excellent article.
For concreteness, the article in question is a popularized discussion of a paper published in PLoS Medicine (a high-profile medical journal) entitled "Why most published research findings are false".
I think that Ioannidis (the author of the PLoS paper) makes some excellent points, but I am more confident in the quality of scientific publications than he is. Should you agree with me? I'll let you judge for yourself, once you understand the idea behind the argument.
So here's the basic idea. As a scientist, you impress funding agencies and hiring committees (and secure yourself a career), at least in part, by publishing in highly selective journals. Those journals only want to publish results that are "surprising" in some way. Now, on their own, both of these things are completely reasonable.
However, combining these properties, you get the result that surprising work is more often published, and has more impact in the scientific community, than does less surprising work. Here's a quick example (from Ioannidis, quoted by Freedman in the Atlantic article) to show how this works, which I modify slightly for my purposes.
The results of most experiments have some intrinsic randomness associated with them. So, if you repeat the experiment a few times, you expect to get a different, but (probably) similar, result each time. If you repeat the experiment enough times, you eventually get an answer that is very different from the norm. If you are repeating the experiment yourself, you know this, and identify the unusual result as being a statistical fluke. When you report your result, you include all of the trials (or even omit the outlier), and the reader has a good knowledge of the typical result, and the variation they can expect. This is good science, and is not a problem.
Now imagine that the experiment is very long and costly to perform, so you only do it once. With high probability, you get the typical (maybe boring) result, and either publish it in a low-ranking journal (where not many people read it), or not at all. However, there is some (maybe small) chance that you will discover something exciting, and will not know that the result is atypical, inasmuch as the result would not occur often if the experiment were repeated many times. If you do get the "exciting" (surprising) result, you publish it in a high-profile journal. Here, as in the first example, you are still not doing anything "wrong" as a scientist. Since the experiment can't be repeated, you can't say if the result is typical or not, but that's how it goes. You just report thoroughly what you observed, how you did the measurement, and any relevant interpretations you made, and leave it to your readers to make responsible use of your results.
But, to save time in wading through the mountains of work being published, most scientists (myself included) start by reading the "important" journals, and don't spend as much time digging through the lesser ones.
Interestingly, the end result for the community seems to be that statistically atypical results have more prominence than do more typical ones. And no one has to do anything overtly "wrong" for it to happen: it's a natural consequence of giving more exposure to more "surprising" research.
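To make that concrete, here's a toy simulation (my own sketch, not Ioannidis' analysis): imagine a thousand labs each run the same one-shot experiment on an effect that isn't actually there, and only the "surprising" outcomes make it into the journals everyone reads.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 labs each run the same experiment once. The true effect is zero,
# so every measurement is pure noise.
n_labs = 1000
measurements = rng.normal(loc=0.0, scale=1.0, size=n_labs)

# A result only counts as "surprising" (and lands in a high-profile journal)
# if it is more than two standard deviations away from zero.
published = measurements[np.abs(measurements) > 2.0]

print(f"typical result, all labs:  mean effect = {measurements.mean():+.2f}")
print(f"published record only:     mean |effect| = {np.abs(published).mean():.2f} "
      f"({len(published)} of {n_labs} labs)")
# The published record consists entirely of large effects, even though the
# typical lab saw nothing interesting at all.
```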
So that's Ioannidis' argument, and it's pretty compelling.
As a theorist, I like to imagine that I am immune from such things (since I don't really do experiments, these "randomness" effects from experimentation don't affect my work in the same way). However, when I sit down to formulate new theories, they are often heavily guided by the observations of experimenters. And I, too, spend more time reading high-profile papers than low-profile ones. So, in some sense, I am as vulnerable as anyone else.
What to do about this? Well, I think we, as a scientific community, should be more willing to publish negative results (i.e., "I didn't see anything interesting happen"), as well as positive ones (i.e., "OMG! It totally turned blue!", or whatever). We should probably also not put such a premium on papers from high-profile journals, especially in terms of what we read to direct our research.
So, this is my mid-October resolution: I will spend more time reading results from low-profile journals, and give those results the same amount of thought that I put into higher-profile ones.
Thursday, October 21, 2010
it pays to be sparse
Today's post will be about sparseness.
The basic idea is that, if you look in my brain while it's processing an image, there will only be a small number of nerve cells active at any time. So, while the input image comes in as millions of numbers (the activity values of all the photoreceptors on my retina), my visual cortex is representing that image in terms of a much smaller number of variables.
This is good for a lot of reasons: it reduces the amount of energy I need to spend on image processing (small number of active neurons means less energy, and my brain takes up a lot of my body's energy budget), reduces the number of values that need to be passed on to the next stage of sensory processing, and it makes the input "simpler".
What do I mean by simpler? Well, on some level, my brain is seeking to "explain" the input image in terms of a (usually small) number of relevant "causes". As an example, my desk right now contains a laptop, a coffee cup, and a picture of my girlfriend. If I want to make behavioral decisions, that's probably enough information for me: I don't need to actively consider all of the messy details of each of those objects, although I can figure them out if I want to.
So, by maintaining a sparse representation, my brain is forcing itself to find the relevant information, while filtering away a lot of the unnecessary details. For this reason, sparseness is one of the most important ideas in all of unsupervised learning.
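For the computationally inclined, here's a minimal sketch of what finding a sparse representation looks like in practice. It uses a generic iterative soft-thresholding scheme with a random stand-in dictionary (not the actual model from down the hall): the input is described as a combination of a few dictionary elements, with most coefficients pushed to exactly zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a 64-pixel "image patch" and an overcomplete dictionary of
# 128 candidate features. (Both are random stand-ins here; in the real
# models the dictionary is what gets learned.)
patch = rng.normal(size=64)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)              # unit-norm dictionary elements

def sparse_code(x, D, lam=1.0, n_iter=200):
    """Find coefficients a with D @ a close to x and most entries of a
    exactly zero (iterative soft-thresholding for the lasso problem)."""
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # safe gradient step size
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)            # gradient of the reconstruction error
        a = a - step * grad
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # shrink toward zero
    return a

a = sparse_code(patch, D)
print(f"{np.count_nonzero(a)} of {a.size} coefficients are active")
```

The parameter lam sets the trade-off: larger values buy a sparser code at the cost of a rougher reconstruction of the patch.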
Indeed, almost every paper published in the last 15 years about coding of sensory inputs boils down to seeking sparse representations of naturalistic stimuli.
The cool thing is that the guys who invented this notion work just down the hall from me. Berkeley FTW!
Monday, October 18, 2010
the mating game
I am back from the neuroscience retreat in Lake Tahoe. I had a lot of fun, and have some new ideas for science. These involve semi-autonomous sensorimotor control systems, and will not be discussed in this blog post.
Both nights of the retreat, the neuro grad students threw a big party for all of us. It was a great opportunity to drink a few beers, and do some networking.
At one of said parties, I was discussing some recent work I did on escape decisions for prey animals with imperfect information, and my colleague inquired about whether or not I had considered the issue of mating opportunities with imperfect information.
That question is the topic of this blog post.
Imagine that you are a lady-deer (doe), and that it's mating season. You will be in heat for 10 days, after which it's too late for you (you have to wait until next year to mate).
Imagine that you get to mate once and only once this mating season and that, each day, you get the chance to inspect one randomly selected man-deer (buck), and choose whether or not to mate with him. Also imagine that you can assess the quality of the man-deer from your interaction, and that not all men-deer are equal (some are better potential mates). What selection strategy can you use to mate with the best possible male, and how does that strategy change as the season progresses?
I think the answer is pretty simple, and we can figure it out by working backwards from the last day. On the last day, you should mate with whatever male you see, because it is your last chance to mate (and even a poor quality mating opportunity is better than none at all, right?!).
On the second-to-last day, you should mate with the male if he is better than average (in other words, better than the expected quality of the male you will see the next day).
On the third-to-last day, you should mate with the male if he is better than 2/3 of the population. To be more rigorous, I would say "mate if the male is better than the expectation value of the maximum quality of two randomly selected males", but the 2/3 rule is fine for our current purposes.
Clearly, with more time left in the mating season, we can afford to be more selective.
Formally, I think the optimal strategy is "mate with the male if he is better than the expected maximum quality of n randomly selected males, where n is the number of days remaining in the mating season after today."
I suspect that this result is both easy to prove, and that it has probably already been done by someone (although I am too lazy to find out by whom).
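Here's a quick numerical check of the backward-induction reasoning (my own sketch, assuming male quality is uniform between 0 and 1): with k days left, you accept today's male only if he beats the expected payoff of waiting and playing optimally over the remaining k - 1 days.

```python
import numpy as np

rng = np.random.default_rng(0)
quality = rng.uniform(0.0, 1.0, size=1_000_000)   # male quality ~ Uniform(0, 1)

# value[k] = expected quality you end up with if you play optimally with
# k days remaining. With 1 day left you take whoever shows up; with k days
# left you accept today's male only if he beats value[k - 1].
days = 10
value = [0.0]                                      # value[0]: season over, payoff 0
for k in range(1, days + 1):
    threshold = value[k - 1]
    value.append(np.mean(np.maximum(quality, threshold)))

for k in range(1, days + 1):
    print(f"{k:2d} day(s) left: accept if quality > {value[k - 1]:.3f} "
          f"(expected payoff {value[k]:.3f})")
```

The thresholds this prints are in the same spirit as the "better than n random males" rule of thumb above, though not exactly equal to it, since on the remaining days you still have to judge one male at a time rather than picking the best of the whole group in hindsight.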
Anyhow, next time you are people-watching at a club, and you see people pairing up with strangers, look at the clock, calculate how long until "last call", and consider the subtle mathematics behind "the mating game."
Thursday, October 14, 2010
Retreat!
Today will be a rare two-post day.
Tomorrow, all the neuroscientists (myself included) are off to Lake Tahoe for our annual neuro retreat.
What, you might ask, does one do at a neuroscience retreat? Well, it's kind of like what you would get if a frat party mated with a science conference and their children lived in the woods.
We'll present the stuff we've been working on to the other Berkeley neuro people, drink a few beers, maybe go swimming or canoeing (or whatever).
Should be good times. Expect a more thorough post next week.
It pays to be submissive
It pays to be submissive... of fellowship and scholarship applications, that is.
Most students apply for some form of scholarship, bursary, or fellowship at some point in their lives. Often, they apply because: a) someone told them they would be a good candidate, or b) they read the qualities for which the award is given and thought they'd be a good match.
Those are excellent reasons to apply for stuff, but simply using a) and b) as your criteria for what to apply for results in a lot of missed opportunities.
When I was an undergrad, I was pretty shameless in applying for every scrap of money for which I wasn't explicitly ineligible (as a white male, some of the "for visible minorities", etc. awards were just not gonna go to me). In fact, the University Women's club of Vancouver gives out an annual scholarship (several thousand dollars), for which the criteria don't explicitly state that the recipient must be female. I applied, and subsequently won the award!
Here is my point, and some A+ advice for academic success: often the cost of applying for stuff (in terms of time and effort) is very low compared to the benefit that you get if you win (e.g., 30 minutes of work to apply for a $5,000 scholarship works out to a $10,000-per-hour rate!). On those grounds alone, you should apply for anything you have even a remote chance of winning.
But there's another, potentially more important, effect that I like to call the "cash snowball". You see, most awards you apply for ask you to list the other awards you've won. And most committees look at that list and use it to decide how "good" you are. So, if you've won lots of stuff, you will tend to be more successful in winning future stuff.
I suspect that this trend still holds, even if the committee has never heard of the awards on your list. So, they don't know how prestigious (or not) the award was: they just know that someone else thought you were a winner.
So, applying for lots of (even un-prestigious) awards early on in your academic career can be a solid way to set yourself up for future success. It's not a guaranteed strategy for success, but it sure can help.
I suspect the same is true for non-academics: since the cost of looking for a better job is low compared to the value of having a better job, it is probably a good idea to always keep your eyes open for new opportunities and to be shameless in pursuing them.
I will refrain from giving relationship advice.
Tuesday, October 12, 2010
you gotta know when to fold 'em
I'm a scientist.
By definition that means that I am always trying to do things that have never been done before, and may not be possible. Sometimes, that impossibility is bound to creep up on me.
In fact, the more interesting the research question is, the more likely it is that it's not solvable (because, if it's interesting and possible to solve, it's likely that someone will already have solved it).
The problem is that it's very very rarely obvious that a problem is actually not solvable. There's always the chance that, if I only had some new insight, or was a little smarter, I could figure out whatever it is that I'm toiling over. And, once I've sunk months into some question, it gets tough to just jump ship and move on.
For some good advice on this issue, I turn to country music singer Kenny Rogers. The real question is, how do you know when to walk away, and when to run? Unfortunately, Kenny can't answer that question, and neither can I.
Friday, October 8, 2010
computer codes killed the analytical math star
I'm an awkward code writer and I ain't gonna lie, but I'll be damned if that means that I ain't gonna try
When I started university, I had no idea how to write code, and I was sure that I didn't really want to. But, the SFU physics department required me to take a programming class in order to get my degree (and many years later, I'm glad they did!).
When I was first taught to write code, I understood how to do it, but saw it as something that was probably not necessary for my career. It was just a hoop to jump through en route to getting a degree.
My first summer research job was in a materials chemistry lab. I spent my days mixing chemicals, etc. That experience strengthened my conviction that computer programming wasn't necessary.
My next summer research job was in a nuclear physics lab. Most of what I actually accomplished that summer was writing a computer program to simulate reactions in the apparatus. I was glad that I knew how to program computers, but was still pretty sure that this was a one-time hassle.
Since then, I've worked in particle physics, astrophysics, and now theoretical neuroscience. In all of these fields, most of my day-to-day activities have revolved around writing code to analyze data, or to simulate complicated math problems.
I'm still not great at coding, and I don't love writing code (although I like it more than I used to!), but I do love the power of being able to solve mathematical problems that are so complex I'd have no hope of solving them by hand.
I guess I'm in this code writing thing for life.
To the young kids out there eager to be physicists, I suggest that you learn to be an expert computer programmer. In fact, learn to love programming. It'll make things much easier for you down the road.
Wednesday, October 6, 2010
nice guys finish last, sort of
In this paper, the authors considered the problem of a group of bacteria living together. The bacteria can make proteins that are needed for metabolizing sugars but cost energy to produce ("cooperators"), or they can make none and simply use the proteins secreted by the bacteria around them ("cheats"). The authors then did experiments to work out which balance of cooperators and cheats allowed the population to grow the fastest.
The result is quite surprising: adding some cheats makes the population grow faster than a population of all cooperators.
Essentially, what happens is that, when there is lots of protein around, the cooperators have plenty of sugar, so they slow down protein production. When there's a shortage of sugar, however, the cooperators produce more of the proteins.
Adding some cheats to the population reduces the sugar supply, driving the cooperators to produce more of the proteins, allowing the population to get more sugar, and thus to grow faster.
This is a very interesting result, and may tell us a lot about group dynamics in competitive-cooperative environments.
Tuesday, October 5, 2010
a dirty free-for-all
This past weekend was the annual Sonoma county harvest fair.
H. and I drove up Sunday morning for a relaxing day of rural pursuits, including tastings of the winning wines from the harvest fair wine competition (btw. the Stryker cab. was brilliant!), a sheep-herding contest, "llamas of wine country" (I kid you not), and the world championship grape stomp competition. That competition will be the focus of this blog post.
The contest itself is pretty simple. Each team consists of one stomper (who stands in a barrel full of grapes, and mashes them with their bare feet), and one person whose job is to collect the juice (with their bare hands). Each team has 30 lbs of grapes, and 3 minutes to collect the most juice. When you enter, you first compete in a qualifying round, and the winning team from each qualifier moves on to the final. The winner of the final gets $1000, and some plane tickets. Pretty straight-forward, right?
Well, the hole through which one attempts to extract the juice is several inches above the bottom of the barrel, so the juice collection is a bit tricky. It turns out that it's quite straightforward to mash all the grapes (and that takes about 30 seconds), so the efficient collecting of juice is what really determines the winner.
Heather and I had the misfortune of competing against the defending champions (from 2004,2006,2008, and 2009) in the qualifier, and thusly did not advance to the finals. However, our experience in the contest (and watching a few of the rounds after ours) gave me some ideas on how to improve our juice-collecting.
In essence, the stomper needs to create a standing wave inside the barrel, with a maximum located right at the hole. That way, there's always juice pushing through the hole. The collector then just needs to keep the hole from getting clogged with peels (and possibly use their hands to help maintain this wave).
This may require some practice, but we've still got 364 days until next year's championship. Now, back to science.