Take an African animal and put it in Norway, or Seattle. Do you think it would be happy in that climate? But that's us. Growing up in Pennsylvania, I was familiar with the long summer days and long winter nights of temperate regions - but during the week between Christmas and New Year on my first visit to the U.K., standing in Regent's Park in London and looking at the tired glow of the sun through the clouds, barely shoulder-high on the horizon at 11 in the morning, I started to realize why humans are so eager to reduce their latitudes.
In the U.S., much of which is at sunnier latitudes, estimates of sufferers of clinical seasonal affective disorder or just the winter blues run 10-15%. There are also claims that alcoholism correlates positively with latitude.
So I was not surprised when I looked at a sunshine map of the U.S., and saw a stripe of cloudiness along the Appalachians that matched (somewhat) with a portion of the unhappy stripe in a recent happiness survey:
The unhappy stretch extends from the west side of the Appalachians in Tennessee and Kentucky all the way through Arkansas and Oklahoma, which aren't as gloomy. I often point out to whiners in the Pacific Northwest that they aren't any cloudier than the poor folks in West Virginia and central/western PA.
Oddly enough, I noticed earlier that the unhappy stripe correlated even better with voting Republican for President in the 2008 U.S. elections.
Wednesday, July 29, 2009
Tuesday, July 28, 2009
Institutions and Values Matter
This evening I was playing around with cost of living, quality of life, latitude and average temperature numbers. Basically I was investigating the idea that countries at very high or low latitudes had a worse quality-per-cost ratio than those in temperate regions; that is to say, sure, maybe Norway has a higher quality of life, but it costs a lot more to live there than Spain, and do you get that much more? There is a lot of forced investment in infrastructure because of the marginal environment that they don't have to worry quite as much about in Spain.
I found only extremely weak connections, so I won't bother posting the data and graphs. But this isn't the first time I've looked for such connections. Whenever I've looked for a link between some economic or social indicator on one hand and a non-human aspect of a country's real estate on the other (latitude, resources, climate), I find either no relationship or one that can't be separated from the confounding facts of history, like the inheritance of certain values and institutions, particularly from Enlightenment Europe as it was colonizing the West. In fact, the first question on examining such relationships is whether we should even try to separate the trend from history.
More and more I find myself siding with the development experts who say that it's the institutions of a country that matter more than anything else - not its resources or climate, temporarily wealthy petro-theocracies notwithstanding. One assumes that the values of the people have to support such institutions. As the U.S. is learning, you can't just drop a democracy onto a culture that, even before the Hussein regime, had no history of open discourse and personal responsibility. In fact I would argue that Indo-European cultures in general have at their root a value of parliamentary decision-making and openness that is rare elsewhere; why else would the world's oldest parliament, the Althing, have appeared in Norse Iceland in the guts of the brutal Middle Ages?
Another way of emphasizing the importance of institutions, and underlying values, is in traditional economic terms: it's the labor and not the land that makes life better and generates wealth. This shouldn't be surprising; it tracks the development of technology, which continually increases the potential productivity of human beings and their power to shape their environment. The last school of economics that discounted the role of labor entirely was the physiocrats in the eighteenth century, and no one has been able to make that mistake since. This can also explain the (near) disappearance of slavery, from a purely cynical economic standpoint. Three thousand years ago, there wasn't a whole lot more you could contribute as a scribe or farmer than you could as a slave. By the nineteenth century, the institution had shrunk enough in importance that moral concerns could override whatever clout the related industries retained, first in England, and later in the U.S. In 2009, from an economic standpoint, the idea of forcibly feeding and housing a person so they can pick plants instead of voluntarily building a better microchip seems patently absurd. In 2109 it will seem even more so.
The question remains of how to measure the goodness of institutions without the tautology of just saying that whatever raises per capita income and gross national happiness must be good. Measuring values would be trickier still. How to encourage values that support good institutions - and therefore the elimination of misery - is the most important and difficult question of all.
Friday, July 24, 2009
Proposal: Adopt a Universal Phonetic Alphabet Based on Roman Characters
See my original post at halfbakery.com, which includes follow-up comments.
The benefits of literate people around the world being able to communicate, regardless of spoken language, are obvious. When building a writing system, there are two possible approaches.
1) Use symbols based on meaning. Such systems necessarily have a lot of symbols (in the thousands); Chinese uses this strategy.
2) Use phonetic values (an alphabet or syllabary). English uses this approach, as do many languages written in non-Roman scripts; the number of symbols is often substantially less than 100.
I propose an alphabet, rather than a system of ideograms, and specifically a phonetic version of the Roman alphabet, because
a) well over half of humans live in countries where the Roman alphabet has official or co-official status (3.8 billion)
b) alphabets are easier to learn (if you are a first-language Chinese-speaker and need 3,000 characters to read a newspaper, how hard can it be to learn 30 more?)
The main benefit will be facilitation of second-language learning, rather than universal communication (any alphabet-only reader learning Japanese or Chinese can attest to this). Because different languages of course have different sounds, an expanded Roman alphabet could be used (perhaps mimicking the International Phonetic Alphabet). That would also be fairer, since everyone, even people who already read Roman characters, would have to learn a somewhat new system.
Someone posted a proposal here that all languages be written in ideograms. Beyond the difficulty of teaching ideograms to alphabet-readers, it's very difficult to adopt these symbols between languages, given differences in word order and grammar. The best-known example, Japanese, uses a kludgey system of Chinese characters with home-grown syllabary characters scotch-taping them together within Japanese grammar; and the ideograms drift from their original meanings anyway, defeating the purpose of adopting such a difficult system.
The problem of implementation is first and foremost a political one, of convincing the Chinese and Arabic-speaking governments to educate their citizenry to be at least bi-scriptural. But Turkey did exactly this, and without that change it's doubtful there would even be an argument today over whether it could join the EU. [Added later: it turns out that there were serious proposals in Meiji Japan to Romanize Japanese; they would have beaten Ataturk to the punch. There are actually books from that era written in Romaji Japanese. A study of why it didn't catch on would be informative to this proposal.]
A Proposal: Compile Constitutions in Programming Language So They're Consistent
This was originally posted at halfbakery.com, where you can see the follow-up comments.
Most democracies have largely secular, rational, post-Enlightenment systems of government whose continued operation depends neither on gun barrels nor on arguments from authority. However, because of the advance of technology, the laws these governments pass (and the ways they can operate) will keep running into situations that their founders couldn't possibly have anticipated.
Currently many of these problems are solved by court rulings, which establish precedents. These precedents accumulate until there are layers upon layers, not all of them consistent with each other. Laws passed by legislative bodies can also take the form of an inconsistent patchwork that fails to take into account what went before.
By writing a constitution in a logical programming language that generates new laws and automatically checks for internal conflicts, these inefficiencies and inconsistencies can be avoided. Governments would become much truer to the ideal of being made of laws, and not of men.
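A minimal sketch, in Python, of the kind of automatic conflict check this imagines. Everything here is invented for illustration - the propositions, the "articles", and the brute-force satisfiability test - and a real system would use a proper logic language or solver, but the idea is the same: encode each provision as a constraint over shared propositions and ask whether any state of affairs satisfies all of them at once.

from itertools import product

# Illustrative only: toy propositions and provisions, not any real constitution.
PROPOSITIONS = ["may_collect_tax", "budget_approved", "emergency_declared"]

PROVISIONS = {
    # Taxes may only be collected if a budget has been approved.
    "Article I": lambda w: (not w["may_collect_tax"]) or w["budget_approved"],
    # A declared emergency requires the power to collect taxes.
    "Article II": lambda w: (not w["emergency_declared"]) or w["may_collect_tax"],
    # A later amendment forbids tax collection outright.
    "Amendment 1": lambda w: not w["may_collect_tax"],
    # A statute asserting that a state of emergency is in force.
    "Emergency Act": lambda w: w["emergency_declared"],
}

def find_consistent_world(provisions, propositions):
    """Return a truth assignment satisfying every provision, or None if they conflict."""
    for values in product([False, True], repeat=len(propositions)):
        world = dict(zip(propositions, values))
        if all(rule(world) for rule in provisions.values()):
            return world
    return None

witness = find_consistent_world(PROVISIONS, PROPOSITIONS)
if witness is None:
    print("Conflict: no state of affairs satisfies all provisions at once.")
else:
    print("Provisions are mutually consistent, e.g.:", witness)

Run as written, this reports a conflict: the Emergency Act and Article II together require the tax power that Amendment 1 forbids - exactly the kind of latent inconsistency a human-maintained patchwork of precedents can hide for years.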
Labels:
constitution law reason computer
Saturday, July 18, 2009
Inflection Points in History: 1965 and 1990
It's tempting to try to find a point in time when an old zeitgeist fled and a new one took over. Anyone who does this in print must recognize that they're generalizing. After all, even in periods of real tumult, the zeitgeist is really just a constellation of attention-grabbing characteristics that mostly move independently of one another. Art history example: the Renaissance is widely considered to have become the Mannerist period by the time Michelangelo began work on the Last Judgment in 1537. But can we find a transitional work and point at emerging themes and say, here, this is the inflection point? Doing this with culture is not quite as easy as in biology, where there must be a clear linear descent.
Defining an age and trying to find its joints is necessarily a sloppy business, but this doesn't stop us. John McWhorter is a linguist, formerly of UC Berkeley and now with the Manhattan Institute, who wrote Doing Our Own Thing: The Degradation of Language and Music and Why We Should, Like, Care. McWhorter is a lover of the English language and the pages of this book mostly take the form of an elegy for a formal style of rhetoric (or really, the existence of rhetoric as such) that has passed into history in the United States, evidenced by the lesser demands placed on modern music and public speech. Mass media provide sensible landmarks of public taste for these kinds of discussions because they're a shared experience. McWhorter expounds on why the shift might have occurred and repeatedly comes back to 1965 as the inflection point, going so far as to find a "transition species", a so-called cultural archaeopteryx, in a live performance by Sammy Davis Jr. that had one foot in the old, formal style and one in the new, structureless, self-indulgent informality. McWhorter argues that a host of values and attitudes shifted along with this sharply punctuated 1965 transition.
Thoughtful people interested in the cultural changes of their country (and where they fit into it) can't help but find these speculations engaging. Probably the most famous treatment of the shifting of attitudes is Strauss and Howe's Generations. They attempt to explain history in cyclic terms with 4 recurring generational types, each determined by the nurturing patterns of the previous generation.
Without addressing Strauss and Howe's generational types, I've often speculated that a more recent and perhaps less profound cultural transition took place around my own coming of age, and there's a link to McWhorter's 1965. The 1980s in the U.S. - when I passed from kindergarten to tenth grade - in retrospect seem an oddly conservative island, a repeat of the 50s sandwiched between the era of disco, drugs and Vietnam on one side and grunge and the early internet on the other. Why? The kids of the post-1965, post-formal generation weren't yet out in the world on their own, independently interacting with it and spreading those post-formal values. If you got married in 1963 and had your first kid in 1965, she would have started college in 1983, carrying forward her parents' pre-1965-transition values. On the other hand, if you met at Woodstock and had your first kid two years later, she would start college in 1989.
What Happened in 1990?
My inspiration to collapse my thoughts into this blog post was a post on Andrew Sullivan's blog showing a sharp positive change in public perception of gay people in 1990. On this specific topic, try watching a few "socially conscious" 1980s movies that wear their values on their sleeve; they're recent enough that you expect their values to be the same as yours, but they're not. (The same argument can be made for why I am annoyed by the characters' values as they relate to gender roles in Bronte and Austen novels in a way that I am not by, say, Chaucer.) But the pro-gay attitude shift is just a canary in the coal mine. In the early 90s, suddenly kids were growing their hair long en masse again and wearing lots of black and talking about conformity, there was loud angry music and grunge everywhere, and pot use skyrocketed. Coincidence? Or the coordinated coming of age of the first post-formal generation's kids?
There's no one archaeopteryx for the 1990 shift, which in any event wasn't quite as dramatic as 1965, but here are a few: 1989 had Batman, which celebrated "dark" (new to mainstream American film audiences); 1990's Dances with Wolves had the first naturalistic and positive treatment of Native Americans (imagine that in 1985!); and 1991's Terminator 2 showed us the badly-behaved punk kid (the young John Connor) whose criminal sensibilities end up serving him well. Imagine a pubescent criminal as protagonist and hero on the mainstream big screen even in 1988. Musically, in rock, late-80s thrash (underground, with no airtime and little MTV exposure) gave way fully to grunge by 1992 (on MTV you couldn't get away from it).
Is It a 25-Year Cycle?
If the pattern is real, then we're due in 2015 for another shift. But I have my doubts that it will remain cohesive. The use of mass media as milestones is becoming potentially problematic, because the way we consume media (and create it) has changed so much. On the other hand, it's the spread of values that shapes these shifts, and thanks to technology that process has never moved so quickly. Unfortunately I can't make a prediction because I don't have a sense of which values will carry over from the early 90s and dominate the new zeitgeist, just as it would have been difficult in 1989 to make the same kind of call. Check back in 2017; by then such a shift should be obvious.
[Added later: Razib Khan separately notes an inflection point also in 1990 for another sexual more, black-white dating.]
High Growth Rates: Nature Abhors a Vacuum
When confronted with China's recent brilliant growth rates, a cynic might say China had an unfair advantage: it had room to grow. That is, it's easy to grow your GDP at 6.46% annually since 1980 if you start out with a per capita income of US$305.46. Labor is cheap, you have no legacy infrastructure to deal with, and your exports are extremely competitive. Of course, this glosses over the fact that there are lots of countries with low PCI, and not all of them grow at such robust rates - but let's come back to that. A European also once challenged me with the claim that the U.S. grew slightly faster than Europe not because of any decision we've made to embrace free markets, but because of our good land and wide open spaces, which are cheaper to develop. Because of population density rather than PCI, we have room to grow.
This interested me, so I pulled together some IMF and UN figures for 179 countries and territories; most growth rates are annualized since 1980. First let's look at population density's relationship to income growth, if any (source for population and area data here and here resp.) For viewability purposes, the scatter plot below excludes the 11 most densely populated countries/territories (Singapore, Hong Kong, Malta, Bangladesh, Bahrain, Maldives, Taiwan, Mauritius, Barbados, Korea, and Lebanon, all > 400 people/km^2).
The red circle contains 18 countries, all of which have had at least 10% annual growth since 1992: Armenia, Kazakhstan, Estonia, Latvia, Lithuania, Turkmenistan, Bosnia and Herzegovina, Equatorial Guinea, Russia, Azerbaijan, Belarus, Tajikistan, Cambodia, Ukraine, Slovakia, Moldova, Czech Republic, and Croatia (Bosnia-Herzegovina data since 1994, Equatorial Guinea data since 1980, Cambodia data since 1986). There's a trend there: of these 18 countries, fully 17 transitioned from a closed communist economy in the last decade; 16 of 18 were Soviet satellites. The trend on display is the effect of markets, not low population density. Not the effect I was looking to call out, but interesting that it's so apparent here.
It's worth pointing out that, for the 12 countries that grew at greater than 15%, the average density was 45/km^2; for the rest that grew at less than 15%, the average population density was 218/km^2. The same statistics using 10% as the break point are 62/km^2 for 18 countries >10% growth and 222 for the rest. Breaking the other way, the 58 countries with greater than 120 people/km^2 density grew at 4.15%; the rest that have less than 120 people/km^2 grew at 4.34%. There does seem to be some effect. (These figures include the 11 densest countries taken out of the scatter plot.)
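For what it's worth, these breakpoint averages are simple to reproduce once the country table is in hand. The sketch below (in Python) shows the kind of calculation involved; the three records are placeholders rather than the actual IMF/UN figures, and the same split-and-average applies to the PCI comparisons further down.

# Placeholder records standing in for the 179-country IMF/UN dataset.
countries = [
    {"name": "Exampleland", "growth_pct": 12.3, "density_per_km2": 38.0},
    {"name": "Samplestan", "growth_pct": 3.1, "density_per_km2": 240.0},
    {"name": "Testonia", "growth_pct": -1.4, "density_per_km2": 95.0},
    # ...and so on for the rest of the dataset.
]

def mean(xs):
    return sum(xs) / len(xs) if xs else float("nan")

def split_average(records, split_key, threshold, value_key):
    """Average value_key for records above vs. at-or-below threshold on split_key."""
    above = [r[value_key] for r in records if r[split_key] > threshold]
    below = [r[value_key] for r in records if r[split_key] <= threshold]
    return mean(above), mean(below)

# Average density for >15% growers vs. the rest, then the same at 10%:
print(split_average(countries, "growth_pct", 15.0, "density_per_km2"))
print(split_average(countries, "growth_pct", 10.0, "density_per_km2"))
# Breaking the other way: average growth for dense vs. sparse countries.
print(split_average(countries, "density_per_km2", 120.0, "growth_pct"))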
The picture for growth rate and PCI could fool you into thinking it's some sort of normal distribution, but it's not. PCI is taken from 1999 for all countries because it was the first year that IMF had data for all 179 countries.
Interestingly, the vast majority of high-PCI countries have a middle-road growth rate of around 5%. Low PCI countries are more widely distributed. The 18 countries with growth above 10% have average PCI of US$1,836.08; those with growth less than 10%, US$6,505.82. Then again, the 32 countries with negative growth rates clocked in at average PCI US$1,703.99, vs. positive growers at US$6,979.31. Breaking the other way, those with 1999 PCI below US$5,000 had average growth of 4.17% (131 countries), while those above US$5,000 PCI had average growth of 4.55%. Of course, again the low PCI, high growth countries were all ex-communist but one. Who are the low PCI low-growth countries (i.e. < $5,000 PCI and negative growth)? Georgia, Congo-Zaire, Ghana, Mongolia, Yemen, Niger, Sierra Leone, Burundi, Eritrea, Madagascar, Papua New Guinea, Nigeria, Ivory Coast, Gambia, Solomon Islands, Namibia, Zambia, Myanmar, Malawi, Paraguay, Syria, Togo, Uganda, Ethiopia, Suriname, Guyana, Rwanda, and Guinea. This is a grab-bag, but many of the countries were victims of civil wars (10 of 28) and a few resource-cursed ones.
There is a weak inverse correlation between growth and both population density and per capita income, although it is swamped by the signal from the post-communist states. The lesson here? Those states were left with strong institutions, which visibly benefit them (particularly in the case of the growth-PCI comparison). So perhaps it's true: China's low initial PCI and its strong institutions, as well as the U.S.'s open spaces, are both advantages to growth.
Labels:
PCI growth population density
Friday, July 17, 2009
Mexico's Resource Curse: The United States
"¡Pobre Mexico! ¡Tan lejos de Dios y tan cerca de los Estados Unidos!"
– Porfirio Diaz
To a developing country, a long border with a wealthy, industrialized neighbor might seem like a blessing. But we have at least one pairing where this is anything but obvious: Mexico and the United States.
There are other cases where a national asset that seems on its face to be a big advantage turns out to be anything but; the famous example is the resource curse. You would think that a developing country fortunate enough to be sitting on mineral wealth would be able to use that wealth to its advantage – especially if it's oil. Nigeria is the textbook case.
These countries experience a vicious cycle of incredibly corrupt juntas uninterested in developing other industries or indeed doing anything except pocketing the proceeds from mineral extraction being conducted by foreign companies. In these institutionless post-colonial kleptocracies, the only options for the ambitious are to get out, or try to get in on the next coup. It's hard to believe that these governments could keep the figurative lights on for even one minute after the oil and diamonds stopped coming out of the ground. These are failed states with an allowance. They're Somalias waiting to happen.
What started me thinking about this were the recent arguments I've seen from several quarters that Mexico is increasingly showing alarming characteristics of a failed state. Combine this speculation with the interesting observation that the U.S.-Mexican border is easily the longest one in the world between a developing and an industrialized country, and you may begin to wonder if there's a connection. If Mexico had started off with a decent-sized middle class and relatively transparent institutions, it might have joined in the ongoing growth that the Anglophone parts of the continent have enjoyed since the industrial revolution.
Among the generally agreed upon characteristics of failed states are these: economic decline; loss of control of and security in their territory; deterioration of basic services; inability to make and execute decisions, in both domestic and international arenas. I don't think Mexico is there yet. It has elections, the lights shine and the toilets flush, and it's a productive member of the international community. It's even tied for 72nd out of 180 in Transparency International's 2008 index – not great, but pretty good for a supposedly failing state. Still, the increasingly brazen coordinated paramilitary attacks on the police are a bad sign. Mexico is, in fact, losing control of and security in large stretches of its territory. Where? The states closest to the U.S. To whom? Drug cartels. Coincidence?
The truth is that Mexico is resource-cursed, and the resource is drug-consuming Americans. To be more accurate, the resource is drugs and the market is the U.S., but the situation is in some ways worse than in Africa's resource-cursed states. Imagine that Nigeria had a long land border with the EU. Now imagine that oil is outlawed as a result of global-warming legislation. The oil would still flow north - but it would become contraband, and the trade would be an entirely criminal activity. Because the drug trade is outlawed internationally, the Mexican government (unlike Nigeria's) can't openly profit from it the way Nigeria's does from legal oil - in Nigeria, even the graft and kickbacks are parasitized from activities that are at least legal in and of themselves. So the business becomes the domain of paramilitary drug cartels and some corrupt officials who allow them to flourish. It's worth pointing out that, although it doesn't border the United States, Colombia is the other perilously-close-to-failing state in Latin America, though it's improved in recent years. Still, it's had large tracts of its territory not under its control for years on end – and those tracts were controlled by organizations in the same trade as the paramilitary groups in Mexico. And they had the same end market.
A reasonable objection is that there are income disparities across other borders in the world; surely Mexico and the U.S. aren't the only odd couple, yet there are no paramilitary drug groups forming elsewhere. I suspected there were reasons why this didn't happen elsewhere, so I compiled a list of 279 sets of two-country shared land borders, and ordered it in terms of absolute nominal per capita income difference. Out of 279, here are the top ten:
Rank | Country1 | Country2 | PCI1 | PCI2 | PCI Diff |
1 | Austria | Liechtenstein | 50,098 | 145,734 | 95,636 |
2 | Norway | Russia | 95,062 | 11,807 | 83,255 |
3 | Switzerland | Liechtenstein | 67,385 | 145,734 | 78,349 |
4 | Saudi Arabia | Qatar | 19,345 | 93,204 | 73,859 |
5 | Finland | Norway | 51,989 | 95,062 | 43,073 |
6 | Iraq | Kuwait | 2,989 | 45,920 | 42,931 |
7 | Norway | Sweden | 95,062 | 52,790 | 42,272 |
8 | Finland | Russia | 51,989 | 11,807 | 40,182 |
9 | USA | Mexico | 46,859 | 10,235 | 36,624 |
10 | UAE | Oman | 54,607 | 18,988 | 35,619 |
Source: IMF World Economic Outlook Database 2009, except Liechtenstein from CIA World Factbook April 2009. Border ranking process did not include exclaves (e.g. Ceuta, Kaliningrad)
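For anyone who wants to reproduce the ranking, the procedure is just a sort over border pairs by absolute PCI difference. Here is a rough sketch in Python; the handful of entries shown (with the nominal PCI figures from the table above) stand in for the full 279-pair list.

# Sketch of the border ranking: sort land-border pairs by absolute difference
# in nominal per capita income (US$). The entries below are a small subset
# standing in for the full 279-pair list.
pci = {"USA": 46859, "Mexico": 10235, "Norway": 95062,
       "Russia": 11807, "Finland": 51989}

borders = [("USA", "Mexico"), ("Norway", "Russia"),
           ("Finland", "Russia"), ("Finland", "Norway")]

ranked = sorted(borders,
                key=lambda pair: abs(pci[pair[0]] - pci[pair[1]]),
                reverse=True)

for rank, (a, b) in enumerate(ranked, start=1):
    diff = abs(pci[a] - pci[b])
    print(f"{rank}. {a}-{b}: {diff:,}")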
It's immediately interesting that the U.S.-Mexico border pops up as 9th out of 279, and is one of the longest on this top-ten list. Three qualifiers are in order here. a) There is less incentive to engage in risky activities if basic needs are met. An Austrian might know his neighbors are wealthier, but it's different when your PCI is over US$50,000 as opposed to just over US$10,000. If you're comfortable, you're probably less likely to consider running drugs to Liechtenstein. b) Not all borders are as easy to cross (legally or otherwise) as the one between the U.S. and Mexico. Many countries don't have the same freedom of movement (like Russia), many don't have as well-developed a road system as the U.S. or Mexico, and even when a border is about as long as the U.S.-Mexican border (as with Finland and Russia), it may be even less hospitable than the often-mild desert between the U.S. and Mexico. c) Many of the extremes of per capita income reported by the IMF would be flattened if a median calculation were used instead of a mean. Few subjects of the UAE come close to the reported per capita incomes listed here.
What's the solution? I don't anticipate Americans' consumption of drugs will stop any time soon, nor will Mexicans' willingness to supply them; after all, markets are markets. The part of the equation we can control is a choice that we've made which forces the profits from the drug trade underground. That is to say, if the United States decriminalizes, suddenly Mexico's unique resource curse can at least benefit Mexicans and their institutions openly. Sounds like a pie-in-the-sky solution, right? Wrong. One country – Portugal, a civilized EU country no less – has already done exactly this, and "judging by every metric, decriminalization in Portugal has been a resounding success."
Thursday, July 16, 2009
Batting Averages for the U.S. Major Parties
I had earlier calculated some statistics about U.S. presidential elections, specifically about how the Electoral College affected the outcome in terms of the two modern major parties.
I thought it would be interesting to do a quick calculation about time-in-office and number of terms, starting with 1860 (the first year when it was a real Republican-Democrat election).
Since then, counting the current term, 15 out of 38 terms (39%) have been Democratic administrations, and 23 out of 38 (61%) have been Republican. Not counting the current term, the average time in office for a Republican is 4.84 years, vs. 6.98 years for Democrats. Yes, FDR throws it off; but to come down to the GOP average, you have to take out FDR, Truman, and all three two-term Democrats (Clinton, Wilson, and Grover "Mr. Non-Contiguous" Cleveland). To bring the GOP's average up to the Democrats', you have to throw out Garfield, Harding, Ford, Arthur, Johnson, Hayes, Harrison, Taft, Hoover, Bush I, Lincoln, and McKinley. Looking at the list, it struck me that the iconic JFK is the shortest-term Democrat since the Civil War, and the second shortest of either party.
From the start of Lincoln's term until today, the GOP has had 1104 months as president, vs. 821 months for Democrats. If you look at contiguous administrations (whether or not it was the same individual running them), Republicans have an average streak of 2.875 administrations, and Democrats 2. Take away the post-Civil War era and the GOP falls to 2.43; take away FDR and Truman and the Democrats fall to 1.5. One interesting idea would be to look at the same data over the same period for Congressional districts. Besides showing the trend of Democrats getting elected in areas that vote Republican for president, there's more data and therefore less noise.
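The tenure and streak arithmetic is straightforward once the administrations are listed in order. Here is a sketch in Python, with a short, made-up list of (party, years) entries standing in for the full 1861-to-present record.

from itertools import groupby

# Placeholder sequence of administrations as (party, years in office);
# the real calculation would use the full list from 1861 to the present.
administrations = [("R", 4.0), ("R", 8.0), ("D", 8.0), ("R", 4.0), ("D", 4.0)]

def average_tenure(admins, party):
    years = [y for p, y in admins if p == party]
    return sum(years) / len(years) if years else float("nan")

def average_streak(admins, party):
    """Average run length, in consecutive administrations, for one party."""
    runs = [len(list(run)) for p, run in groupby(admins, key=lambda a: a[0])
            if p == party]
    return sum(runs) / len(runs) if runs else float("nan")

for party in ("R", "D"):
    print(party, average_tenure(administrations, party),
          average_streak(administrations, party))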
The interesting thing about looking at the data over time is that it appears to converge. Streak-length appears to moderate too, at the same point, right after Truman (administration names provided below for reference because the X-axis is totally irregular with respect to time). What's interesting about this is that the "brands" the two represent, i.e. the demographics they capture based on their message and political climate of the time, have changed radically over a century and a half. Will these graphs still look this way after another century?
Wednesday, July 15, 2009
When You Hear a New Name or Concept, Do You Often Hear It More than Once?
Have you ever had the experience that you hear of a new person, or a new word or concept, for the first time - and then you hear it again within a relatively short period? I'm not talking about a concept or celebrity that is genuinely new, or even obviously enjoying a revival, but rather a concept that appears not to be making the rounds with any greater frequency and just happens to be new to you. This has happened to me frequently in my life, though of course I can never remember any examples. It seems trivial, but for the meme-minded, it cries out for an explanation. So when it happened to me again today, I said to myself: I have to document this on my blog.
And here's the concept that I was exposed to twice. Last night I watched Confederate States of America, a really clever alternative history movie that deserves to be better-known. It's produced as if you're watching a British documentary from a parallel universe about the Confederate nation that rose from the ashes of the American Civil War, complete with commercial breaks that feature ads for slave-tracking collars. Granted, sometimes the tone is tongue-in-cheek, but overall it's very well-done, and highly recommended.
In the film, a real nineteenth-century physician, Samuel Cartwright, is cited for his "discovery" of drapetomania, a mental disorder exhibited by slaves running away from their masters. Yes, to us it sounds like these escapees were of perfectly sound mind, but the point of studying the topic is that the Cartwrights of the world were and are committing a grave injustice to science and to humanity by giving veneers of scientific respectability to depraved institutions like slavery.
A mere 14 hours later, I was listening to NPR and caught a This American Life story called Pro Se, about legal self-representation, which involved psychiatry. I almost crashed my car when they mentioned Cartwright and drapetomania.
I provide the full context to emphasize that there is no clear connection between the two exposures. That is, there doesn't seem to be an ongoing press campaign about drapetomania, and if there were, I'm impressed that they got me to rent the relevant movie several years after its release, and on the day before the NPR story to boot. If this were the first time such a double-exposure had happened to me, you could rightly accuse me of confirmation bias. But I'm probably better-read than the average bear, so it's not all that frequently that I hear terms in mainstream media outlets that jump out as unfamiliar. If I were ten years old and hearing unfamiliar terms all the time, you'd have more of a point. But this has happened to me enough - over the course of my adult life, several times a year, at least - that confirmation bias becomes a harder argument to make.
Here are the explanations people have offered so far:
1) It's random, but I'm human, so I notice it. If I worked out the numbers, I would see that I hear x new terms per unit time, so there's a small but nonzero chance that, after the first time I hear a term, I hear it again within forty-eight hours. Statistically, if it works out that this would happen several times a year, then it's nothing special (a rough back-of-envelope version of this is sketched after this list).
2) I've heard the term before but didn't notice it until now. I find this one difficult to believe. I've been hearing "drapetomania" on radio and TV occasionally for years, and only now do I pay heed and recognize it as a word I don't understand?
3) It really is nonrandom. This would be easier to believe if (for example) I'd read "drapetomania" in an email from a college friend of mine in Charlotte yesterday, and then overheard someone in a restaurant talking about it today. Maybe my friend is using the word a lot, he knows other people in San Francisco and emailed them, and the meme spreads that way. Sometimes this explanation seems like a plausible candidate, but it's harder to apply to information sources that you consume nonpassively. If you hear the same term on the evening news on three different network affiliates, not so special. If you read it for the first time in a book published in 1919, and then see it in a movie released in 1983 and a newscast the same night, that's special.
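To make explanation #1 concrete, here is a back-of-envelope version in Python. The numbers are invented for illustration, not measured; the point is only that plausible rates can produce a couple of chance double-exposures per year.

import math

# Invented, illustrative rates - not measurements.
new_terms_per_year = 200            # terms genuinely new to me in a year
reappearance_rate_per_day = 0.005   # chance a given rare term crosses my media diet on a day
window_days = 2                     # "heard it again within 48 hours"

# Probability a specific new term reappears within the window, treating
# reappearances as a Poisson process at the rate above.
p_repeat = 1 - math.exp(-reappearance_rate_per_day * window_days)

expected_per_year = new_terms_per_year * p_repeat
print(f"Expected chance double-exposures per year: {expected_per_year:.1f}")

With these made-up rates the answer comes out to about two per year - which is why working out the actual numbers matters before calling the pattern nonrandom.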
Note that now that I've posted this to my blog, if I hear "drapetomania" again in the next few days from friends, I won't know whether I've influenced the spread myself. It's also worth noting that when this has happened in the past, I've sometimes heard the term three or more times in a short period, making the probability of #1 lower.
A more speculative version of #3 above is that there are strange macro-patterns operating in human behavior that we're not yet aware of, and trivial though it is, this is one of them. But without a guess at how the pattern operates that allows me to make falsifiable predictions, this is the same as no explanation at all.
[Original post, 15 July 2009]
[Added 21 December 2009 - the next time I was aware of such an instance was in the past week, when I read about lipograms on Wikipedia. 10 hours later I ran across a mention of lipograms in a James Fallows piece. This one seems more easily explained, since I was reading about lipograms because I had read something on Marginal Revolution, and bloggers and readers of East Coast literary and economic-conservative blogs are a pretty small subset of total meme hosts. Still, it puts a ceiling on the frequency of these events - if you count this one, about 0.2 month^-1.]
[Additional examples: I ran across Rex Wockner for the first time in the last week of December 2009, and then again the same week - first because he mentioned Point Loma while I was searching for the name of a mountain you can see from there, and then in a casual mention on Andrew Sullivan's blog. The first week of January 2010 I twice ran across the term "Phallos" (in reference to East Indian mythology), and twice, within six hours, ran across the term "wattle" in usages unfamiliar to me that had nothing to do with hanging skin (in TiHKAL by Sasha Shulgin, and in the name of a winery).]
Is Wine Uniquely Nuanced Among Fermented Fruit Drinks?
Is there anything special about wine that gives it its depth? Or is it just the millennia of accumulated culture that make it seem special, and fermented apple juice would have been just as promising as a snob drink?
Historical arguments miss the point. Yes, things might have been different if the apple had first been cultivated further west in Georgia (along with the first vines) instead of in Kazakhstan. But if there's something special about fermented grape juice that makes it so neat-o, then how could this have mattered in the long run?
I submit that currently, we have no good reason to think there's anything special or nuanced or detailed about wine, relative to other fruit beverages, that gave it its snob status today. That beer has no such place in the modern West has more to do with European history than with anything about the drinks themselves - it's all about the impact of the Romans, and then of that empire after them which did so much to define dining, the French, versus the smaller and, until the last few centuries, more modest city-states of northern Europe, quietly drinking their beer. Yes, "a few centuries" is a long time, but prestige signals are a giant coordination game, and they change only very slowly. Gold is another good example.
I think that the shifting relative prestige of beer and wine - the prestige values of the two drinks are converging - further weakens the "wine is innately special" argument. Starting in the last few decades, attitudes toward beer even in the legendarily philistine U.S. have begun to change. Beer used to be something you chugged after mowing the lawn and didn't think about much beyond that, but now it's a beverage that is properly the subject of adjudicated festivals. I'd like to give my craft-brewing countrymen some of the credit for that, as well as improved technology that allows what are effectively the centuries-old craft brews of Northern Europe's villages to be enjoyed outside Northern Europe.
Now back to the question of nuance. It's an empirical fact that beer is chemically far more complex than wine - there are around 200 compounds in many beers, versus maybe 20 in wine. It's apparently only in the last decade that beer has been systematically run through chromatography columns to see what's in there, though I have my doubts that this is really true, since chemists are not infrequently also beer enthusiasts. The point is that, if it's nuance you're looking for, beer has it all. Beer is closing the prestige gap, but it's not quite there yet. There are festivals, but men still show off wine knowledge to their dates instead of beer knowledge - yet given the relative chemical richness of the two drinks, it's impossible to argue that wine's appeal is a result of its innate character, as opposed to its history.
Having said all this, I'm just as suckered as anybody by some of the interesting episodes in the history of winemaking. To some degree, I have to be; they'd throw me out of California otherwise. But there really is a lot you can do with it, and for a New World barbarian with the good sense to ignore convention, the possibilities are astounding, if you're innovative and willing to fail now and again, like some winemakers. Commercial winemaking in California really got off the ground with the European phylloxera epidemic in the second half of the nineteenth century, one of the few times a New World organism infected the Old (although there is some concern that Douglas firs, eminently well-suited to marine climates, are today becoming another New-to-Old-World invasive in Europe). The European wine industry almost collapsed, and was saved mainly by grafting European vines onto resistant American rootstock. It's oddly underappreciated that old ungrafted European vines survive mostly on the sandy soils of a few Mediterranean islands, although I've heard rumors that there are isolated purebred Spanish vines growing in monastery gardens in a few mountain valleys of Mexico. Another odd bit of American wine history: one of the earliest successful California wineries, in Fremont, was destroyed in the 1906 quake, and you can still see earthen mounds at the site, covering piles of never-removed debris.
See? That's just one corner of wine history in one state, and cognitive hedonists can't avoid thinking about all that (and enjoying thinking about it) while drinking wine. It becomes part of the experience. But crucially, there's still nothing in any of this that could not have happened in some form with fermented apple juice (or beer). Even if some brilliant genetic engineer were able to make up for three millennia of underdevelopment of apple-wine in one year and develop a fully nuanced 2009 red delicious, you still couldn't take your fiancée on a tour of the vineyard and talk about its history, how the Count of Lyzanxia used to walk there when Magritte came to visit, et cetera. And that accumulated history is critical, because signaling taste and knowledge is where the real game is. Even assuming that there is special nuance in wine, and you happen to be among the gifted few who can tell the difference - if the drinking experience is what matters, the pure sense-pleasure you get from the taste of the wine, why would you care whether others know how goddamn good you are?
The following doesn't address the original question of whether fermented grape juice is a better substrate for nuance, but it does support the signaling hypothesis. A study of members of the Stanford University wine club looked at brain activity in wine drinkers in response to two wines. Unbeknownst to these connoisseurs, they were actually being given the same wine twice, but they were told it was two different wines, one $5 a bottle, the other $45. They reported greater pleasure on drinking the "forty-five-dollar" wine, and their brains lit up in a way consistent with greater pleasure. So they probably weren't even lying; self-deception is an effective strategy if what you're really signaling with your preference for the more expensive wine isn't the taste experience but your own erudition and willingness to conform to fancy-pants values. Then again, the emperor's new clothes are for signaling, not for warmth. Try it yourself! Have a blinded wine-tasting party at your house, and you'll find out how inconsistent people's answers are - in a 12-bottle tasting, the same bottle from the same winery will be given positions 3 and 11. In the end, maybe there are wine tasters who actually know what they're doing. What's clear is that most people who think they know what they're doing don't. One function left intact, even after we write all this off as hedonic wheel-spinning, is signaling, part of which can be accomplished through conspicuous consumption. Boy, those forty-five dollars go down smooth!
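If you do hold that blinded tasting, one low-tech way to score it is to pour one wine twice under different labels and compare each guest's two rankings of it. The snippet below is a rough sketch of that bookkeeping; the guest names, bottle labels, and rankings are invented for illustration.

```python
# Rough sketch for scoring a blinded tasting in which one wine is secretly
# poured twice (here as bottles "D" and "K"). All rankings below are invented.
duplicate_labels = ("D", "K")

# Each guest's ranking of the 12 pours, best (1) to worst (12).
rankings = {
    "guest1": {"A": 4, "B": 1, "C": 7, "D": 3,  "E": 9, "F": 12,
               "G": 2, "H": 6, "I": 8, "J": 10, "K": 11, "L": 5},
    "guest2": {"A": 2, "B": 8, "C": 5, "D": 10, "E": 1, "F": 6,
               "G": 12, "H": 3, "I": 7, "J": 4, "K": 9,  "L": 11},
}

for guest, ranks in rankings.items():
    a, b = (ranks[label] for label in duplicate_labels)
    print(f"{guest}: same wine ranked {a} and {b} (gap of {abs(a - b)})")
# If guests were reliably tasting the wine rather than the label, the gap
# should be small; gaps like 3-vs-11 are the inconsistency described above.
```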
I like wine. I'm not attacking it, really. But so much is written about this one consumable that you can't help but wonder whether there really is more information in a glass of wine than in any other drink produced by natural processes and therefore possessed of a robust chemical composition. In the interest of full disclosure, I freely admit to being a much bigger fan of beer, specifically unfiltered beers (like Belgian ones); enough so that I'm posting this preference on a blog long after it became cool (i.e., a useful signal) to declare it. I have a pretty severe sweet tooth, and I'm probably picking up the simple sugars in these beers. It's probably also why I prefer nigori sake, which is an awesome preference to have, and here's why. Nigori is unfiltered, cheap, poor-man's sake; you can keep your $200-a-bottle bullshit sake to yourself in your no-foreigners bars in Kyoto. (Anti-meta-signal: I'm so refined that I don't need to signal, and I overtly reject your signaling value system. ZING!)
I apply the same dismissal to wine as I do to sake. I've come to the conclusion that intentionally refining one's palate is a form of masochism that any self-respecting hedonist should reject. Why the hell would I deliberately make my palate more difficult to please? By developing your taste, you're intentionally making your marginal unit of pleasure more expensive. If you have a bad case of wine signal-itis and you enjoy announcing to your dining compatriots all the flaws you've found in the wine on the table in front of you, you might put it in perspective this way: if you're American, chances are that your grandparents had wine only on a few special occasions in their lives, and that it was almost certainly disgusting. They'd smack you silly, not only for complaining about a wine that's a little too fruity, but for making it harder on yourself to enjoy eating and drinking. That's why I'm intentionally letting what little refinement I've achieved go fallow, and I automatically order the cheapest table wine on the menu. Or I don't, and get a Coke.
Sunday, July 12, 2009
The Decline of Overtly Signaled Subcultures
"...entire subcultures could rise overnight, thrive
for a dozen weeks, and then vanish utterly." - William Gibson's Neuromancer
The web is full of essays with nostalgic hat tips to the optimistic naivete of 1950s science fiction, but I feel the same way about the naive pessimism of 70s and 80s science fiction - not only cyberpunk but the whole body of sf with the theme of "the future will be disjointed and schizophrenic and incomprehensible and alienating," beginning in the 70s and extending through the 80s, exemplified by works like Shockwave Rider. It's 2009, and guess what? Yes, things move a little quicker; we have a recession and some wars; and on the whole, not many people pine away for the simple 70s - a time when, if you wanted to learn about a company, you had to drive to the library and hope you could learn something from the two printed paragraphs about it you might find after digging through microfilm for three hours.
One prediction of the dark, scatterbrained future that seems to be exactly wrong is the fragmentation of subcultures. When was the last time you saw a kid with a metal shirt on? Compare that to ten years ago. I have a special sensitivity to this issue, because as a reformed (but not retired) metal fan, I've noticed the trend - and it seems to extend to every strongly-signaled subculture (in terms of clothing or speech or hair). I don't think that's a coincidence.
Youth subcultures are about establishing identity - not only in opposition to your parents, but to the rest of the people in your own generation. I never once wished there were more metal kids at my school - I didn't have much contact with the other kids who were into it, and I kind of liked being "special". In fact, at my first Metallica show I had the uncomfortable realization that I was no longer "the metal guy", because suddenly I was surrounded by 10,000 Klingon-looking guys who were indisputably more metal than me.
But what did I get out of the long hair and the scary facial hair and the T-shirts? They signaled my specialness, and (stupidly) as a teenaged male, made me feel good that here, finally, was a way I could make others react - by signaling that "I am a member of this clan of loudness and drinking and aggression." Contrary to one folk belief, most metalheads actually do like the music - it's not just about shocking people, and in fact the music is the only part of the deal that I retain today. But would it have been as much "fun" if I couldn't signal my membership in this subgroup of unknown values to strangers? Of course not.
You can apply these same arguments to any non-mainstream subculture of the 80s or 90s whose members behaved or dressed in such a way as to mark themselves; I think kids do it for the same reason, and I think that signaling has faded for the same reason. That reason is, what else, the internet.
Kids in 2009 don't expect to be able to shock guys my age or older by coloring their hair or putting something on a shirt. They know we have access to the same websites they do; if, driving by a high school one day, you see a weird-looking kid wearing a T-shirt with some incomprehensible expression on it, you go home and Google it, and you say, "Oh, that's all it means." Any subculture that would rise and flourish - or even survive - must be able to do so even after being catalogued and described and compared by the peers and families of the teenagers who would adopt it. References to taboo subjects are irreversibly weakened when you can start reading Wikipedia articles about them. That's why, if you're a teenager now, you know instinctively the futility of trying to establish an incomprehensible artistic and dress code: it'll be on YouTube tomorrow. And in particular, any subculture that tries to build a feeling of power in its members through the intimidation its other-ness creates is doomed to failure ab initio. Know why? Snopes.com. No, kid, your Satan shirt doesn't scare me, because there's never been a real sacrifice. Now go back to gym class and see if you can run a mile in under 10 minutes.
In a way I feel sorry for kids who are 15 right now who, had they been born 20 years earlier, would have been punks and metalheads and goths. But there are still pockets of America where one can view these endangered species. The last time I saw a group of teens in full I'm-not-mainstream regalia during business hours (i.e. T-shirts and black trenchcoats not right outside a concert) was in Cody, Wyoming in August 2008. Go there, would-be Goths and punks and metalheads! Be free!
Wednesday, July 8, 2009
Caesar's Civil War, or Early Heuristics
I just finished reading Caesar's account of the Roman Civil War, which for me was an interesting exercise, first, in trying to read between the lines of what was certainly some level of self-promotion and historical revision; and second, in admiring the clear thinking of the famous people of classical antiquity. Even if it was mostly B.S., it's very well-constructed B.S. that shows a cunning insight into human nature that Kahneman and Tversky would envy.
Next time I'm in Spain or Marseilles or North Africa, I'd love to visit some of the battle sites, but the real value in the work for me is Caesar's occasional incisive and pragmatic asides on human nature and the workings of the universe:
"In war, trivial causes can exert great consequences."
"It is a quirk of human nature that the unfamiliar or the unusual can cause overconfidence or anxiety."
"We are very ready to believe what we want to believe, and expect others to think as we do."