Australia is often used by demographers as an example of a highly urbanized country, but the contrast between Canada and the U.S. is perhaps more interesting, because we're on the same continent. And indeed, a ranked list of the biggest cities of both countries is striking: of the 3 biggest cities in North America, 2 are in Canada. But as you include more cities of decreasing size, the share of Canadian cities in the rank list converges toward about 10% - roughly Canada's share of the two countries' combined population. (Y axis is the cumulative % of Canadian cities in the list of biggest cities in North America, counting down to cities of at least 100,000 inhabitants.) Canada's smaller population is enriched for bigger cities.
Even better than the rank list is the difference in urbanization itself, i.e. the percent of the population living in cities, by city size. And there it's even more striking: Canada's biggest city alone contains 16.2% of the country's population. To reach that level of urbanized population in the U.S., you have to add up the first 60 cities. (Y axis is the % of national population living in cities, counting cities of decreasing size up to that point - again, as far down as cities of 100,000 people.)
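For concreteness, here is a minimal sketch of how both curves could be computed. The file names, column names, and rounded national population figures are mine, not from the original plots; any source of city populations (≥100,000) would do.

```python
import pandas as pd

def load_cities(path, country):
    """Load a CSV of cities (>= 100,000 people) with 'city' and 'population' columns."""
    df = pd.read_csv(path)
    df["country"] = country
    return df[["city", "population", "country"]]

# Hypothetical input files; one row per city, largest to smallest after sorting.
cities = pd.concat([
    load_cities("canada_cities.csv", "Canada"),
    load_cities("us_cities.csv", "US"),
]).sort_values("population", ascending=False).reset_index(drop=True)

# Curve 1: cumulative share of Canadian cities among the top-N North American cities.
cities["rank"] = cities.index + 1
cities["canadian_share"] = (cities["country"] == "Canada").cumsum() / cities["rank"]

# Curve 2: cumulative % of each country's population living in its top-N cities.
national_pop = {"Canada": 35_000_000, "US": 316_000_000}  # rough 2013 figures
for country in national_pop:
    sub = cities[cities["country"] == country]
    cum_share = sub["population"].cumsum() / national_pop[country] * 100
    print(country, cum_share.head(5).round(1).tolist())
```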
Why the difference? Climate? Less of a homesteading policy in Canadian history? Less aggressive property-purchase incentives in the automobile era? Or Canadian agriculture taking off only post-mechanization (and hence never requiring settlement by large numbers of people in the first place), since the climate is harsher? A first step toward distinguishing these alternatives would be to turn that last plot into a 3D surface using historical data and look for obvious inflection points. For the last possibility, the question is whether the prairie provinces have areas losing population the same way the post-mechanization Midwest has in the U.S. (More on agriculture and settlement of central North America here.)
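A rough sketch of what building that historical surface might look like, assuming a hypothetical long-format table of city populations by year and a companion table of national populations; the file layout here is an illustrative guess, not the post's actual data.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical inputs: city populations by year (year, country, city, population)
# and national populations indexed by (country, year).
cities = pd.read_csv("city_populations.csv")
national = pd.read_csv("national_populations.csv", index_col=["country", "year"])["population"]

years = sorted(cities["year"].unique())
max_rank = 60
surface = np.zeros((len(years), max_rank))

for i, year in enumerate(years):
    canada = (cities[(cities["year"] == year) & (cities["country"] == "Canada")]
              .sort_values("population", ascending=False)
              .head(max_rank))
    cum = (canada["population"].cumsum() / national.loc[("Canada", year)] * 100).to_numpy()
    surface[i, :len(cum)] = cum
    if len(cum) < max_rank:  # fewer big cities in early years: carry the last value forward
        surface[i, len(cum):] = cum[-1] if len(cum) else 0.0

# z = cumulative % of Canada's population in its top-N cities; x = rank N, y = year.
X, Y = np.meshgrid(np.arange(1, max_rank + 1), years)
ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(X, Y, surface)
ax.set_xlabel("city rank")
ax.set_ylabel("year")
ax.set_zlabel("cumulative % of national population")
plt.show()
```

Inflection points along the year axis would be the candidates for policy- or technology-driven shifts in settlement.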
Monday, April 22, 2013
Monday, April 15, 2013
The GOP's Push to Change the World at the State Level
There's been a recent, clear push by the GOP to aggressively focus on social conservative issues at the state level.
Sam Brownback's call to do so in Kansas could not have been missed, and quickly we had a restrictive abortion law passed in North Dakota, an abortive tax plan in Louisiana (along with further rumbling about damage to science education), and an attempt to legislate Christianity in North Carolina. Focusing on change at the state level is a smart maneuver for the cast-out GOP, and it's not unprecedented in American political thought. The states (and smaller municipalities) are the laboratories of democracy we have to thank for things like gay marriage, marijuana decriminalization and euthanasia. If we wait for these political footballs to gain traction at the Federal level, they'll never get anywhere.
There are a few incongruities here, not least of which is that so far, the state-level social issues have been those dearest to progressives (and real libertarians); this state-level focus by social conservatives is new - at least since 1861. A glance from the map of seceding states to the map of GOP-voting states in the 2012 election shows an uncanny resemblance, and American libertarians' discussions of "states' rights" are oddly bound up in this history - and serve as a dog whistle to both ends of the political spectrum. And it remains a total mystery to most people (including many libertarians) why it should be acceptable for a non-state organization, or even for certain levels of government, to take your rights, but not for the Federal government. One piece in Reason several years ago, cited here, calls this "inverse state worship". That is: if the U.S. government picks a religion we all have to follow, that's oppression. But if your state does it - well, that's states' rights! Freedom!
While many of the social experiments that states run have no impact either way on growth, some of those being pushed by social conservatives (particularly regarding science education) promise to have a negative one. Do we think that Brownback, Jindal, et al. will strengthen or harm science education and research in their states? And that's one of the main problems with social conservatism - its goals, even when expressly declared, rarely amount to more than justifying and extending itself. Ideally these states-as-laboratories experiments, across all parts of the political spectrum, would make measurable predictions about the impact they expect to have in the real world. But as with pundits, politicians aren't in the business of accountability, at least when they can avoid it (which is whenever we let them). Even if Brownback et al. believe they're doing something more than manipulating how demographic inputs are read, there will be no feedback loop for their experiment, and in the minds of local electorates the further ruin of the heartland (the cycle of poor investment in education and technology, crony capitalism instead of real growth, and a focus on social issues that don't matter) will again be seen as the central government's fault - and Kansas can move closer to being America's own Kazakhstan. Fortunately California and New York are there to support them with state-to-state welfare.
Labels:
economics,
libertarian,
politics
Sunday, April 14, 2013
Human Capital-Intensive Fields That Sort Talent Poorly
Since human capital (Adam Smith's labor component) is becoming more and more important as economies develop, sorting talent is becoming more important too. It becomes a concern for economists to ask whether the people with the potential to become the best doctors, pilots, and software engineers are in fact becoming those things, or whether there are barriers (passive and active) preventing this. No doubt we are nowhere near 100% in any field: education is still uneven, there are still active barriers like gender- and ethnicity-based discrimination in most parts of the world, and there are passive barriers, like crude talent-detection tools.
I'm just finishing my third year in medical school, and I've noticed something interesting about how people choose specialties (or how specialties choose them). A very brief description of how medical education works in the U.S. is in order: first, you go to college and get an undergraduate degree in something, usually a science. Then you apply to medical school, which lasts four years. During this time you get a general medical education with experience across most fields and specialties, regardless of whether you're pretty sure you know what you want to go into. (The surgeons have to do psychiatry and vice versa.) When you graduate medical school after those four years, you are a physician, but you haven't been trained in any one area. Toward the end of those four years you choose - do you want to go into family practice, neurology, emergency medicine, etc.? - and you apply for a residency, where for 3-7 more years you are trained in that area. At the end of that, you are finally a family physician, or psychiatrist, or surgeon, or whatever.
Whether and where you get into medical school is based on your GPA as an undergrad and your MCAT, which is a general science and writing test. And it's my impression that this process does its job fairly well: medical students seem to be highly motivated, highly generally intelligent people capable of doing well in medicine. But what I find disillusioning is that there is no really strong talent-detection tool to select for aptitude at the next step, when people are choosing their specialties. So how is it determined what specialty you go into? For the most part, you choose what you liked the most during your third year of medical school, and except for the most competitive specialties, most American medical graduates will get into a program in their chosen field somewhere in the country. What differs is how competitive a program you get into, which does affect where you go from there. And the inputs there are the disillusioning ones. It's basically, in this order: first, how well you score on the first part of your boards, and how competitive the area you're going into is. Second: did the people you worked under in that specialty during medical school like you? (Yes, it's pretty much that subjective; there are grades, but they only vaguely reflect effort and competence.) A distant third: did you not screw up the specific test they gave you at the end of the relevant rotation in medical school? Notice how little effort there is to discriminate a person's specific talent for that particular specialty relative to their peers, and how under-emphasized that is in the grand scheme.
I'll address #2 first. Microsoft commissioned a study where they determined how long it took people to form their impressions of a professional's competence, and found that they often do it before the professional opens his or her mouth, based (apparently) entirely on how much the person matches preconceptions about the appearance and behavior of someone in that profession. So imagine my chagrin when I overhear, repeatedly, students being told by their superiors that they (the superiors) thought the student was going into specialty X, because "you just look like an X". (Seriously.) No doubt personality match and culture in each specialty make a huge difference to how engaged and effective someone will be during their career, but this seems to be missing something important. Older students are often valued in psychiatry, and fortunately my "life experience" will benefit me in the specialty that most appeals to me. But if I had innate talent as a surgeon and chose that path, I would have had a hard road, because a guy my age with a fully formed ego not bound up in the medical status hierarchy doesn't fit the idea of a young nose-to-the-grindstone surgeon in training. I don't look like a surgeon, regardless of how much I might have wanted to be one. (Fortunately for all involved, I was never in any danger of becoming a surgeon.)
As for #1: certain specialties are much harder to get into than others (ortho surgery and dermatology are extremely competitive, for example). So in a way, your board scores do determine which specialties you can go into, but only by how good they are overall. (It's not as though programs really break the score down by subsection; they just look at the main number.) So if you do really well, you have a shot at anything you want. If not, well, you're not going to be a dermatologist. And here's the interesting thing: 20 years ago, you almost couldn't pay people to apply to derm residencies. But then the reimbursement structure of medicine changed quickly, and medical students gradually got wise to this and realized that derm is a specialty with excellent hours and great pay - and to hell with the status hierarchy. What this means is that (no offense to the great dermatologists I've worked with), based on their scores, your average dermatology resident today is more competitive overall than a derm resident 20 years ago. Is this appropriate? That is, have there been so many advances in dermatology in 20 years that derm residents have had to get smarter to keep up, or is it something completely separate from the field itself - and therefore from the talent of the people going into it? This is all to say that stratifying applicants by their general board score is not finding the best dermatologist or OBGYN. It's sorting people by general aptitude, and then there will be a tendency for the top scorers to go into whatever is perceived as high-status and/or well-reimbursed at the moment.
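To make the point concrete, here's a toy simulation (all the numbers, including the 0.4 correlation between general score and specialty-specific aptitude, are invented for illustration, not taken from any real matching data): if slots are filled by ranking everyone on one general score and letting the top scorers claim whichever specialties are currently most competitive, the average specialty-specific fit ends up no better than chance; letting specialty-specific aptitude drive the sorting does much better.

```python
import numpy as np

rng = np.random.default_rng(0)
n_applicants, n_specialties = 1000, 10
slots = n_applicants // n_specialties

# Each applicant has one general score (think board score) and a separate,
# only partly correlated aptitude for each specialty.
general = rng.normal(size=n_applicants)
specialty_aptitude = 0.4 * general[:, None] + rng.normal(size=(n_applicants, n_specialties))

def mean_fit(assignment):
    """Average specialty-specific aptitude of the people assigned to each specialty."""
    return np.mean([specialty_aptitude[assignment == s, s].mean() for s in range(n_specialties)])

# Status-driven sorting: rank applicants by the general score and fill the
# "most competitive" specialties (0, 1, 2, ...) from the top of the list down.
by_score = np.argsort(-general)
status_assignment = np.empty(n_applicants, dtype=int)
status_assignment[by_score] = np.arange(n_applicants) // slots

# Aptitude-driven sorting (greedy): each applicant, in random order, takes the
# open specialty where their specific aptitude is highest.
capacity = np.full(n_specialties, slots)
greedy_assignment = np.empty(n_applicants, dtype=int)
for a in rng.permutation(n_applicants):
    open_specialties = np.flatnonzero(capacity > 0)
    best = open_specialties[np.argmax(specialty_aptitude[a, open_specialties])]
    greedy_assignment[a] = best
    capacity[best] -= 1

print("mean fit, sorted by general score:     ", round(mean_fit(status_assignment), 3))
print("mean fit, sorted by specialty aptitude:", round(mean_fit(greedy_assignment), 3))
```

The general score still does its job of selecting capable people overall; it just carries almost no information about who fits which specialty.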
Two asides. First, letting people choose what they like is probably the best part of the whole process, although this doesn't tell you how many people are choosing on status and money in the specialty rather than talent. Also, obviously there are many medical skills you can't evaluate with a traditional test, so to really find out who the surgeons are (for example) vs. the medicine docs or the neurologists, we would need simulations. Yes, this is difficult and imposes more time and expense on the medical education process, but I think it's worth it so that institutions (and medical students) know that the best-suited people really are going into each specialty, and anyway we already do have a patient simulation test for step 2 of our board certification.
My point: of course these sorts of inefficient sorting mechanisms exist in many places in life. But in human-capital-intensive fields like medicine, with socially valuable outputs, you would hope the search mechanisms would be more robust. I suspect this is partly an artifact of antiquated medical-education conventions, and my hope is that it will improve.
Why Do Subcultures Dislike Lukewarm Adherents?
It's a sad time for subcultures in the developed world. Why? Many subcultures, particularly those made up mostly of young folks (and who else will go out of their way to spend money and effort and incur opportunity cost to signal their identity this way?), have traditionally relied on either seeming a little threatening or being obscure. The internet has made both of those very difficult, and those kinds of subcultures have failed to remain coherent, because they have more difficulty preserving an us-and-them divide.
But people do genuinely differ in their talents and tastes and shared experiences, so SOME subcultures remain. One that fascinates me is Burning Man, which I've written about before. I went to Burning Man once, in 2000. It was fun and a really interesting experience, but I don't see a reason to do it again; partly, too, the art legitimately impressed me, and I don't want to go back without contributing - and even if I had the talent, I don't have the time to make something. But here's the interesting part. From self-described Burners whom I meet casually (i.e., people who are not already friends), I've gotten some pretty thinly veiled hostility at my casual attitude. The only way I can make sense of this is that here's something that forms a big part of their identity, and part of what they like is the specialness of it - not everyone is a Burner, after all - and here I come, in effect saying that it was fun, but that I was able to partake, not be fundamentally changed by it, and not return, out of insufficient excitement rather than outright rejection. (As a patriotic San Francisco adoptee, I have a little bit of the same reaction when someone visits San Francisco from elsewhere in the country and is unimpressed.)
This observation can likely be applied much more generally, but what makes me post about these occasional conversations with annoyed Burners is that part of what seems so great about something like Burning Man is exactly the voluntariness of it: you want to go, great; you don't want to go, great. But that's not the way this attitude makes me feel, and I imagine I'm not alone in that. It would be one thing if people shrugged and said, "Eh, it's not your cup of tea," but in several cases the comments have been more judgmental. Once you go, you apparently have to profess your love indefinitely. So much for voluntariness!
It could also be that there's a certain status associated with being a Burner, and when you're in contact with someone but don't play their status game, there are two ways to go: ignorance and conscious rejection. Ignorance is the clueless foreigner who has different status-determination rules. A proud Mercedes owner doesn't mind that said foreigner is not impressed with his Mercedes, because the foreigner doesn't "know any better", and therefore doesn't count. Much worse is the smartass that says "A Mercedes really isn't that great" or even worse, "I don't care what kind of car you drive." That boils down to, "Yes, I recognize that you value your car (or Burning Man experiences and friendships) highly, but to your face I'm telling you that not only am I not impressed, but I think you have poor values and have made a bad choice about how to measure yourself."
New Call to Regulate Drones: from Google Head Eric Schmidt
This is cross-posted at my technology blog, Speculative Nonfiction.
Article here. There's a clear motivation for governments and the enforcer class to have a monopoly on this technology, and Frank Fukuyama, among others, predicted some time ago that governments would soon start creating this monopoly (is this why Chris Squire put his capital investment - a drone manufacturing facility - across the border in Mexico?). But why a call for regulation from the private sector? I'm not sure what Schmidt is doing here. Is he just going on record stating his discomfort with drones so Google can distance itself from perceived vague connections to the sure-to-come abuses of the technology?
In any event, if you're uncomfortable with your neighbor having a drone, I'm ten times as nervous when the police are allowed to have drones but the rest of us are not.
Preference Falsification and Cascades
Preference falsification is the masking of one's real preferences in order to conform to "received" preferences, that is, to social norms, often to avoid a cost of nonconformity such as social ostracism or criminal punishment. One effect of the phenomenon is that each person who is falsifying - that is, who holds a preference different from the received one - does not know who else might also be falsifying. They could be surrounded by other people who also think that social/government/religious belief X is B.S., or they might be the only one; it's hard to know, since even bringing up questions about X might reveal their actual preference regarding X. When the fact of widespread preference falsification suddenly becomes obvious - everyone realizes that everyone else thinks X is B.S. - things can change very quickly, and in fact this explanation has been advanced to explain "surprising" revolutions (Iran in 1979, the Eastern Bloc in 1989, etc.). This is called a preference cascade; its opposite is a spiral of silence.
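A toy illustration of why cascades are both sudden and hard to predict: a minimal Granovetter-style threshold simulation. The threshold distribution and seed values below are made up; the point is only the qualitative behavior, in which a small change in visible dissent flips the outcome from a stable fringe to a near-universal cascade.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Each person privately thinks X is B.S. but will only say so once the share of
# people already saying so reaches their personal threshold. The distribution
# is purely illustrative; its shape is what drives the dynamics.
threshold = rng.normal(0.25, 0.12, size=n).clip(0, 1)

def run_cascade(visible_seed):
    """Final share speaking openly, given an initial visible share of dissent."""
    speaking = threshold <= visible_seed
    while True:
        share = speaking.mean()
        updated = threshold <= share
        if np.array_equal(updated, speaking):
            return share
        speaking = updated

# Seeds below the tipping point settle at a small dissenting fringe; seeds just
# above it cascade to nearly everyone - the "surprising" revolution.
for seed in (0.01, 0.05, 0.10, 0.20):
    print(f"visible seed {seed:.2f} -> final share speaking: {run_cascade(seed):.2f}")
```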
Another implication of the theory is that secret ballots as we have in the United States should also occasionally lead to such surprising shifts, since behind the curtain you can vote to legalize marijuana or take voting rights away from ethnic/gender group X or whatever it is that you're otherwise afraid to admit in front of your neighbors, and hey, what do you know - everyone else voted that way too. On referenda in the U.S., such surprising preference cascades don't happen very much; I can't think of many examples. Yes, Washington and Colorado just legalized marijuana, but that wasn't a preference cascade, since people told the pollsters they were going to. Another way of saying this is that if the polls were predictive of the actual voting in a secret ballot (as they almost always are in the U.S.) then there was no preference cascade. This may represent a problem with the theory, or reflect that in the U.S. we don't falsify our preferences very often, or that referenda involving highly falsified preferences are somehow kept from the ballots.
The strongest example, indeed one of the only examples, that I can think of in recent U.S. politics is the Bradley effect, where Tom Bradley was projected to win the California gubernatorial race based on poll results, but lost. The revealed preference here? Tom Bradley was black, and it was thought that many voters falsified their racial preference to pollsters but not at the ballot. Note, however, that even in this case the unmasked preference did not result in a cascade of Californians suddenly becoming open about refusing to vote for black candidates.
A second and dare I say humorous phenomenon is observed where the received preference is subject to central control. Think of a dictatorship that modifies its propaganda, or a steep status hierarchy in business or academia. This differs from the usual situation in that centrally controlled received preferences depend on very few nodes in the network and are therefore much more likely to shift rapidly, whereas the "normal" kind (i.e., social norms) are held, or at least claimed, by large numbers of people and are inert over time. The game then becomes changing stated preferences to keep up with the fast-changing centrally controlled ones. For a concrete example: "Eastasia is the enemy! Wait, MiniTruth says Eurasia is the enemy? I misspoke just now; what I meant was that Eurasia is the enemy, and always has been!" A logical next step in this game is a kind of truce where everyone agrees not to point out each other's very recently endorsed and now obsolete received preferences.
Based on the signal you want to send to the central node which has changed the preference, there are two ways to play this game. If the central node can be convinced that you actually believe the new preference, and cares whether you do, then it becomes very important to avoid looking like all you're doing is repeating the received preferences - since that gives away that you probably don't. This is the case in academia or business. In my medical training, very often I've heard an attending object to a resident's treatment plan for a patient, and the resident will change his or her mind - but rather than say "Okay, you're higher on the ladder than me so I'll do what you want", the resident will say, "Oh, uh, well, you're right, I hadn't thought of that and now that I thought of it I realize that's a better idea, let's do it that way." You can't just be a yes-man, you have to look like you really believe it. (I'm always struck by how few residents are willing, even in private, to admit that most of the time, the decision has everything to do with agreeing with your attending, and little to do with evidence.)
On the other hand, in an authority structure where you just want to signal to the central node that "I'll do it your way no matter what," or where they just don't care what you really think as long as you keep your mouth shut, being overt and clumsy about your publicly stated preference-shifting may actually be a good thing. (The footage of Saddam Hussein purging his government, with ministers suddenly leaping to their feet shouting their newfound and obviously motivated loyalty, is a terrifying example.) Then the central node knows you'll endorse whatever preference they tell you to - and there may even be some effect of dampening or confusing conflicting internal preferences through this constant shifting. This is also more likely to be the case where the stated preferences are ethereal - i.e., religious or ideological claims - versus those you expect to directly and practically affect decisions you're making on a daily or hourly basis, as in business or medicine.
Labels:
medicine,
politics,
psychology
Saturday, April 13, 2013
Another Euthanasia Program in the U.S.
Oregon already has such a law - interestingly, it was enacted when an MD was governor - and now Seattle has a pilot program. It's a cliche to point out the irony that we understand it's time to end suffering when extending a pet's life is futile, but that same understanding is a very strong argument for extending these values to our fellow human beings.
Tuesday, April 2, 2013
Age at Marriage and Happiness
The title of this post is a bit misleading, because there is no actual data that I can find answering the question, "Is happiness in marriage related to age at marriage?" But, you might object, every now and then there's an article in a young, liberal publication that tries to get readers to forward it to each other in anger by touching a nerve about the age of heterosexual marriage; here's a recent one that seems to be doing its best to piss off its usual audience by contradicting their assumption that young marriage = uneducated, traditional, unhappy marriage. News flash: in the U.S., age at marriage has been going up, and every year we set a new record. (As I write this it's 29 for men, 27 for women. Note also that this article focuses on heterosexual marriage. Data from gay marriages as compared to straight ones will eventually unmask a lot of the patterns in how this institution works, but that's not what this post is about.)
The perennial question is whether later marriages are better ones. It's worth asking whether the answer matters, since it's not clear that we're really "deciding" to marry later and deliberating the pros and cons, rather than just being products of more complex professions requiring longer education. And the way the question is discussed further clouds the issue, because the outcome these debates focus on is divorce - when divorce is only a proxy for happiness, and probably not a good one.
I've seen data showing that the older you are when you marry, the lower the chance of divorce - and also conflicting data suggesting there is no relationship at all. For the moment let's assume (subject to revision with further data, of course) that the consensus is correct, and the older you are, the lower the risk of divorce. Is this necessarily because the marriage is better, or just because you're older and have fewer mate-finding options once the marriage ends? If you're on the "later marriage is better" side, and you think that because later marriages are (probably) less likely to end in divorce, the answer must obviously be that later marriages are happier - then what about arranged marriages? They seem to have a far lower rate of divorce, although again it's hard to make a direct comparison not confounded by other factors. For what it's worth, globally, arranged marriages have a 6% divorce rate. The United States has a 54% divorce rate (and the vast majority of those marriages are not arranged). So are these more "successful" arranged marriages lasting because they're happier? Or because there's pressure forcing two miserable people to stay together to avoid greater social and familial censure? (That would equate to a much thicker "marriage cushion" as referenced here.) In any event, it's fairly easy to find divorce statistics for first marriage at age X, but I wasn't able to find anything (recent) for happiness or satisfaction in marriage with first marriage at age X - that would have to be collected a lot more actively. Until some awesome psychologist does such a study, we'll have to keep using divorce as a proxy indicator.
A separate concern is that by making informed decisions to maximize personal happiness, we're actually working against our own reproduction. That is to say, it's possible that by thinking about what makes us happy and making good decisions with our frontal lobes to pursue it, we don't get married and we don't reproduce, at least not as effectively. (This is in contrast with the way we've reproduced up until basically the past century: from a very small pool of potential mates, with very poor information, and with the females more or less forced into the deal.) That having the freedom and ability to actually ponder our options could even be stated as a concern admittedly cuts at the foundation of the Enlightenment, but there are strange realities: for example, that having kids makes you less happy. Well, think about it - of course it does! So if everyone is making decisions to maximize happiness, why would we last beyond this generation? (Answer: we wouldn't.) And what do we see in liberal democracies in the West? A decreasing birth rate - especially where women have an increasing say in work and education, and therefore in how the family grows and runs. (Although this is probably also partly a result of the lower average fertility of later marriages.) And marriage can be hard too - so might not the same argument apply to delaying marriage, that people are better informed and able to make more rational decisions to maximize their happiness?
(Image from The Onion, of course.)
It turns out that people are more satisfied on average after they get married than people who don't marry (or at least Germans are, per Zimmerman and Easterlin 2006), so happiness-maximizers won't avoid marriage completely - although men seem to be the winners in the deal in terms of both income and life span, whereas women with large age differences relative to their husbands can actually lose life span. So if anyone would be rational to avoid marriage altogether, it would be women. Of course, if we made only decisions that made us happy in the near term, even when they stopped us from reproducing, we wouldn't be here - which is exactly my point. Inherited biological imperatives lead us to make choices that are not predicted by utility maximization, but when we actually do rationally maximize utility, we get into uncharted territory.