Monday, August 23, 2010
Why Does Xinhua Have a File Photo of Emperor Palpatine?
Similarly, the San Francisco Chronicle also has its file photo of the owner of the Oakland Raiders mixed up with a still frame of an orc from Lord of the Rings (scroll to the end for it).
Saturday, August 21, 2010
Legislative Sclerosis and Subordinate Legal Systems
At the beginning of my first year of medical school, the administration told us we could write our own honor code. Because during my undergrad years I had a bad habit of over-committing my time to non-career-supporting activities, I banished all thought of participating. But a few days later, when the writers were in front of us presenting the honor code they'd written, it occurred to me: I go on and on here on this blog about various social theories, including social contracts, gaming the system by over-specializing legal language, and compiling constitutions in a legal programming language. Yet here was an opportunity to draft a new social contract, and I'd rejected it! Then again, so for that matter did most of the class. (Interestingly, the students who did choose to participate were demographically non-representative of the student body as a whole, but I won't say exactly in what way, because it would change the focus of the post to something controversial.)
Even when it was presented, people didn't seem to care, and I was one of the non-carers. Why? There was no revolution going on. We weren't Jefferson or Lenin drawing up the new law of the land; maybe if we'd been on a desert island a la Lord of the Flies the code would have some meaning, but as it was, such a code has the backing of a sovereign organization only insofar as it conforms to the sovereign organization's aims. The student body's regulations are essentially those of a reservation within the medical school as a whole. This is not a complaint about my own institution; I can't imagine how it could be different anywhere else, or why schools would want it to be. For example, if we'd drawn up the following:
1) We don't have to take the exams or show up for labs but we still pass.
2) If we do take the exams, we can look at other people's tests.
3) We can perform whatever criminal activities we want while we're in school and there will be no repercussions for us.
4) A beer fountain on the quad every Tuesday. (Why wait for a beer volcano in heaven?)
But I'm guessing that wouldn't have flown, because it wouldn't accord with the sovereign's pre-existing rules.
I'm sure you're not surprised by the lack of civic enthusiasm for student government legislation, which at any rate seems universal. But there are two interesting observations here. First, the reason no one cares about student government laws is that everyone knows if there's ever real trouble, it's the sovereign organization that has the real power. Yet in the U.S., people respect state and city governments even though those are ultimately beholden to the Federal government. What's the difference? I suspect there are three factors: 1) Those governments have police, whose right to operate as such is reciprocally recognized. (The Iroquois Nation recently found out about the importance of reciprocal recognition.) 2) Those governments have money. 3) A psychological coordination game - a political entity just seems "real" once it and the people+territory it governs reach a certain size.
Second, the laws of limited, subordinate organizations must conform to the sovereign organization's laws - that is, be at least as restrictive, or more so - which means that each successive layer of government makes a more restrictive overall body of legislation almost certain.
Ultimately the issue is that we humans have figured out very few land-use arrangements that allow multiple political entities to share ownership of a parcel. This is in contrast to agreements about pieces of property between individuals, or agreements about individuals' labor. Yes, a piece of land can belong to a person as well as lie within a country, but only rarely have two separate political entities agreed to administer the same territory. (A notable exception in U.S. history is the agreement between the U.S. and Britain to co-develop Columbia - the Oregon Country - from 1818 through 1846, all the more remarkable because the agreement was concluded between two countries that had been at war less than four years before; and here's what might have happened if hotheads had prevailed in the American government at the end of that period.) The existence of multiple political entities administering the same piece of territory would allow actual competition, and avoid pitfalls like the sclerotic effect of multiple levels of government, or the regulatory ratchet problem. U.S.-and-British Columbia might not have been the best place to test this theory, since there was hardly anybody there, and it was the relative number of settlers that decided the question. Post-colonial enclaves like Hong Kong, where a regional culture and government survive as a result of a past collision between two cultures, could be argued to be a half-way point to geographically simultaneous competing political systems.
Currently emigration is one of the few things that drives innovation - you lose people if you don't fix the things that are broken about your state - but there are barriers there too. From the emigrant's perspective, it costs money to emigrate, you probably have to learn a new language and new customs, and you lose your social network; and the people in charge often either don't care or don't understand what's happening, so the feedback loop is broken.
Labels:
morality,
politics,
regulation
Wednesday, August 11, 2010
Medicare and Bush's Tricks
Unfortunately, Bush's tricks are now Obama's tricks too. Just as Bush put out budgets that had little to do with reality (especially regarding projected deficits, relying on unrealistically rosy best-case-scenario projections), Obama's Medicare cost projections are dangerously unrealistic. Medicare's chief actuary says: "There is a strong likelihood that the cost projections in the new trustees report under current law understate the actual future cost that Medicare will face." A strong likelihood. More here.
If Bush's budget fantasies bothered you - and they should have - then so should this.
Sunday, August 8, 2010
Spider Web in Barbed Wire
If I'd had a digital camera back then (2001 - did anybody?) I would've kept taking shots until I was sure I had it. Then again, I kind of like it as it is, with just the barest suggestion of the concentric strands of the web inside the wire.
Saturday, August 7, 2010
Cultures Cannot Suffer
Cultures cannot suffer. Therefore it is senseless, and even harmful, to talk about actions that harm or destroy cultures as immoral on that basis alone. Doing so places a higher value on protecting the perceived qualities of an abstract entity than on preventing the suffering of conscious human beings. There is and can be no innate tragedy in the death of a culture or language apart from the suffering it causes to individual human beings.
The suffering caused by a culture's death or change can arise for various reasons. At the most basic level, adult humans don't appreciate the disruption involved in learning new social norms; it's just a pain. But there are additional, unnecessary causes of suffering built into the values of some cultures: namely, an explicit and conscious value that preserving the culture is itself innately good, and that if the culture were to change or disappear, it would be a moral disaster.
What prompted this post was Katja Grace's discussion of deliberately bringing up children to speak minority languages, thereby limiting their social contact with the rest of the world: "Agreeable Ways to Disable Your Children". (There are links to far more controversial proposals that have actually occurred in the real world.) By raising children to be monolingual in an obscure language within a broader nation-state, parents are preserving their language but limiting (disabling?) their children. Humans do indeed sometimes keep their children isolated in ingroups, and one of the best insulation methods is teaching them only the ingroup's language. This is one example where a moral value about preserving culture is inconsistent with preventing individual suffering. The language doesn't care that you preserved it; if you force yourself to perpetuate it, you're just making things harder on yourself and your children.
There is a more general argument to be made here: more conservative, less open cultures - those which most strongly harbor an explicit value of preserving themselves for their own sake - are doing something similar to their members, limiting them and making them suffer unnecessarily.
To make this slightly more concrete, imagine two cultures which differ in cultural conservatism. Culture A is a mercantile civilization that is frequently changed by its people's contact with foreign lands; they shrug and adopt the food and ideas of the people they meet. Culture B on the other side of the river is more conservative, with an explicit and conscious moral sense that their culture is intrinsically valuable, and that if it were ever lost, it would be the end of the world. Culture B constantly fights the advance of the new and its people wring their hands and gnash their teeth at the strange food and ideas polluting the next generation. In the end the material conditions of both places are the same, but the people in Culture B have suffered more, and unnecessarily. (If nation-states are cultures, then I would put the U.S. in Culture A's category. We've changed far more than we realize due to contact with other cultures, and I expect in 50-100 years we'll be, for example, far more Asian than we are now. And because we're an A-Culture, it fortunately won't bother us that much.)
On the other hand, culture is not just noise; cultural values can be better or worse in their impact on material well-being, so there is potentially still bad news in culture change beyond just the degree to which the culture causes its members to consciously decry change. But again the change or loss of these values must be evaluated only with regard to the effect on individual humans. It's tempting to object that we're still assuming values here in order to evaluate their worth. But there is a baseline that exists independent of acculturation, and it's composed of animal essentials - food, shelter, sex, and status.
[Added later: The Wall Street Journal published this article about an Eyak language preservation enthusiast. To put it bluntly: what is the value of this work, to Eyak descendants or anyone else? Thanks to Thurston for the pointer.]
"Doctors Tea Party" In San Diego
Because I will be a doctor in 3 years if things go according to plan, I'm of course concerned about any and all changes to the medical marketplace that state programs will bring about.
Given that I have libertarian leanings, I was interested to learn that an organization of physicians critical of the Obama administration's current and planned changes to medicine - the American Association of Physicians and Surgeons - is meeting where I live, in San Diego, tomorrow.
Then I clicked on the event website and saw the speakers: among them Christianist Sharron Angle, and Joseph Farah from the unhinged World Net Daily.
So here's an open letter to organizers: you have now alienated one of the few pro-free-market medical students in my class at UC San Diego by choosing these kinds of openly theocratic clowns to represent you. I'm a capitalist but I'm sorry to say I want nothing to do with your organization as long as I'm excluded based on religion, and as long as you seem as confused as you are about what's more important, economics or faith. Good luck trying to appeal to any demographic but middle-aged and older straight white Christian males.
Labels:
capitalism,
health,
politics
Monday, August 2, 2010
How We Filter Arguments: Valid, Relevant, and Ridiculous
The Doomsday Argument as put forth by Nick Bostrom (and others) is a form of the self-sampling assumption applied to the continued existence of the human species. In short, the Doomsday Argument states that we can reasonably assume we are substantially closer to the end of the human species' existence than to its beginning. Hence the sunny name. Bostrom has complained that on hearing this argument, most people dismiss it outright as ridiculous, but do so without a real counterargument, or even an honest attempt at one. That is to say, the argument is not really rejected; it's filtered without being evaluated. Whether or not this kind of filtering is ever legitimate or safe, we have no choice but to have some criteria for doing it.
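The arithmetic behind the argument is simple, and worth sketching so you can see what's being dismissed. A minimal illustration (my own, not Bostrom's exact formulation; the 60-billion figure for humans born so far is a common ballpark, not a precise count): assume your birth rank n is uniformly distributed among all N humans who will ever be born. Then with confidence c, N <= n / (1 - c).

```python
# Sketch of the Doomsday Argument's arithmetic. Assumption: your birth
# rank n is uniformly distributed among all N humans who will ever be
# born, so P(n/N <= c) = c, which rearranges to N <= n / (1 - c).

def doomsday_bound(birth_rank, confidence=0.95):
    """Upper bound on the total number of humans ever born,
    holding at the given confidence level."""
    return birth_rank / (1 - confidence)

# Roughly 60 billion humans born to date is a commonly cited ballpark.
n = 60e9
print(f"95% upper bound: {doomsday_bound(n):.3g} humans ever")  # ~1.2e12
```

At 95% confidence the bound is about 1.2 trillion humans total, ever - which at current birth rates runs out unsettlingly soon, and that is the whole "doomsday" punchline.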
I think adherents of the Doomsday Argument would agree that this particular chain of reasoning runs counter to our desires and intuitions, and that the argument's degree of abstraction is one factor that helps rejectors call it "ridiculous". To be clear, I'm not implying anything about the validity or invalidity of Doomsday; but because of these characteristics, it's a good example of an argument that many readers will react to similarly and consider "ridiculous". This raises questions about what we mean when we call an argument ridiculous, and how and why we filter arguments without actually evaluating them.
There are multiple ways that humans attempt to influence other humans, and outside of force, most of these ways involve language. These attempts to manipulate each other, via valid arguments or otherwise, do not occur in a vacuum. Literate people in industrialized societies are bombarded every waking minute with statements by other agents with their own interests who intend to change our behavior. Most of these statements don't bother with any semblance of logical coherence.
Of the attempts to influence that do at least look like arguments (whether or not they really are coherent and valid), a large portion (no doubt the majority) are invalid, advanced either in earnest by claimants unable to see the faults in their own arguments, or by claimants who are at best indifferent to the validity of their own arguments as long as they create the desired change in the behavior of their audience. The problem is that there are only so many hours in a day, and it takes time and effort to evaluate arguments, and we don't know until we evaluate them which are coherent and valid. Therefore we end up rejecting most arguments without actually evaluating them. This should be no cause for guilt. I strongly suspect that the average blog reader encounters far more arguments per day than Aristotle or Descartes did in their prime. In modernity the possible substrates for arguments are much greater, as are the channels by which we can receive them. This is why instead of evaluating and rejecting arguments, we filter them, i.e. ignore them. Sometimes we do this by calling them "ridiculous".
This might not be so dangerous if we were able to keep accurate labels in our mental catalog, but chances are, if you filtered an argument last month and it comes up again, you won't remember that you provisionally rejected it without actually evaluating it, you'll remember that you thought it was "ridiculous". It would be nice if we at least had a cognitive junk mail pile for those arguments. Therefore, knowing whether you've filtered or legitimately rejected an argument, and what your filtering system is, are very important.
To help us we use heuristics, which are usually social-network-influenced ways of approximating truth values. "I don't have the time or background to think through argument ABC, which I just encountered. But this is the first time I'm hearing it, and if such a profound argument were true, I would have heard of it already, or experts would be discussing it prominently in the media." Or, "A moral authority I respect has not heard of this or actively rejects it. Therefore, it is probably wrong." Or even, "I mistrust this person, or this person is trying to get me to buy something/vote for them/sleep with them; therefore this thing they told me is likely false." These are not bad ways to make your argument filtration more accurate, but again we forget which rejections are provisional ones based on "My brother told me it's hogwash" and which are full critical rejections.
Furthermore, if we use these social weighting methods, then the population dynamics of the spread of an argument become very important, and of course in most cases the spread is related much more to an argument's claimed relevance than to its merits. Some of these heuristics do seem to improve our chances of "buying" valid arguments: whether the source is secure enough to welcome critical approaches to the argument; the other positions the source holds, especially if they are normally hostile to the position the argument leads to; and repeated exposure to the argument. That is, "Everyone's talking about it, so it must be meaningful or useful, and besides I don't want to look stupid by not having considered it."
Talk is cheap, and arguments are vulnerable to a cheap-signaling-type exploitation, namely the argument's superficial relevance to the argument-hearer. If we want our arguments heard, we don't work on the logic, we work on the apparent relevance. (In most circles. Most humans are not graduate students in philosophy.) You're not very likely to spend your finite efforts parsing arguments that don't relate with high probability to anything in your current or future experience. But when someone tells you, a twenty-first century technology user, that "cell phone use causes brain cancer", it might be a good idea to actively pursue that line of reasoning to be sure it's false. But we muddy the waters because we all put relevant premises or conclusions in our arguments to get attention for them. Absent any source-weighting, and as long as the argument isn't "ridiculous" (more on what this means later), you're inclined to listen.
This is all to say: we want to spend as large a fraction of our attention as possible evaluating arguments in category B below, but until we spend the time we don't know whether those arguments actually belong to A instead (and most probably they will). We don't care enough about arguments in C or D to decide on their validity, because they don't seem to relate to anything that makes a difference to us, so we throw them into the argument spam folder. That is, even if we can't tell without deliberation whether arguments are valid (the right column), we can usually tell at first glance whether they claim relevance (the upper row):

                 Invalid   Valid
Relevant            A        B
Not relevant        C        D
A "ridiculous" argument is therefore one which a) claims to be relevant, b) makes an argument which, if accepted, would require the audience to substantially update their model of the world, and c) which the audience therefore rejects without evaluating. An argument that is irrelevant can't be ridiculous: you might hear an airtight, clearly communicated argument that Genghis Khan was ambidextrous, and though it sounds reasonable, you probably don't care enough to worry about it or actively call it "ridiculous" unless you're a historian of medieval Asia. Of course we do save a lot of time here, because the majority of arguments making demands on our attention by claiming relevance are neither relevant nor valid.
So what properties make us likely to call an argument ridiculous, i.e. subject to dismissal despite its claimed relevance and without being evaluated for validity? For now I'll stick to the reasons that rejectors would themselves report.
Extreme implications - any argument involving a conclusion that an object can exist, or an event can occur, of a magnitude or quality unobserved in the hearer's life or the hearer's account of history. This is especially true for outcomes that are very pleasant, very unpleasant (as with the Doomsday Argument), or very strange.
Novel or strange relationships - arguments that entities in what seem to the audience to be completely separate categories are in fact related.
Arguments that require a substantial change in behavior - unsurprisingly.
Contradiction of immediately perceivable reality - also unsurprisingly.
Contradiction of currently held beliefs - perhaps most unsurprisingly.
Note that argument structure and source are not on this list. Once we deem an argument ridiculous we may resort to picking on its structure or source for further validation, but this isn't critical thinking, and these characteristics do not trigger the initial labeling of "ridiculous".
Readers may notice the similarity of the relevance-vs-validity table to the urgent/not-urgent, important/unimportant productivity table. We tend to spend too much time doing urgent but unimportant things (in corporatese, "putting out fires"), and not enough doing not-urgent but important things. The equivalent mistakes in argument filtering are that we spend too much time thinking about A-arguments (relevant, invalid), so we overcompensate by throwing out any argument which would make a dent in our belief network (some of which are certainly true!), while there are likely D-arguments (not clearly relevant, valid) which actually might affect us. These are the rhetoric-parsing consequences of our limits in correlating beliefs, as well as the human tendency to epistemological homeostasis: our strong tendency to preserve the status quo in our worldview and avoid updating our beliefs.
* * *
Language allows us to adopt beliefs about phenomena and relationships that we have not directly observed. As there are more humans communicating with each other through more channels, the amount of propositions we are exposed to will increase, but our cognitive bandwidth will not. The need for some argument filtration is unavoidable. Consequently, we use shortcuts to avoid fully evaluating every argument we hear: without evaluating them for coherence and validity, we reject arguments that are not relevant and we reject arguments that conflict with what we believe we know to be true (these we call "ridiculous"). However, the danger is that in both cases we do not cognitively categorize these beliefs as provisionally rejected, feeling that in fact the beliefs were positively refuted.
I think adherents of the Doomsday Argument would agree that this particular chain of reasoning runs counter to our desires and intuitions, and that the argument's degree of abstraction is one factor that aids rejectors in calling it "ridiculous". To be clear, I'm not implying anything about the validity or lack thereof of Doomsday; but because of these characteristics, this is a good example of an argument that many readers will react similarly to and consider "ridiculous". This raises questions about what what we mean when we call an argument ridiculous, and how and why we filter arguments without actually evaluating them.
There are multiple ways that humans attempt to influence other humans, and outside of force, most of these ways involve language. These attempts to manipulate each other, via valid arguments or otherwise, do not occur in a vacuum. Literate people in industrialized societies are bombarded every waking minute with statements by other agents with their own interests who intend to change our behavior. Most of these statements don't bother with any semblance of logical coherence.
Of the attempts to influence that do at least look like arguments (whether or not they really are coherent and valid), a large portion (no doubt the majority) are invalid, advanced either in earnest by claimants unable to see the faults in their own arguments, or by claimants who are at best indifferent to the validity of their own arguments as long as they create the desired change in the behavior of their audience. The problem is that there are only so many hours in a day, and it takes time and effort to evaluate arguments, and we don't know until we evaluate them which are coherent and valid. Therefore we end up rejecting most arguments without actually evaluating them. This should be no cause for guilt. I strongly suspect that the average blog reader encounters far more arguments per day than Aristotle or Descartes did in their prime. In modernity the possible substrates for arguments are much greater, as are the channels by which we can receive them. This is why instead of evaluating and rejecting arguments, we filter them, i.e. ignore them. Sometimes we do this by calling them "ridiculous".
This might not be so dangerous if we were able to keep accurate labels in our mental catalog, but chances are, if you filtered an argument last month and it comes up again, you won't remember that you provisionally rejected it without actually evaluating it; you'll remember that you thought it was "ridiculous". It would be nice if we at least had a cognitive junk mail pile for those arguments. Knowing whether you've filtered or legitimately rejected an argument, and what your filtering system is, is therefore very important.
To help us we use heuristics, which are usually a social network-influenced way of approximating truth values. "I don't have time or background to think through argument ABC I just encountered. But this is the first time I'm hearing it, and if such a profound argument were true, I would have heard of it already, or experts would be discussing it prominently in the media." Or, "A moral authority I respect has not heard of this or actively rejects it. Therefore, it is probably wrong." Or even, "I mistrust this person, or this person is trying to get me to buy something/vote for them/sleep with them, therefore this thing they told me is likely false." These are not bad ways to make your argument filtration more accurate, but again we forget which are provisional rejections based on "My brother told me it's hogwash" and which are full critical rejections.
Furthermore, if we use these social weighting methods, then the population dynamics of the spread of an argument become very important, and of course in most cases the spread is related much more to claimed relevance than it is to the merits of that argument. Some of these heuristics do seem to improve our chances of "buying" valid arguments: whether the source is secure enough to welcome critical approaches to the argument; the other positions the source holds, especially if they are normally hostile to the position to which the argument leads; and being repeatedly exposed to the argument. That is, "Everyone's talking about it, so it must be meaningful or useful, and besides I don't want to look stupid by not having considered it."
Talk is cheap, and arguments are vulnerable to a cheap-signaling-type exploitation, namely the argument's superficial relevance to the argument-hearer. If we want our arguments heard, we don't work on the logic, we work on the apparent relevance. (In most circles. Most humans are not graduate students in philosophy.) You're not very likely to spend your finite efforts parsing arguments that don't relate with high probability to anything in your current or future experience. But when someone tells you, a twenty-first century technology user, that "cell phone use causes brain cancer", it might be a good idea to actively pursue that line of reasoning to be sure it's false. But we muddy the waters because we all put relevant premises or conclusions in our arguments to get attention for them. Absent any source-weighting, and as long as the argument isn't "ridiculous" (more on what this means later), you're inclined to listen.
This is all to say: we want to spend as large a fraction of our attention as possible evaluating arguments in category B, but until we spend the time we don't know whether those arguments actually belong to A (and most probably will). We don't care enough about arguments in C or D to decide on their validity because they don't seem to relate to anything that makes a difference to us, so we throw them into the argument spam folder. That is, even if we can't tell without deliberation whether arguments are valid (right column), we can usually tell at first glance whether they're relevant (upper row).
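The quadrant labels can be made concrete with a small sketch (hypothetical code, not from the original post; note that it assumes validity is already known, which in practice is exactly the costly part):

```python
# Hypothetical sketch of the relevance/validity quadrants described above.
# Rows = relevance (known at a glance), columns = validity (costly to determine).

def quadrant(relevant: bool, valid: bool) -> str:
    """Label an argument with the post's A-D cell names."""
    if relevant and not valid:
        return "A"  # relevant but invalid: where most attention leaks away
    if relevant and valid:
        return "B"  # relevant and valid: where attention should go
    if not relevant and not valid:
        return "C"  # irrelevant and invalid: safely spam-foldered
    return "D"      # irrelevant-seeming but valid: the quiet losses

# The filtering problem in miniature: we route on the row alone,
# because the column is unknown until we do the work of evaluation.
def filter_at_a_glance(relevant: bool) -> str:
    return "evaluate (hoping for B, probably A)" if relevant else "spam folder (C or D)"
```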
A "ridiculous" argument is therefore one which a) claims to be relevant, b) makes an argument which, if accepted, would require the audience to substantially update their model of the world, and c) which the audience therefore rejects without evaluating. An argument that is irrelevant can't be ridiculous: you might hear an airtight, clearly communicated argument that Genghis Khan was ambidextrous, and though it sounds reasonable, you probably don't care enough to worry about it or actively call it "ridiculous" unless you're a historian of medieval Asia. Of course we do save a lot of time here, because the majority of arguments making demands on our attention by claiming relevance are neither relevant nor valid.
So what properties make us likely to call an argument ridiculous, i.e. subject to dismissal despite claimed relevance and despite not being evaluated for validity? For now I'll stick to the reasons that rejectors of "ridiculous" arguments would themselves report.
Extreme implications - any argument involving a conclusion that an object can exist, or an event can occur, of a magnitude or quality unobserved in the hearer's life or the hearer's account of history. This is especially true for outcomes that are very pleasant, very unpleasant (as with the Doomsday Argument), or very strange.
Novel or strange relationships - arguments that entities in what seem to the audience to be completely separate categories are in fact related.
Arguments that require a substantial change in behavior - unsurprisingly.
Contradiction of immediately perceivable reality - also unsurprisingly.
Contradiction of currently held beliefs - perhaps most unsurprisingly.
Note that neither argument structure nor source is on this list. Once we deem an argument ridiculous we may resort to picking on its structure or source for further validation, but this isn't critical thinking, and these characteristics do not trigger the initial labeling as ridiculous.
Readers may notice the similarity of the relevance vs validity table to the urgent/not urgent, important/unimportant productivity table. We tend to spend too much time doing urgent but unimportant things (in corporatese, "putting out fires"), and not enough doing not-urgent but important things. The equivalent mistakes in argument filtering are that we spend too much time thinking about A-arguments (relevant/invalid), so we overcompensate by throwing out any argument which would make a dent in our belief network (some of which are certainly true!), and there are likely D arguments (not clearly relevant, valid) which actually might affect us. These are the rhetoric-parsing consequences of our limits in correlating beliefs, as well as the human tendency toward epistemological homeostasis: our strong tendency to preserve the status quo in our worldview and avoid updating our beliefs.
Where Are China's Ancient Monuments?
David Frum has written a review of Mark Edward Lewis's The Early Chinese Empires (Belknap Press; Vol. I of VI). Among many pithy reflections, he offers this:
Despite the amazing and even terrifying continuity of Chinese culture, it is really astonishing how little of ancient China there remains for anybody to look at. Lewis off-handedly mentions at one point that there remains not a single surviving house or palace from Han China. There are not even ruins. There's no equivalent of the Parthenon or the Roman Forum, no Pantheon or Colosseum. You can come closer to the present: There's no real Chinese equivalent for Notre Dame or the Palazzo Vecchio. For all its overpowering continuity, China does not preserve physical remains of the past.
It must be interjected here that a certain Wall does leap to mind, but beyond that, can you name another monument? This absence is indeed striking, and once stated seems as if it should always have been obvious. Why? Are people's identities wrapped up in culture but not at all in specific government administrations, thereby obviating nostalgia? ("We're in China, regardless of whether our own Parthenon still stands.") Or, for the conspiracy-minded, might it result from a pragmatic policy by which each conqueror or dynasty erased the monuments of preceding rulers so they could create the past at their leisure, as Orwell suggested an ideal dictator would? Or maybe it's something far more mundane: a reliance on wooden structures that, unlike the Colosseum or the Pyramids, don't weather so well?
Finally, maybe millennia of strong central government over a large nation left no room for the provinces to leave behind their own castles? Japan has a rich heritage of castles and kofun, but Japan was also not strongly unified until the seventeenth century.
Can You Tell the Non-Japanese Actors in Memoirs of a Geisha?
When the movie came out, there was controversy in certain quarters because most of the Asian roles were played by non-Japanese actors. In my anecdotal experience, the Japanese-American and Japanese expat communities did not consider this a problem, precisely because East Asians look similar to one another; so why not cast non-Japanese in some roles?
If you're Caucasian, turn the tables. You might ask the same question of a movie with an all-European cast: do you think Braveheart cast only Scottish actors? Of course not! Why? Because West Europeans look similar to each other. Dienekes gives the results of a quiz to see if people could guess (if you want to test yourself, don't look at the top of the article because the answer is there; now here's the link.) Did you do any better?
It would be really interesting to have a bunch of pictures of Europeans - and Asians, and people from everywhere else for that matter - that the crowd can vote on to see if we really can tell a difference between adjacent nationalities. My guess is usually not.
Sunday, August 1, 2010
Intelligence and Suffering
In Dune, the test of the gom jabbar measures self-control; it "kills only animals". The subject places a hand into a box and, once the hand is inside, feels a gradually growing pain which, though not tissue-damaging, eventually becomes the most intense he has ever experienced. If at any point he attempts to withdraw his hand, he is stung by the gom jabbar itself, a poisoned needle, and dies.
This is related to the forbidden marshmallow impulse-control test of Mischel et al, though perhaps more dramatic. While delayed gratification is related to planning, future success and even intelligence, it's interesting and a bit dark to note that a positive behavioral attribute is measured by how much someone can make himself suffer.
Where Are the East Asian Histories of Greece and the Roman Empire?
Apparently Umberto Eco had a project to encourage the research and publication of exactly such works, on the assumption that a fresh outside perspective would have much to teach us about our own history. I would enthusiastically read such histories, but it seems Eco's program was a bust; I don't know of any. Am I missing them? If not, why are there none? On the other side, there are certainly plenty of English-language histories of China.
I should add that modern (and "post-modern") writers are not the literary Ostrogoths they're often called in book reviews by grumbling columnists. They usually show a deep understanding and love for the classics, and continue to mine them in inventive ways that make them even more relevant. Italo Calvino is another excellent example.