Former Australian Prime Minister Gough Whitlam died recently, at age 98. Predictable hagiographies followed, with cringe-inducing link titles like 'Gough Whitlam a Martyr and a Hero'. This causes right-thinking people to be torn between the polite and worthy tradition of not speaking ill of the recently-deceased, and a mildly grating feeling that the hagiographers write the narratives when this happens. Obituaries are hard to do well, that's for sure, and most don't even really try.
Say whatever else you will about Gough Whitlam, but he was a transformative Prime Minister. Unfortunately, the balance of this transformation was decidedly negative. To his credit, Whitlam enacted some truly good policies, most notably abolishing the draft and cutting tariffs. He also brought in some others that were probably inevitable, like no-fault divorce and recognition of China. But he had some disastrous ones as well. The Racial Discrimination Act was probably his most poisonous legacy, most recently in the news for its part in the trashing of free speech in the prosecution of Andrew Bolt. Getting rid of university fees almost certainly contributed to the permanent underfunding and subsequent underperformance of Australian universities to this day. He also cut off Rhodesia (leading it to the brilliant sunlit uplands it's in today), and rewarded the buffoonish Lionel Murphy for his bizarre raids on ASIO offices (which tarnished Australia's reputation as a serious state in intelligence matters) by appointing him to the High Court (where he was predictably and comically awful).
But the big irony of the Whitlam years involves the Liberal Party. They struggled so mightily to unseat him, including blocking the funding of government to provoke a constitutional crisis. Blocking supply, I might add, is something the Libs attract oddly little criticism for, given its role in the whole affair. Whitlam was famously dismissed by Governor-General John Kerr (who has been the bogeyman of the Labor Party faithful ever since). Whitlam was then voted out by a huge margin in the ensuing election (a fact that Whitlam fans never seem to discuss very much, since it doesn't fit the narrative very well).
So the Liberals finally won their big victory over Whitlam! And what was their big reward?
Seven-odd years of Malcolm Bloody Fraser, the most disappointing Liberal Prime Minister ever, and one of the worst overall (giving Gough a red hot go for that title).
If the election is between Fraser and Whitlam, honestly, why even bother? It's like the David Cameron v Gordon Brown election - as Simon and Garfunkel said, every way you look at it, you lose.
Thankfully, conservatives eventually had something to cheer for when Fraser was kicked out and Australia finally got some sensible and important economic reforms, coming from... Labor Prime Ministers Bob Hawke and Paul Keating! The former was excellent, the latter was pretty decent too (and superb as Hawke's treasurer). Ex-post, is there a single member of the Liberal Party today (excluding the braindead and the hyper-partisan) who, if sent back in time to 1983 but knowing what they do now, would actually vote for Fraser over Hawke?
And yet Whitlam is the 'hero and the martyr'. Hawke plays second fiddle in Labor Party folklore, despite being excellent in ways that were of mostly bipartisan benefit (floating the dollar, cutting inflation, and other instances of important micro-economic reform).
Yeah, I don't get it either.
Monday, October 20, 2014
More Gold
It always warms my heart when the mere title of an essay makes me laugh. Herein, the estimable Theodore Dalrymple, with 'Your Dad is Not Hitler'. His other recent essay, 'A Sense of Others', is also fantastic.
Honestly, if Taki's Magazine had Moldbug and Heartiste writing for it, one would scarcely need to go anywhere else.
Sunday, October 19, 2014
Yes, we are still on for the thing tonight, just like we said, god dammit.
Continuing my descent into old fogey-ness, I seem to have encountered another shift in the zeitgeist that marks off my age. The first one was the enormous increase in the number of text messages sent by the average teenager. But that was something one would mostly only notice with a teenager around the house, and since this doesn't apply to me, I only find out about it from the odd magazine article.
But there is another trend that I have had cause to experience firsthand - the proliferation in confirmatory text messages over every social arrangement.
Up until recently, my general presumption was that things worked as follows:
- You and person X would agree to do activity Y at time Z.
- If one of you couldn't make it, you would inform the other ahead of time.
- Absent that, it is assumed that the arrangements stand and you both turn up at time Z.
You, like me, might presume that this is how things still work, yes?
You, like me, might end up being rather surprised.
These days, a lot of people, particularly young people, seem to have decided collectively that they're switching from an opt-out system of arrangements to an opt-in one. In other words, plans to do things in two days' time are merely a suggestion, a vague agreement-in-principle. If you actually intend to follow through, you have to confirm this.
I found this out when I started getting messages asking if we were still on for what I considered agreed-upon plans. I used to respond with 'of course' or something like that, wondering vaguely why this was now the thing people did, but dismissing it as evidence of their neediness or insecurity. Confirming seemed pointless, but not a big deal.
I remember complaining to a friend, and saying how refreshing it was to find people who didn't need this. I was meeting someone new for coffee that evening, and was glad that we hadn't done the obligatory text message dance, which seemed like a good sign. That is, until she didn't actually turn up. She had apparently decided that not receiving a confirmation was an indication that things were cancelled, so much so that she hadn't even bothered to message me to check.
To paraphrase Frank Costanza, as I rained abusive text messages on her, I realized there had to be another way. After my rage subsided, it became pretty clear that my attempts to fight a rearguard action against the culture were as doomed as the 1950s protests against rock and roll. So I now suck it up and send confirmatory messages. Sometimes one still isn't enough - I've sent a confirmation the night before, only to get another query confirming things an hour before. Who are these people, and what on earth is wrong with them?
I think the reality is that people have become so flaky that this is actually the more efficient social arrangement. When enough people become sufficiently inconsiderate that they cancel at the last minute all the time, confirmations are actually time-saving. They're only a net drain when the probability of last-minute cancellations is suitably low, at which point they're a nuisance. That was what I had assumed was the case, but apparently not. The real shift will have arrived when cancelling is so common that it's not even considered that impolite. Once again, I'm pretty sure this is a generational thing.
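To put rough numbers on the break-even point - all of them invented purely for illustration - a quick back-of-the-envelope sketch:

```python
# A rough sketch of the break-even logic above. The costs are made-up
# assumptions: a confirmation exchange takes a couple of minutes, while
# turning up to a silently-cancelled plan wastes an hour.

COST_OF_CONFIRMING = 2.0     # minutes spent on the confirmation text dance
COST_OF_WASTED_TRIP = 60.0   # minutes lost showing up to a cancelled plan

def expected_minutes_lost(p_cancel, confirm):
    """Expected time cost, assuming a confirmation always catches a cancellation."""
    return COST_OF_CONFIRMING if confirm else p_cancel * COST_OF_WASTED_TRIP

# Confirming pays for itself once the chance of a last-minute cancellation
# exceeds the ratio of the two costs.
break_even = COST_OF_CONFIRMING / COST_OF_WASTED_TRIP
print(f"Break-even cancellation probability: {break_even:.1%}")   # ~3.3%

for p in (0.01, 0.05, 0.20):
    print(p, expected_minutes_lost(p, confirm=False), expected_minutes_lost(p, confirm=True))
```

On those invented numbers, it only takes a few per cent of flakes before the confirmation dance becomes the rational norm - which is rather the depressing point.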
If narcissism and self-centredness are the psychological traits of our age, then flakiness is merely the natural result. Everyone else's time is less valuable than mine (one reasons), so what difference does it make if I change plans on someone at the last minute? Actually, it's probably worse than that - the median reasoning (such as it is) is probably closer to 'I have something better on, or can't be bothered. Ergo, I won't go'. To that extent, expecting confirmatory text messages at least indicates an ability to escape from pure solipsism and anticipate everyone else's self-centredness too. Which, at the margin, I guess is a good thing, even if the need for such anticipation is ultimately depressing.
Plus I just hate sending zillions of text messages, which is its own annoyance. Why? Same underlying reason.
Monday, October 6, 2014
Crazy is not a hypothesis
One of the criticisms I sometimes hear of behavioral finance, mostly from the rational crowd, is that one is just showing that 'people are crazy' or 'people are stupid'. This is always said dismissively, as if such an observation were trivially true and thus unworthy of elaboration.
The first indication that this criticism is vastly overblown is that, despite the claimed triviality and obviousness of people's stupidity and craziness, these traits don't seem to find their way into that many models - the agents in those models are all rational, you see.
Well, actually, it's a bit subtler than that. Stupid agents have actually been in models for quite a while now, most notably in models that include noise traders, trading on false beliefs or for wholly idiosyncratic reasons.
But agents who could be described as 'crazy' are harder to find - acting in completely counterproductive or irrational ways given a set of preferences and information. So why is that?
The reason, ultimately, is that 'crazy' is usually not a useful hypothesis. It's a blanket name given to a set of behaviors that falls outside of what could be considered rational behavior, or even partially rational (such as kludgy rules of thumb or naive reinforcement learning).
And the reason you know that crazy isn't a useful hypothesis is that it tells you very little about how someone will act, other than specifying what they won't do. How would you go about modeling the behavior of someone who was truly crazy? Maybe you could say they act at random (in which case things look like the noise traders that we labelled as stupid). But are you really sure that their behavior is random? How sure are you that it's not actually predictable in ways you haven't figured out? It seems pretty unlikely that there are large fractions of traders in bona fide need of institutionalisation in a sanatorium, if for no other reason than that someone who was really bonkers would (hopefully) struggle to get a job at the JP Morgan trading desk or acquire enough millions of dollars to move financial markets.
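To see why 'crazy equals random' just collapses back into the noise-trader story, here is a minimal toy simulation - not anyone's published model, and every number in it is an assumption chosen for illustration:

```python
# A toy market: a fixed fundamental value plus purely random noise-trader demand.
# If 'crazy' just means random, it adds variance to prices but nothing forecastable.

import random
import statistics

random.seed(1)

FUNDAMENTAL = 100.0    # assumed 'true' value of the asset
PRICE_IMPACT = 0.1     # assumed price move per unit of net noise-trader demand
N_TRADERS, N_DAYS = 50, 1000

prices = []
for _ in range(N_DAYS):
    # Each noise trader buys (+1) or sells (-1) for no reason whatsoever.
    net_demand = sum(random.choice((-1, 1)) for _ in range(N_TRADERS))
    prices.append(FUNDAMENTAL + PRICE_IMPACT * net_demand)

deviations = [p - FUNDAMENTAL for p in prices]
print("average mispricing:", round(statistics.mean(deviations), 3))         # close to zero
print("volatility of mispricing:", round(statistics.stdev(deviations), 3))  # extra variance, no direction
```

All this buys you is extra variance around fundamentals; the interesting behavioral work only starts once 'random' is replaced with a specific, predictable bias.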
The whole point of behavioral economics (and abnormal psychology before it) is to figure out how people are crazy. When someone is doing something you don't understand, you can either view it as mysterious and just say that they went mad, or you can try to figure out what's driving the behavior. But madness is an abdication of explanation.
Good psychiatry reduces the mystery of madness to specific pathologies - bipolar disorder, psychopathy, depression, autism, what have you. 'Madness' functions as the residual claimant, thankfully getting smaller each year.
Good behavioral finance ultimately strives at similar ends - maybe people are overconfident, maybe they use mental accounting, maybe they exhibit the disposition effect. These are things we can model. These things we can understand, and finally cleave the positive from the normative - if rational finance is a great description of what people should do but a lousy description of what they do do, then let's also try to figure out what people are actually doing, while still preaching the lessons we formulated from the rational models.
To say that behavioral finance is just 'people acting crazy' is somewhat like saying that all of economics can be reduced to the statement 'people respond to incentives'. In a trivial sense, it may not be far from the truth. But that statement alone doesn't tell you very much about what to expect, as the whole science is understanding the how and the why of incentives in different situations - all the hard work is still to be done, in other words.
It's also worth remembering this in real life situations - when someone you know seems to be acting crazily, it's possible they have an unusual form of mental illness as yet unknown to you, but it's also possible that you simply have inadequate models of their preferences and decision-making processes. Usually, I'd bet on the latter.
Thursday, September 25, 2014
A thing I did not know until recently
The word 'se'nnight'. It's an archaic word for 'week', being a contraction of 'seven night(s)'. The most interesting thing is that it makes it immediately clear where 'fortnight' comes from, being a similar contraction of 'fourteen night(s)'. The more you know.
Via the inimitable Mark Steyn.
Tuesday, September 23, 2014
On the dissolving of political bands and the causes impelling separation
Well, the Scottish independence referendum has come and gone, thank God. The list of grievances being cited was pathetic enough to make even the complaints of the American colonists (already laughably overblown) seem like the accounts of survivors of North Korean prison camps.
But one thing this whole debacle really illustrated is the following: very few people these days think in a principled way about secession. When, if ever, does a group of people have a right to secede from a country? Do they even need legitimate grievances? How many of them need to agree, and by what margin?
This is certainly true in America. What are the two historical events that most people in this country agree on? Firstly, that the American Revolution was a jolly good thing and entirely appropriate. And secondly, that the Civil War was fortunately won by the North, whose cause was ultimately just (still somewhat disputed in the South today, perhaps, but broadly agreed on overall).
Ponder, however, the surprising difficulty of reconciling those two positions in a principled manner. For some thoughts on the justification for the Confederacy, meet Raphael Semmes, a Captain of the Confederate States Navy. Have a read of how an actual member of the Confederacy justifies the South's position. It's all in the first couple of chapters of his book, 'Memoirs of Service Afloat', which Gutenberg has for free here.
If you're too lazy to read the original, his argument is quite simple. Firstly, he argues that the same rights that gave the states the ability to join the union gave them the right to leave - they were separate political entities capable of their own decisions, a status that predated the union. Second, he argues that the people of the North and the people of the South are fundamentally dissimilar in attitude and culture. And finally, that the North had been oppressing the South over the years, and the South simply wanted out.
Now, you may consider these arguments persuasive or unpersuasive. But before you decide, it is worth comparing them to the arguments that the American colonists claimed as their justification for seceding from Britain. Semmes' argument, if you boil it down, essentially says: we claim the same right to secede from the Union as the thirteen colonies claimed when they seceded from Britain.
Perhaps slavery is the trump card, the elimination of which (presuming for a moment that this was the sole rationale for the war from the Northern perspective, a far from obvious point) had such moral force that it overwhelmed all the other arguments. But without this logical deus ex machina, it is quite challenging to come up with a consistent set of principles under which the colonies' independence was justified but the South's was not. It's not impossible, but it's not straightforward either. And when you're done with that, be sure to reconcile it with your thoughts on independence for Kosovo, Catalonia, Chechnya, the Kurds in Turkey, ISIS in northern Iraq and other modern examples.
Or put it this way - hypothetically, had the South agreed to abolish slavery, and then done so in a way that meant reinstating it was impossible, but afterwards still insisted on secession, would their cause have been justified then?
I really don't know what most Americans would say to that one.
I don't think Americans are alone in this unthinking attitude to the question.
You saw exactly this on display in the Scottish fiasco. Most political unions don't contain explicit descriptions of how they can be dissolved. This goes doubly so for countries like Britain, which don't have a codified constitution at all.
What this means is that it's entirely unclear when, or which bits of it, can break off. Scotland at least had the virtue of being a polity with its own history, own accent, own traditions and so forth. People know who 'The Scots' are, so you don't need to explain why they should be considered their own entity. But what if Glasgow decided that, notwithstanding the opinion of the rest of Scotland, it wanted to secede from the UK itself? Could it do so? Population-wise, there are about as many people living in Glasgow (596,000) as in Montenegro (625,000) or Luxembourg (549,000). And if Glasgow, what about Inverness (72,000)?
And not only that, but the lack of formality was on display in the method of deciding the question: a single referendum, with the Scots as the only people consulted. Moreover, for a decision this momentous, you might assume you'd need some kind of supermajority. But since we can't specify that kind of thing ahead of time, the default assumption is that a simple majority will do, one time. If 50.01% of Scots want to leave, then out they go. Bad luck for the remaining 49.99%. Bad luck for any Scots yet to come who might have preferred the union. I suspect that if Cameron had thought he might lose, he would have asked for a higher standard. But a) how would he justify that higher number, and b) if he did, would he then be bound by the outcome?
For a lot of major political decisions, the public never gets consulted at all. It's not clear if the British will get a vote on whether to stay in the EU. They did get a referendum in 1975 on whether to remain in the European Economic Community (which they had joined two years earlier, and which later became the EU), but you'd be a bold man to claim that signing up to the EEC implied full knowledge of the leviathan the EU would later become. In November 2012, support for leaving the EU was 56%. Under the one-time, one-vote rule, that could have been enough to get them out. One might say that holding this vote would force exclusion from the EU on future Brits, who might not be able to change their minds. Then again, one could equally say that the vote in 1975 forced inclusion on lots of modern Brits who now also can't change their mind.
I don't pretend there are easy answers to any of these questions. The libertarians would say every individual has the right to secede from any group, which is a consistent, if difficult to implement, position.
But what the whole Scotland thing has shown is that avoiding thinking about these kinds of questions doesn't make them go away. They're going to come up periodically, and you just get incoherent answers by not having any contingency plans.
Everyone goes into marriages thinking they'll last forever. And yet we still think it prudent to have divorce procedures well known in advance.
Since I'm mostly a fan of formalism, I think countries would benefit from the same arrangements.
Sunday, September 14, 2014
Of Behavioural Red Flags and Unfunded Campaign Promises
One of the key meta-points of the rationality crowd is that one needs to explicitly think about problem-solving, because one's intuitions will frequently be wrong. In general, sophistication about biases is crucially important - awareness of the possibility that one might be wrong, and being able to spot when this might be occurring. If you don't have that, you'll keep making the same mistakes over and over, because you won't consider that you might have screwed up last time. Instead, the world will just seem confusing or unfair, as unexpected (to you) things keep happening over and over.
For me, there are a number of red flags I have that indicate that I might be screwing something up. They're not ironclad indications of mistakes, but they're nearly always cause to consider problems more carefully.
The first red flag is time-inconsistent preferences (see here and here). When you find yourself repeatedly switching back and forth between preferring X and preferring Not X, this is usually a sign that you're screwing something up. If you go back and forth once or twice, maybe you can write that off as learning due to new information. But if you keep changing your mind over and over, that's harder to explain. At least in my case, it's typically been due to some form of the hot-cold empathy gap - you make different decisions in cold, rational, calculating states versus hot, emotionally charged states, but in both types of state you fail to forecast how your views will predictably change when you revert back to the previous state. I struggle to think of examples of when repeatedly changing your mind back and forth over something is not in fact an indication of faulty reasoning of some form.
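For what it's worth, the canonical way economists generate exactly this kind of flip-flopping is quasi-hyperbolic (beta-delta) discounting. A minimal sketch, with the parameters and payoffs invented purely for illustration:

```python
# A toy illustration of time-inconsistent preferences via quasi-hyperbolic
# (beta-delta) discounting. All numbers are made-up assumptions.

BETA, DELTA = 0.7, 0.95   # present bias and per-day discount factor

def present_value(amount, days_from_now):
    """Discounted value of a payoff that arrives 'days_from_now' days away."""
    if days_from_now == 0:
        return amount
    return BETA * (DELTA ** days_from_now) * amount

# Choice: a smaller reward on day 7 versus a larger reward on day 8.
small, large = 10.0, 12.0

# Viewed from today, both rewards are distant: the larger-later option wins.
print(present_value(small, 7), present_value(large, 8))   # ~4.89 vs ~5.57

# Viewed on day 7 itself, the small reward is immediate: the preference flips.
print(present_value(small, 0), present_value(large, 1))   # 10.0 vs ~7.98
```

The same pair of payoffs gets ranked one way from a distance and the opposite way up close, which is precisely the repeated X versus Not X switching that the red flag is about.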
The second red flag is wishing for less information. This isn't always irrational - if you've only got one week to live, it might be entirely sensible to prefer not to find out that your husband or wife cheated on you 40 years ago, and just enjoy the last week in peace. (People tempted to make confessions to those on their deathbed might bear in mind that this is probably actually a selfish act, compounding what was likely an earlier selfish act). But for the most part, wishing to not find something out seems suspicious. Burying one's head in the sand is rarely the best strategy for anything, and the desire to do so seems to be connected to a form of cognitive dissonance - the ego wanting to protect the self-image, rather than admit to the possibility of a mistake. Better advice is to embrace Eugene Gendlin:
What is true is already so.
Owning up to it doesn't make it worse.
Not being open about it doesn't make it go away.
And because it's true, it is what is there to be interacted with.
Anything untrue isn't there to be lived.
People can stand what is true,
for they are already enduring it.
The third red flag is persistent deviations between stated and revealed preference (see, for instance, here and here). This is what happens when you say you want X and are willing to pay for it at the current price, and X is within your budget set, and you keep not purchasing X. The stated preference for liking X is belied by the revealed preference to not actually buy it. Being in the budget set is key - if one has a stated preference for sleeping with Scarlett Johansson but is not doing so, this is unlikely to be violating any axioms of expected utility theory, whatever else it may reveal.
As I've discussed before, I had a persistent conflict of this kind for a long time when it came to learning Spanish. I kept saying I wanted to learn it, and would try half-heartedly with teach-yourself-Spanish MP3s, but would pretty soon drift off and stop.
This inconsistency can be resolved in one of two ways. Firstly, the stated preference could be correct, and I have a self-control problem: Spanish would actually be fun to learn, but due to laziness and procrastination I kept putting it off for more instantly gratifying things. Secondly, the revealed preference could be correct: learning Spanish isn't actually fun for me, which is why I don't persist in it, and the stated preference just means that I like the idea of learning Spanish, probably out of misguided romantic notions of what it would comprise.
Having tried and failed at least twice (see: time-inconsistent preferences), I decided that the second one was true - I actually didn't want to learn Spanish. Of course, time-inconsistency being what it is, every few years it seems like a good idea to do it, and I have to remind myself of why I gave up last time.
Being in the middle of one such bout of mental backsliding recently, I was pondering why the idea of learning another language kept holding appeal for me, even after thinking about the problem as long as I had. I think it comes from a subtle aspect of what revealed preference is, this time repeated with emphasis on the appropriate section:
when you say you want X and are willing to pay for it at the current price, and X is within your budget set, and you keep not purchasing X
Nearly everything comes down to actual willingness to pay. Sure, it would be great to know Spanish. Does that mean it is great to learn Spanish? Probably not. One thinks only of the final end state of knowledge, not of the process of sitting in the car trying to think of the appropriate Spanish phrase for whatever the nice-sounding American man is saying, and worrying if the mental distraction is increasing one's risk of accidents.
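Spelled out as a checklist - with the function and the numbers below being my own illustrative assumptions rather than anything rigorous - the red flag is just a conjunction of conditions:

```python
# A minimal encoding of the condition above: a stated want, within the budget
# set, at a price you claim you'd pay, that nonetheless never gets bought.
# The Spanish-course numbers are invented for illustration.

def revealed_preference_red_flag(stated_want, willing_to_pay, price, budget, times_purchased):
    """True when stated and revealed preference are in persistent conflict."""
    within_budget = price <= budget
    price_acceptable = willing_to_pay >= price
    return stated_want and within_budget and price_acceptable and times_purchased == 0

# 'I want to learn Spanish, the course is cheap, I can easily afford it... and I never buy it.'
print(revealed_preference_red_flag(True, willing_to_pay=100, price=40, budget=2000, times_purchased=0))  # True
```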
Of course, it's in the nature of human beings to resist acknowledging opportunity cost. There's got to be a way to make it work!
And it occurred to me that straight expressions of a desire to do something have a lot in common with unfunded campaign promises. I'll learn the piano! I'll start a blog! I'll read more Russian literature!
These things all take time. If your life has lots of idle hours in it, such as if you've recently been laid off, then great, you can take up new hobbies with gay abandon.
But if your week is more or less filled with stuff already, saying you want to start some new ongoing task is pointless and unwise unless you're willing to specify what you're going to give up to make it happen. There are only so many hours in the week. If you want to spend four of them learning piano, which current activities that you enjoy are you willing to forego? Two dinners with friends? Spending Saturday morning with your kid? Half a week's worth of watching TV on the couch with your boyfriend? What?
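One way to make the 'funded promise' test concrete - with a made-up week and made-up hours, purely for illustration - is to treat the week like a zero-based budget:

```python
# A toy zero-based time budget: a new weekly commitment only counts as 'funded'
# if you name the existing hours it displaces. Activities and hours are invented.

current_week = {
    "work": 45,
    "sleep": 56,
    "dinners with friends": 6,
    "tv on the couch": 8,
    "saturday mornings with the kid": 5,
}

def is_funded(new_hours, cuts):
    """A new commitment is funded only if the named cuts actually cover its hours."""
    for activity, hours_cut in cuts.items():
        if current_week.get(activity, 0) < hours_cut:
            return False   # can't cut hours you don't currently spend
    return sum(cuts.values()) >= new_hours

print(is_funded(4, {}))                                                  # False: an unfunded promise
print(is_funded(4, {"tv on the couch": 3, "dinners with friends": 1}))   # True: paid for, painfully
```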
If you don't specify exactly what you're willing to give up, you're in the exact same position as politicians promising grand new spending schemes without specifying how they're going to pay for them. And this goes doubly so for ongoing commitments. Starting to listen to the first teach-yourself-Spanish MP3, without figuring out how you're going to make time for the remaining 89 in the series, is just the same as deciding you want to build a high speed rail from LA to San Francisco, and constructing a 144 mile section between Madera and Bakersfield without figuring out how, or if, you're going to be able to build the whole thing.
And like those politicians you scorn, you'll find yourself tempted to offer the same two siren-song mental justifications that get trotted out for irresponsible programs everywhere.
The first of the sirens is that you'll pay for the program by eliminating waste and duplication elsewhere. Doubt not that your life, much like the wretched DMV, is full of plenty of waste and duplication. But doubt not as well that this waste and duplication will prove considerably harder to get rid of than you might have bargained for. If your plan for learning Spanish is 'I'll just stop wasting any time on the internet each day'... yeah, you're not going to get very far. Your system 2 desire to learn piano is like Arnie, and your desire to click on that blog is like the California public sector unions - I know who my money's on. The amount of waste you can get rid of is probably not enough to fund very much activity at all. Just like in government.
The second siren is the desire to just run at a budget deficit. The area of deficit that almost always comes up is sleep. I'll just get up an hour earlier and practice the piano! Great - so are you planning to go to bed an hour earlier too? If so, we're back at square one, because something in the night's activities has to be cut. If not, do you really think that your glorious plan to switch from 8 hours a night to 7 hours a night, in perpetuity, is likely to prove feasible (absent long-term chemical assistance) or enjoyable (even with such assistance)? Every time I've tried, the answer has been a resounding 'no'. I say 'every time' advisedly, as this awful proposal manages to seem appealing again and again. You can in fact live on less sleep for extended periods - just ask parents with newborn children. It's also incredibly unpleasant to do so - just ask parents with newborn children. They'll do it because millions of years of evolutionary forces have caused them to feel such overwhelming attachment to their children that the sacrifice is worth it. And you propose to repeat the feat to learn the piano? That may seem like a great idea when you start out for the first night, fresh from a month of good sleeping. It seems like less of a good idea the next morning when your alarm goes off an hour earlier than usual. And I can assure you it almost certainly will not seem like a good idea after a month of being underslept, should you in fact get that far. Iterate forward, and don't start.
The real lesson is to only undertake things that you're actually willing to pay for. If you don't know what you're willing to give up, you don't actually know if you demand something, as opposed to merely want it. Confuse the two at your peril.
Wednesday, September 10, 2014
The limits of expected utility
It is probably not a surprise to most readers of this august periodical to find out that I yield to few people in my appreciation for economic reasoning. Mostly, the alternative to economic reasoning is shonky, shoddy intuitions about the world that make people worse off. Shut up and multiply is nearly always good advice - work out the optimal answer, not what makes you feel good. The alternative is, disturbingly often, more people dying or suffering just so you can feel good about a policy.
But it may perhaps be a surprise to find that I not infrequently end up in arguments with economists about the limits of economic reasoning in personal and ethical situations. There is often a tendency to confuse the 'is' and the 'ought'. We model people as maximising expected utility, usually over simple things like consumption or wealth, because these are powerful tools for predicting what people will do on a large scale. But for the question of what one ought to do, it is particularly useless to do what some economists do and say, 'Well, I do whatever maximises my utility'. No kidding! So how does that help you decide what's in your utility function? Does it include altruism? If so, to whom and how much? Do you even know? A lot of ethical dilemmas in life come from not knowing how to act, which (if you want to reduce everything to utility terms) you could say is equivalent to not knowing how much utility or disutility something will give you. There are ways to find that out, of course, but those ways mostly aren't economics.
More importantly, this argument tends to sneak in a couple of assumptions that, when brought to the fore, are not nearly as obvious as the economics advice makes them.
Firstly, it's not clear that utility functions are fixed and immutable. This is perhaps less pressing when modeling monopolistic competition among firms, but it is probably more first-order in one's own life. Could you change your preferences over time so that you eventually got more joy out of helping other people, versus only helping yourself? And if so, should you? It's hard to say. You could think about having a meta-utility function - utility over different forms of utility. For the same amount of pleasure, I'd rather get pleasure from virtue than from vice. This isn't in most models, although it probably could be included in some behavioral variant (I suspect it may all just simplify to another utility function in the end). But even to do this requires a set of ethics about what you ought to be doing - you need to specify which utility-generating behaviour is admirable and which isn't. Philosophers have debated what those ethics should be for a long time, but you'll need to look outside economics to find them.
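To make the meta-utility idea slightly more concrete, here is a minimal sketch; the weights and activities are invented for illustration, and, as the parenthetical above suspects, it really is just another utility function with the ethics smuggled in through the weights:

```python
# A toy 'meta-utility' function: the same raw pleasure counts for more when it
# comes from a source you consider admirable. All weights are made-up assumptions.

VIRTUE_PREMIUM = {
    "helping a friend move": 1.5,
    "calling your parents": 1.3,
    "watching tv": 1.0,
    "gloating": 0.5,
}

def meta_utility(activities):
    """Sum of raw pleasures, each weighted by how admirable its source is."""
    return sum(pleasure * VIRTUE_PREMIUM.get(name, 1.0) for name, pleasure in activities)

# Equal raw pleasure (10 units each), but ranked differently once the source matters.
print(meta_utility([("watching tv", 10)]))            # 10.0
print(meta_utility([("helping a friend move", 10)]))  # 15.0
```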
Mostly, people just assume that whatever they like now is good enough. Of course, they're assuming their desires don't raise any particular ethical dilemmas. You can always think about extreme cases, like someone who gains utility from torturing people. Most die-hard economists would probably still not give the torturer the advice to just do what gives them utility. They'd try to wiggle out by saying that the torturer would get caught, but that just punts the question further down the road - if they won't get caught, does that mean they should do it? You'd probably say either a) try to learn to get a different utility function that gets joy from other things (but what if they can't?), or, if they're more honest, b) your utility isn't everything - some form of deontology applies, and you just shouldn't torture people for fun simply because you find it enjoyable.
Of course, if you admit that deontology applies, some things are just wrong. It doesn't matter if the total disutility from 3^^^3 dust specks getting in people's eyes is greater than that of the torture; you'd still rather avoid the torture. Eliezer Yudkowsky implies that the answer to that question is obvious. How many economists would agree? Fewer than you'd think. I'm probably not among them either, although I don't trust my intuitions here.
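For anyone who hasn't come across the notation, 3^^^3 is Knuth's up-arrow notation, and the whole point is that the number is unimaginably vast. A minimal sketch of the definition (illustrative only - the number itself is far beyond anything computable):

```python
# Knuth's up-arrow notation: one arrow is exponentiation, and each extra arrow
# iterates the previous operation. Only tiny inputs are actually computable.

def up_arrow(a, arrows, b):
    """a (arrows) b in Knuth's up-arrow notation, for small inputs only."""
    if arrows == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, arrows - 1, up_arrow(a, arrows, b - 1))

print(up_arrow(3, 2, 3))   # 3^^3 = 3^(3^3) = 3^27 = 7,625,597,484,987
# 3^^^3 = 3^^(3^^3): a power tower of 3s roughly 7.6 trillion levels high.
```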
But fine, let's leave the hypotheticals to one side, and consider something very simple - should you call your parents more often than you do? For most young people, I'd say the answer is yes, even if you don't enjoy it that much. Partly, it's something you should endeavour to learn to enjoy. Even if this doesn't include enjoying all of the conversation, at least try to enjoy the part of being generous with one's time. Though the bigger argument is ultimately deontological - children have enormous moral obligations to their parents, and the duties of a child in the modern age include continuing to be a support for one's parents, even if you might rather be playing X-Box. If you ask me to reason this from more basic first principles, I will admit there aren't many to offer. Either one accepts the concept of duties or one doesn't.
In the end, one does one's duty not always because one enjoys it, but simply because it is duty. Finding ways to make duty pleasurable for all concerned is enormously important, and will make you more likely to carry it out, but in the end this isn't the only thing at stake. There is more to human life than your own utility, even your utility including preferences for altruism. It would be wonderful if you can do good as a part of maximising your expected utility. Failing that, it would be good to learn to get utility from doing good, perhaps by habit, even if that's not currently in your utility function. Failing that, do good anyway, simply because you ought to.
But perhaps it may be a surprise to find that I not infrequently end up in arguments with economists about the limits of economic reasoning in personal and ethical situations. There is often a tendency to confuse the 'is' and the 'ought'. We model people as maximising expected utility, usually over simple things like consumption or wealth, because these are powerful tools to help us predict what people will do on a large scale. But for the question of what one ought to do, it is particularly useless to do what some economists do and say, 'Well, I do whatever maximises my utility'. No kidding! So how does that help you decide what's in your utility function? Does it include altruism? If so, to whom and how much? Do you even know? A lot of ethical dilemmas in life come from not knowing how to act, which (if you want to reduce everything to utility terms) you could say is equivalent to not knowing how much utility or disutility something will give you. There's ways to find that out, of course, but those ways mostly aren't economics.
More importantly, this argument tends to sneak in a couple of assumptions that, when brought to the fore, are not nearly as obvious as the economics advice makes them sound.
Firstly, it's not clear that utility functions are fixed and immutable. This is perhaps less pressing when modelling monopolistic competition among firms, but it is probably more first-order in one's own life. Could you change your preferences over time so that you eventually got more joy out of helping other people, versus only helping yourself? And if so, should you? It's hard to say. You could think about having a meta-utility function - utility over different forms of utility. For the same amount of pleasure, I'd rather get pleasure from virtue than vice. This isn't in most models, although it could probably be included in some behavioural extension (I suspect it may all just simplify to another utility function in the end). But even to do this requires a set of ethics about what you ought to be doing - you need to specify which utility-generating behaviour is admirable and which isn't. Philosophers have debated what those ethics should be for a long time, but you'll need to look outside economics to find what they are.
Mostly, people just assume that whatever they like now is good enough. Of course, they're assuming their desires don't raise any particular ethical dilemmas. You can always think about extreme cases, like someone who gains utility from torturing people. Most die-hard economists would probably still not give the torturer the advice to just do what gives them utility. They'd try to wriggle out of it by saying that the torturer would get caught, but that just punts the question further down the road - if they won't get caught, does that mean they should do it? You'd probably say either a) they should try to learn a different utility function that gets joy from other things (but what if they can't?), or, if they're more honest, b) your utility isn't everything - some form of deontology applies, and you just shouldn't torture people for fun simply because you find it enjoyable.
Of course, if you admit that deontology applies, some things are just wrong. It doesn't matter if the total disutility from 3^^^3 dust specks getting in people's eyes is greater; you'd still rather avoid the torture. Eliezer Yudkowsky implies that the answer to that question is obvious. How many economists would agree? Fewer than you'd think. I'm probably not among them either, although I don't trust my intuitions here.
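To get a feel for the sheer scale of the hypothetical (this is my own back-of-the-envelope illustration, not anything from the original debate), remember that 3^^^3 in Knuth's up-arrow notation is a power tower of 3s more than seven trillion levels high. Even the fourth level of that tower is already beyond astronomical:

```python
import math

# Toy sense of scale (my illustration): 3^^^3 is a power tower of 3s whose
# height is itself over 7.6 trillion. Even a tower of height 4 has trillions
# of digits, so under naive utilitarian addition any nonzero per-speck
# disutility, multiplied by that many specks, swamps whatever finite
# disutility you assign to the torture.

t1 = 3                                  # 3
t2 = 3 ** t1                            # 27
t3 = 3 ** t2                            # 7,625,597,484,987
log10_t4 = t3 * math.log10(3)           # log10 of (3 ** t3)

print(f"3^3^3 = {t3:,}")
print(f"3^3^3^3 has about {log10_t4:,.0f} digits")
print(f"3^^^3 is a tower of 3s of height {t3:,} -- unimaginably larger still")
```

Which is exactly what makes the deontological answer a bullet to bite: the strict aggregation really does say the specks are worse, and you refuse the torture anyway.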
But fine, let's leave the hypotheticals to one side, and consider something very simple - should you call your parents more often than you do? For most young people, I'd say the answer is yes, even if you don't enjoy it that much. Partly, it's something you should endeavour to learn to enjoy. Even if this doesn't include enjoying all of the conversation, at least try to enjoy the act of being generous with your time. Though the bigger argument is ultimately deontological - children have enormous moral obligations to their parents, and the duties of a child in the modern age include continuing to be a support for one's parents, even if you might rather be playing Xbox. If you ask me to reason this from more basic first principles, I will admit there aren't many to offer. Either one accepts the concept of duties or one doesn't.
In the end, one does one's duty not always because one enjoys it, but simply because it is duty. Finding ways to make duty pleasurable for all concerned is enormously important, and will make you more likely to carry it out, but in the end this isn't the only thing at stake. There is more to human life than your own utility, even your utility including preferences for altruism. It would be wonderful if you can do good as a part of maximising your expected utility. Failing that, it would be good to learn to get utility from doing good, perhaps by habit, even if that's not currently in your utility function. Failing that, do good anyway, simply because you ought to.
Sunday, August 24, 2014
Making the living as interesting as the dead
Dinosaurs are endlessly fascinating things. They may be one of the biggest common denominators of interest among young children, both male and female. They’re huge, they’re weirdly shaped, and perhaps most importantly, they don’t exist. You see only the skeleton, and drawings. As a consequence, you’re forced to imagine what they would have been like. This means that they get a romance and curiosity attached to them that seldom attaches to the animals that the world actually has. One wants what one can’t have, after all. What is Jurassic Park, if not a combination of Frankenstein and man’s attempt to recreate the Garden of Eden?
Of course, if dinosaurs actually existed, they’d just be one more animal in the zoo. You can take this one of two ways. Either dinosaurs are overrated, or we should be more interested than we are in things that actually exist. For aesthetic reasons, I prefer the latter choice - one ought to learn to take joy in the merely real. Of course, getting people to see that is easier said than done.
Doubt it not, a giraffe is as bizarre as any dinosaur. One may appreciate this on an intellectual level, but it is hard to view one with quite the same wonder. The most effective way I've seen to demonstrate the point is at the Harvard Museum of Natural History. Firstly, the exhibits move from dinosaurs, to ice age skeletons, and then on to living animals, encouraging the juxtaposition quite naturally.
But most importantly, to get people to be intrigued by modern animals, the most successful trick is to show them not just a giraffe, but a giraffe skeleton. It encourages you to look at a giraffe the way you look at the dinosaurs. And when you do, you realise that it’s comparably tall, wackily elongated, and many of the elements of the skeleton share features with those in the previous rooms. They also show you a real giraffe next to it, completing the picture that, in the previous rooms, you had to fill in with your imagination. But most of the room is filled with skeletons of living species. A rhino or a hippo skeleton could easily be placed in the previous rooms without seeming out of place. If you judge a dinosaur less by its age and more like a child would, as a strange giant animal, the dinosaurs still exist. We just stopped noticing them.
The message is subtle but powerful. You would do well to be less fascinated with dinosaurs, and more fascinated with animals. The dead are intriguing, but so are the living. The latter have the advantage that you can still see them. So afterwards, why not take a trip to the zoo?
Sunday, August 17, 2014
On Memory and Imagination
It recently occurred to me that I have a very poor memory, but not in the standard way that people suspect.
By most metrics, I remember a lot of things. I have entire parts of my brain devoted to song lyrics, which is exactly the kind of odd thing that strikes people as notable precisely because of its triviality. I remember books I've read for a long time, and can usually talk usefully about them to people who've only recently read them. I remember ideas even better, and details of useful examples that illustrate the things I believe.
So for the most part, this qualifies me as having a reasonable memory. But nearly all the things I remember well are to do with words and concepts. This isn't universal – I’m bad at names and birthdays, for instance, but that’s about the only thing that might give it away.
The part I lack, however, is the ability to form mental pictures of what things look like. Yvain wrote about this in the context of imagination.
There was a debate, in the late 1800s, about whether "imagination" was simply a turn of phrase or a real phenomenon. That is, can people actually create images in their minds which they see vividly, or do they simply say "I saw it in my mind" as a metaphor for considering what it looked like?
Upon hearing this, my response was "How the stars was this actually a real debate? Of course we have mental imagery. Anyone who doesn't think we have mental imagery is either such a fanatical Behaviorist that she doubts the evidence of her own senses, or simply insane." Unfortunately, the professor was able to parade a long list of famous people who denied mental imagery, including some leading scientists of the era. And this was all before Behaviorism even existed.
The debate was resolved by Francis Galton, a fascinating man who among other achievements invented eugenics, the "wisdom of crowds", and standard deviation. Galton gave people some very detailed surveys, and found that some people did have mental imagery and others didn't. The ones who did had simply assumed everyone did, and the ones who didn't had simply assumed everyone didn't, to the point of coming up with absurd justifications for why they were lying or misunderstanding the question.
There was a wide spectrum of imaging ability, from about five percent of people with perfect eidetic imagery to three percent of people completely unable to form mental images.
Dr. Berman dubbed this the Typical Mind Fallacy: the human tendency to believe that one's own mental structure can be generalized to apply to everyone else's.
This holds true both for the parts of memory I have and for those I lack. Even my relatively strong ability to remember the written word is something that varies a lot from person to person. A smart friend of mine remarked years ago that he found it almost impossible to remember much from the novels he's read. I remember thinking at the time that this seemed very tragic.
For my part, I would score quite low on the ability to form mental images. It's not non-existent - there are images, but they’re hazy, and the details tend to shrink away when you try to focus in on them. When I read books, I have only a vague vision of what the people involved look like, or the places where the action is taking place. I would find it very hard to do the job of a writer and keep in my head a consistently detailed image of the physical features of a person’s appearance or the scenery. If I thought hard I could add in enough detail to make it convincing, but no amount of detail would cause me to actually have a clear picture of it myself.
I once saw a fascinating hint of how you might kludge things if you lacked a strong ability to form images and had to write about them anyway. It was in the study of a friend’s mother who writes fiction. Up on a pinboard, she had pictures of the faces of a number of famous people from various angles. It was very much an ‘of course!’ moment. To make sure an image is credible if you can’t form one yourself, describe something in front of you that actually exists. This is the equivalent of painting from a photograph instead of painting a scene entirely in your head. It seems overwhelmingly likely that any painter who can create a detailed imaginary scene is an eidetic imager, or close to it.
But the bit that goes less noticed is that imagining pictures isn't important just for wholly made-up scenery, but for memories too. The source material is still there, but you still need to recreate the scene.
And I find I’m fairly bad at forming mental images even of people I know well. I can remember particular scenes they were in, and certain facial expressions that seem familiar. But I don’t immediately have a crystal clear picture of them in my head. I’ll remember a particular still image, or a collage of them. But I can’t make the picture do arbitrary things like talk, or perform some action. I can’t imagine a different version of them; I can only remember a particular image of them that stuck for some reason.
Part of the reason that this deficit goes almost completely unnoticed is that it doesn't show up in the one situation where you might expect it, namely being bad at recognising people. I’m actually okay at that, even if I can’t always remember their name. When presented with an actual person in front of me, it’s enough to stir up recollections of what they were like, and to fill in the blanks of their appearance. Since I had only a hazy memory of what they looked like anyway, it’s less jarring to see how they've changed, which might otherwise cause me to think they were someone else.
So I can remember the faces in front of me, but not the faces that aren't. They’re stored in there, because I know them when I see them. But I can’t recall them at will.
You’d think that this would cause me to compensate by taking a lot of photos to preserve the memories. Sometimes it does, but often I’m content to remember the occasion in terms of events and stories, even if the scene isn't always precise. This is something of a tradition in the Holmes household. My parents took long trips around Europe and Asia in their youth, but I think I've seen precisely one photo from the entire time, affectionately referred to as 'the Cat Stevens photo'. But the stories from that time have been recounted many times, particularly among the people who were there. As Papa Holmes put it, when describing his relative lack of photos of his trips – ‘you go places, and you take in the scenery at the time. And you remember it, for a while. And then … you forget’.
In other words, the forgetting is okay, and is actually an important part of the process, the way death is to life. The world you remember was always impermanent anyway. Eventually, even the memory is too.
Tuesday, August 5, 2014
Thought of the Day
Curses on you, all you great problems! Let someone else beat his head against you, someone more stupid. Oh, just to rest there from the interrogator’s mother oaths and the monotonous unwinding of your whole life, from the crash of the prison locks, from the suffocating stuffiness of the cell. Only one life is allotted us, one small, short life! And we have been criminal enough to push ours in front of somebody’s machine guns, or drag it with us, still unsullied, into the dirty rubbish heap of politics. There, in the Altai, it appeared, one could live in the lowest, darkest hut on the edge of the village, next to the forest. And one could go into the woods, not for brushwood and not for mushrooms, but just to go, for no reason, and hug two tree trunks: Dear ones, you’re all I need.
-Aleksandr Solzhenitsyn, The Gulag Archipelago.
Saturday, August 2, 2014
Living on the Grid
There is something deeply appealing about a city built on a road grid. Not just because of my love of order and planning, either. You can arrive there and navigate your way around pretty easily, because most places can be accessed without making more than a couple of turns. I always like that in a place I’m travelling to. Not the fastest route, but the route I'm least likely to screw up.
It also gives rise to the wonderful phenomenon of numbering addresses by block. Growing up in Australia, the assumption that consecutive houses on the same side of the street would be two numbers apart was one of those things so baked into your way of thinking that if I’d lived a thousand years, it would never have occurred to me to do it differently. I think this is how it always is. Everybody thinks of technological change as making an iPad or something; they rarely look for improvements in something mundane and simple like how addresses are numbered. But in a grid city, you can do much better than consecutive numbering, by making the numbers go up by 100 each block and assigning rank-ordered (but otherwise arbitrary) numbers within each block. Manhattan is the epitome of this. When every street and avenue has a number, ‘312 E 28th St’ tells you exactly where the building is – between 3 and 4 blocks east of the dividing line of 5th Avenue, on 28th Street. If the city had a definite lower-left point, you wouldn’t even need the extra knowledge of the dividing avenue. In Chicago, the numbering tells you one dimension, but the streets themselves have names. So you have to, for instance, know that the downtown streets are named in order of the presidents. Well, except for Jefferson, who’s somewhere else. And there's the pesky fact that there were two ‘Adams’ presidents within the space of five (it’s Quincy Adams who gets the street, not Adams). Hey, nowhere's perfect.
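For the concretely minded, the decoding rule is simple enough to write down. This is a toy sketch with my own simplifying assumptions - real Manhattan numbering has plenty of quirks, and the baseline avenue varies - but it captures the arithmetic:

```python
# Toy decoder for Manhattan-style block numbering (a simplification).
# The hundreds digit of a cross-street address tells you how many blocks
# you are from the dividing avenue; the remainder is just a rank-order
# position within that block.

def locate(address_number, side):
    """Rough location of a cross-street address, assuming numbering starts
    at 5th Avenue and rises by 100 per block (an idealisation)."""
    block = address_number // 100      # full blocks east/west of 5th Avenue
    within = address_number % 100      # rank-order position within the block
    return (f"between {block} and {block + 1} blocks {side} of 5th Avenue "
            f"(number {within} within that block)")

print("312 E 28th St ->", locate(312, "east"))
print("45 W 8th St   ->", locate(45, "west"))
```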
The knock on grid cities is that they’re boring from a design point of view, but I’m not so sure. From high above the city at night, the lights on Roosevelt Ave stretch out in a line to the far horizon, fading off as if they go on forever. Without contours in the ground, it feels like living in the mathematician's depiction of parallel lines on an idealized infinite plane. Eventually, the lights on the two sides of the street must converge to a single point. Theoretically, this happens only at infinity, but I’d wager that somewhere in Nebraska ought to be far enough.
Wednesday, July 30, 2014
The best poem title ever
Making this a double dose of Hal G.P. Colebatch, if there has been a better poem title than:
'Observing a thong-shod pedestrian's reaction to catching his toe in the ring of a discarded condom'
I certainly haven't come across it.
Time for Malcolm Fraser to repent
"And there is another feeling that is a great consolation in poverty. I believe everyone who has been hard up has experienced it. It is a feeling of relief, almost of pleasure, at knowing yourself at last genuinely down and out. You have talked so often of going to the dogs--and well, here are the dogs, and you have reached them, and you can stand it. It takes off a lot of anxiety,"
-George Orwell, Down and Out in Paris and London
From a certain progressive standpoint, Zimbabwe, it seems, has at last gone to the dogs.
The Rhodesians would have told you that the dogs arrived years ago, and the rest of the changes were merely being more open about the kennel-like aspects of state.
Of course, this doesn't mean that things can't get worse. When it comes to forecasting the fortunes of countries, as in stockmarkets, picking exactly when the bottom has been reached is a very perilous business. It is always dangerous with basket-case countries to assume that things can't get any worse, because truly awful leaders seem to be uncannily persistent in finding a way. If Zimbabwe is remembered for anything, perhaps it will be for that.
So let's focus on a more stripped-down prediction - that installing Robert Mugabe was a mistake that everyone involved ought to feel intensely ashamed about.
Surely that's been pretty obvious for at least 25 years, right?
Ha! Sometimes it takes a while for things to get so bad that they break through the cognitive dissonance of those that helped create the disaster.
Just ask former Prime Minister of Australia Malcolm Fraser.
In one of the more disgraceful episodes of a mostly worthless (at best) Prime Ministership, Fraser was heavily involved in getting Robert Mugabe installed. As Hal G.P. Colebatch recounts:
Fraser's 1987 biographer Philip Ayres wrote: "The centrality of Fraser's part in the process leading to Zimbabwe's independence is indisputable. All the major African figures involved affirm it."
Tanzanian president Julius Nyerere said he considered Fraser's role "crucial in many parts", and Zambian president Kenneth Kaunda (whose own achievements included making his country a one-party state) called it "vital".
Mugabe is quoted by Ayres: "I got enchanted by (Fraser), we became friends, personal friends ... He's really motivated by a liberal philosophy."
Fraser's role also attracted tributes from Australian diplomats. Duncan Campbell, a former deputy secretary of the Department of Foreign Affairs and Trade, has claimed that Fraser was a "principal architect" of the agreement that installed Mugabe and that "he was largely responsible for pressing Margaret Thatcher to accept it".
Former Australian diplomat and Commonwealth specialist Tony Kevin has also claimed that Fraser "challenged Margaret Thatcher's efforts to stage-manage a moderate political solution".
In an interview in 2000, Fraser showed that he appeared to have learned absolutely nothing from the process. This was just after Mugabe had passed a law allowing white farming assets to be taken without compensation.
JOHN HIGHFIELD: Mr Fraser, what do you make of these goings on in Zimbabwe? After all it was in the late 1970s that you and your friend, Kenneth Kowunda [phonetic], persuaded Mrs Thatcher to come across to your view and give Zimbabwe independence.
MALCOLM FRASER: I find it very hard to understand the disintegration that has, in fact, occurred because I really did believe, and I think many people who knew what was happening in the country believed, that President Mugabe started very well. I can remember speaking with Dennis Norman who was a white farmer in Mugabe's first government, and he spoke very highly of him and spoke very highly of his policies at that time.
...By 2000, it had been clear for quite a while that Zimbabwe was being disgracefully managed on purely economic grounds. When Mugabe was installed, Zimbabwe's GDP per capita was $916 (in current US dollars). By 2000, it had declined by over 40%, to $535. Have a look at the graph below of the subsequent growth of some nearby countries that were poorer than Zimbabwe in 1980 and see what you think of Fraser's claim that Mugabe 'started very well'. Try putting in Botswana as well (slightly richer in 1980) and the comparison becomes even more dismal, as it towers over Zimbabwe. The most optimistic description is that things hadn't yet gone to hell as late as 1982. Heckuva job, Malcolm and Robbie!
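(For what it's worth, the 'over 40%' figure follows directly from the two per-capita numbers just quoted; here is the one-line check, using only those figures:)

```python
# Quick check of the decline implied by the two GDP per capita figures
# quoted above (current US dollars; the broader country comparison relies
# on the underlying series, which is not reproduced here).
gdp_1980 = 916.0
gdp_2000 = 535.0

decline = (gdp_1980 - gdp_2000) / gdp_1980
print(f"Decline from 1980 to 2000: {decline:.1%}")   # about 41.6%, i.e. 'over 40%'
```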
I'm - you know, what has gone wrong in the last several years I find it very difficult to pin-point, except that economic policies have not worked. He's tried to defy, I think, the international moves of the marketplace which would have reduced investment in Zimbabwe and therefore reduced employment opportunities for Zimbabweans.
But in some sense, this isn't really the striking point about the Fraser response.
The first bizarre part is Fraser's contemptible obfuscation of referring to a policy of forced, uncompensated confiscation of white farm assets as merely 'economic policy'. Nothing racial here, no siree! See no race, hear no race. Why is that? Why the absurd euphemisms?
The second bizarre part is that, 20 years later, Fraser still finds the events mysterious. Do you think this might be related to the first point, you worthless old fool?
Fraser has to skate around the racism of the Mugabe regime, because given the economic catastrophe that befell the country, this is the only advantage that the initial Mugabe boosters can claim over Smith. Sure, we replaced a system that was lifting Zimbabwe out of poverty with a brutal and corrupt regime that terrorises its citizens. But hey, at least it's not racist, like Smith!
Of course, Smith's racism was mostly of a disparate impact variety. Rhodesia was not South Africa, and the practical restrictions on blacks were far less than under Apartheid. The 1961 constitution had property and education requirements for voting rights, but made no explicit racial prohibitions (although later voting systems did). The outcome was heavily skewed towards whites, obviously, and this was almost certainly the intended effect. But if you think that having a property requirement for voting means that a system is not meaningfully democratic, then Britain in World War I was just another undemocratic oligarchy fighting against other equally undemocratic oligarchies. You also wouldn't want to praise the US founding fathers too highly.
When Hal Colebatch caned Fraser in 2008 for his shameful role in getting Mugabe installed, Fraser's response was pathetic. You will scour in vain for any description by Fraser of racism in anything Mugabe did. You will also scour in vain for any coherent explanation of what exactly was wrong with the Smith regime, except that Smith personally was a real meanie who didn't let Mugabe, who was already fighting a civil war to overthrow the government, visit his young son when he was sick, and when he ultimately died. By all means, let's then give the country to a man who at the time was already famous for running an organisation that cut the noses and lips off blacks who opposed him. Have a look, Malcolm! Have a look, if you can stomach it, and tell me again what a terrible man Ian Smith was.
In the mean time, Fraser clings to a cock-and-bull story that the real issue with Mugabe was when his wife died, and that's when it all went to hell. Great theory! Completely untestable in terms of its main aspects of course. But what about the implication - that nobody could have seen this coming, as the start was so excellent. Seems plausible, no? Except that Smith pretty accurately did predict what was going to happen. Malcolm Fraser continues to express his surprise. Smith expressed no surprise at all. Sadness, yes, but not surprise.
How about, just for a change, you consider the possibility that you got completely suckered by Mugabe, that his moderate image was all a con for your benefit, and that millions of people suffered enormously because of your gullibility. You got played, you silly old fool. You are the muppet in this story, the mark, the rube. 35 years later you still can't see that. Gee, I picked the cup that I'm super sure had the pea under it! And somehow I still lost money, it just doesn't make sense!
So now, let us return to the story I linked at the start. Exactly where have things gotten to recently?
In the harshest official policy on race and land reform in a country that has been close to bankruptcy, the 90-year old autocrat said Wednesday that whites may no longer own any land in Zimbabwe.
Let us pause and reflect on Malcolm Fraser's shame. We have known for almost 30 years that Fraser bequeathed to Zimbabwe economic and social catastrophe. We have already known of the thousands brutally killed and tortured in Mugabe's prison of a country. We have already known of the increasing hostility towards the dwindling number of remaining whites, even when it was entirely self-defeating from an economic point of view. We have known that Mugabe has long since stopped holding any semblance of free and fair democratic elections, another frequent criticism of Smith.
But finally, we have reached the nadir, from the progressive point of view - at long last, we now have a regime that is actually more racist than Ian Smith's. Smith never imposed any restrictions this draconian on blacks. The fig leaf, absurd though it was all along, is finally stripped away. There is nothing left, absolutely nothing, to recommend this regime over the one it replaced.
Malcolm Fraser never had to face the consequences of his actions. He will live out his days in comfort and peace in a stable and prosperous first world country. The same cannot be said of the citizens of Zimbabwe, both white and black, who had to live with the regime Fraser helped install.
Labels:
Australia,
Democracy,
Politics,
Race,
The Third World
Saturday, July 19, 2014
Snappy responses you weren't hoping for that nonetheless answer the question quite well
From the New York Times
In the last few years, unable to hold a list of just four grocery items in my head, I’d begun to fret a bit over my literal state of mind. So to reassure myself that nothing was amiss, just before tackling French I took a cognitive assessment called CNS Vital Signs, recommended by a psychologist friend. The results were anything but reassuring: I scored below average for my age group in nearly all of the categories, notably landing in the bottom 10th percentile on the composite memory test and in the lowest 5 percent on the visual memory test.
All this means that we adults have to work our brains hard to learn a second language. But that may be all the more reason to try, for my failed French quest yielded an unexpected benefit. After a year of struggling with the language, I retook the cognitive assessment, and the results shocked me. My scores had skyrocketed, placing me above average in seven of 10 categories, and average in the other three. My verbal memory score leapt from the bottom half to the 88th — the 88th! — percentile and my visual memory test shot from the bottom 5th percentile to the 50th. Studying a language had been like drinking from a mental fountain of youth.
What might explain such an improvement?
Regression toward the mean.
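For anyone who wants to see the mechanism rather than take the one-liner on faith, here is a small simulation (entirely made-up numbers, and no claim about the author's actual scores): score badly on a noisy test once, and your retest will tend to look like a miraculous improvement even if nothing about you has changed.

```python
import random

# Sketch of regression toward the mean: true ability is fixed, each test
# score is ability plus noise. Select people who scored in the bottom decile
# on test 1 and look at their test 2 scores -- they improve on average,
# no French lessons required. (All numbers are invented for illustration.)
random.seed(0)

N = 100_000
abilities = [random.gauss(50, 10) for _ in range(N)]
test1 = [a + random.gauss(0, 10) for a in abilities]
test2 = [a + random.gauss(0, 10) for a in abilities]

cutoff = sorted(test1)[N // 10]                      # bottom decile on test 1
low_scorers = [i for i in range(N) if test1[i] <= cutoff]

avg1 = sum(test1[i] for i in low_scorers) / len(low_scorers)
avg2 = sum(test2[i] for i in low_scorers) / len(low_scorers)
print(f"Bottom-decile group: test 1 average {avg1:.1f}, test 2 average {avg2:.1f}")
```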
Labels:
Colour Me Unconvinced,
Language,
One-Liners,
Probability
Monday, July 14, 2014
Lionel Messi and Soccer Equilibrium Outcomes
So another World Cup has come and gone. Enough water had passed under the bridge that I no longer resented Argentina for their dismal performance in 2002 when I wagered on them. I was vaguely hoping for an Argentine win, just because I would have liked to see Lionel Messi win a cup.
'Twas not to be, of course.
A very good starting point for understanding Messi is this excellent post by Nate Silver going through a whole lot of metrics of soccer success and showing that Messi is not only an outlier, he's such an outlier that his data point is visibly distinct from the rest even in simple plots. Like this one:
[chart from Nate Silver's post, showing Messi's data point as a visible outlier]
Seriously, go read the whole thing. If you're apt to be swayed by hard data, it's a pretty darn convincing case.
So what happened in the World Cup? Why didn't he seem nearly this dominant when you watched him play?
The popular narrative is that there's some inability to perform under pressure - in the big situations when it really counts, he doesn't come through with the goods. He's a choker, in other words.
This is hard to disprove exactly, but one thing that should give you pause is that with Messi on the team, Barcelona has won two FIFA Club World Cups and three UEFA Champions League titles. This at least suggests that the choking hypothesis is specific to World Cups.
So one explanation consistent with the choking hypothesis is that the World Cup is much higher stakes than the rest, hence the choking is only visible in that setting. It's possible, and hard to rule out.
But another possibility is that the difference comes from the way that opposing teams play against Messi in each setting.
Remember, a player's performance is an equilibrium outcome. It's determined by how skilfully the person plays that day (which everyone thinks about), but also by how many opposing resources are focused on the person (which very few people think about).
Let's take the limiting case, since it's easiest. Suppose I take a team comprised of Lionel Messi and ten guys from a really good high school team, and pit them against a mid-range club team. My guess is that Messi wouldn't perform that well there, and not just because he wouldn't have as many other good people to pass to. Rather, the opposing team is going to devote about 4 defenders just to covering Messi, since it's obvious that this is where the threat is. Throw enough semi-competent defense players on someone, and you can make their performance seem much less impressive.
Have a look at the pictures from the Daily Mail coverage of the game against the Netherlands. In one, Messi is surrounded by four Dutch defenders. In another, he's surrounded by three. The guy is good, but that's a pretty darn big ask of anyone.
In other words, Messi may be better than the rest of the Argentine players by a large enough margin that opposing teams will throw lots of resources into covering him, making it harder for him to shine. In soccer, like in martial arts reality (as opposed to martial arts movies), numbers matter. Jet Li may beat up 12 bad guys at a time, but if you try that in real life, you're on your way to the emergency room or the morgue, almost regardless of your martial arts skill.
The last piece of the puzzle for this hypothesis is the question of why this doesn't happen when Messi plays at Barcelona.
I'm a real newb at soccer (evidenced by me referring to it as 'soccer' - you can take the boy out of Australia, etc.), but my soccer-following friends can tell me if I'm right here or not.
My guess is that the rest of the Barcelona team is much closer to Messi's level of skill than the rest of the Argentine team. This means that if opposing teams try to triple mark Messi in a Barcelona game, the rest of the attackers will be sufficiently unguarded that they'll manage to score and the result will be the same or even worse than if Messi were totally covered. As a result, Messi goes less covered and scores more.
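If you want to see the logic laid out, here's a deliberately crude toy model (all the numbers are invented; it's meant to illustrate the equilibrium reasoning, not to measure anything):

```python
# Toy model of the marking equilibrium (made-up numbers). The defence has 4
# 'extra' defenders to split between Messi and his teammates; expected goals
# from each source fall as more defenders are assigned to it. The defence
# picks the allocation that minimises total expected goals conceded.

def expected_goals(threat, defenders):
    # Diminishing returns to marking: each extra defender halves the threat.
    return threat / (2 ** defenders)

def best_defensive_allocation(messi_threat, teammates_threat, extra_defenders=4):
    best = None
    for on_messi in range(extra_defenders + 1):
        on_rest = extra_defenders - on_messi
        total = (expected_goals(messi_threat, on_messi)
                 + expected_goals(teammates_threat, on_rest))
        if best is None or total < best[1]:
            best = (on_messi, total, expected_goals(messi_threat, on_messi))
    return best

for label, teammates in [("Argentina-like (weaker teammates)", 0.5),
                         ("Barcelona-like (teammates near Messi's level)", 1.5)]:
    on_messi, conceded, messi_output = best_defensive_allocation(2.0, teammates)
    print(f"{label}: defenders on Messi = {on_messi}, "
          f"Messi's expected goals = {messi_output:.2f}, total conceded = {conceded:.2f}")
```

With identical skill in both cases, the 'Messi' in this toy produces twice as much in the Barcelona-like scenario, simply because the defence can't afford to pile as many bodies on him.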
There's a reason that the sabermetricians (who tend to be among the most sophisticated of sports analysers) talk about wins above replacement. You need to think about the counterfactual where the person isn't there, not the direct effect of what they did or didn't do in equilibrium.
Of course, the skeptics will point out the cases where great stars did manage to individually play a big role in lifting their national teams to great success. What about Maradona, they say?
This is a fair question. Sometimes you really can get it past five defenders to win a World Cup. Maybe that's what a true champion would have done yesterday.
Or maybe the English just weren't marking as well as the Dutch were.
Or maybe, even more pertinent, the rest of the Argentine team in '86 was sufficiently better in relative terms that England couldn't afford to mark Maradona as hard. The effect of this, if true, would be to make Maradona's performance look more spectacular relative to the rest of his team - having a good team means fewer defenders on you, which means more heroics. And when that happens, you look individually more brilliant, leading to you getting all the credit and making it look like you won the game single-handedly. If you really were that much better than everybody else, you would be less likely to deliver a performance that showed this fact to a novice observer.
Not many people think in equilibrium terms. This is why we analyse data.
The data case, however, is clear. Viva Messi!
Wednesday, July 9, 2014
Things that need no elaboration to explain why they're awesome
'Vaguely Rude Place Names of the World'
It's good to see Australia get some decent representation in there.
Tuesday, July 8, 2014
Out of Sample Predictions About World Cup Rioting
So Brazil gets humiliatingly crushed in the World Cup by Germany, 7-1. While there is much to be said about this, mostly in the way of cruel mockery, it has already been done by folks much more learned on the subject than me. As a side note, while watching my stream of dubious legal status, I did reflect on how the ideal commentators for a complete drubbing are the BBC ones, since they just ooze dry and scathing humour. The commentary is full of great adjectives like 'shambolic' and 'appalling', and they managed to get in some classic digs (quoted from memory):
'This has been the worst 45 minutes of football in Brazilian history'.
'Without Neymar, could this be the worst team to make a World Cup Semi-Final?'
and my favourite of all:
'And Oscar scores the most pointless of World Cup goals...'
So since it would be mean to pile on more, let me focus instead on something where I can add more value. Given that Brazil has been crushed and humiliated, will this defeat lead to rioting? Plenty of people seem to think it will - this CBC story in the Google cache version has the sentence 'Brazil riots feared as home team routed by Germany', but this has now been scrubbed. For a prediction, let's turn to my favorite author on violence, Randall Collins (I've written about him here and here). In his excellent book, 'Violence: A Micro-Sociological Theory' (pdf of the first chapter available here), he makes the following observations (p312):
'During the 2002 World Cup, Russian soccer fans, who were watching the game with Japan on a big screen in a central Moscow square, rioted after Japan scored the one goal of the game...
The 2002 Moscow riot is both a political riot and a defeat riot, the counterpart to a victory celebration riot. As we will see, celebration riots can be just as destructive as defeat riots; and celebration riots are much more common. Losing a game is generally emotionally deflating, and the crowd lacks the ebullience and the traditional rituals (such as tearing down goal posts), which can segue from a victory celebration into a destructive riot. Defeat riots require an additional mechanism. One clue is that defeat riots seem to be more common in international competition than domestically, and where sports rivalries are highly politicized. Defeat riots depend more on features extraneous to the game, since the emotional flow of the game itself will generally de-energize the defeated and energize the victors.
So while this is an international competition, I'd say that the thrust of the Collins prediction is that, contra the predictions of many, there won't be rioting.
And the verdict?
Brazil Riots in World Cup? Nope; Bogus Photos Spread After Germany Beats Brazil 7-1 in Soccer Semi-Finals; Fake Demonstration-Protest Tweets in Belo Horizonte Trending
1-0 in Russia might have been enough to get people angry, but 7-1 just produces dejection. People don't burn buildings while dejected.
It's still too early to tell, and I'll continue to see if I (and more importantly, Mr Collins) are wrong, but my guess is that there won't be any rioting.
Seriously, if you didn't last time I talked about it a few years ago, go and read the first chapter of Collins here. I am no apologist for the general predictive power of sociology, but the man knows his stuff.
Monday, July 7, 2014
Earl Scruggs has some pretty cool friends
Apropos nothing, the great Earl Scruggs, playing 'Foggy Mountain Breakdown' (which he in fact wrote), the best banjo tune perhaps since Duelling Banjos. Check out both Steve Martin and Paul Shaffer making cameo solo appearances.
A little internet privacy is like being a yellow belt in karate
One of the things that Sam Peltzman most famously taught us (or perhaps reminded us) is that one should always pay attention to income effects, because they can show up in odd places.
Income effects are simple at a first pass - if I have more income I can buy more of a product. Most goods are normal goods, meaning that demand rises as income rises. Some goods are inferior goods, meaning that as incomes go up, people buy less of them (e.g. Walmart clothes), because they substitute to better alternatives. So far, so easy.
As microeconomists have known for a long time though, income effects can be induced by changes in the price of goods, rather than directly through income changes. If the price of rice increases, the first order effect is likely to be a substitution effect - rice is now expensive relative to wheat, so I buy more bread and less rice. But there is also an income effect: the real bundle of goods I can now purchase has shrunk, which is effectively a decrease in income.
As a result, the fact that income has decreased can induce other changes in demand which can partially or totally offset the original effect. In other words, the first pass effect is that rice consumption goes down (the substitution effect), but because I'm now poorer overall I have to cut my purchases of luxuries and buy more rice than I otherwise would. If the income effect is large enough to offset the substitution effect completely, the good is called a Giffen good - when the price of the good goes up, demand can actually increase. Robert Jensen and Nolan Miller carried out an experiment in China where they showed that for some really poor Chinese people, rice really is a Giffen good. When its price increases, they buy more of it, because they're now so poor it's the only way to get enough calories.
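A bare-bones numerical sketch of that mechanism makes it easier to see (these are my own stylised numbers, not the Jensen-Miller data): a consumer who must hit a calorie target from rice and meat on a fixed budget ends up buying more rice when rice gets dearer, because the price rise makes them too poor to afford meat calories.

```python
# Stylised Giffen-good example (invented numbers). Meat is the preferred food
# but an expensive source of calories; rice is cheap calories. The consumer
# buys as much meat as possible while still hitting a hard calorie target.

BUDGET = 10.0                    # money available
CALORIE_TARGET = 20.0            # calories needed to stay fed
CAL_RICE, CAL_MEAT = 2.0, 1.0    # calories per unit of each food
PRICE_MEAT = 4.0

def demand(price_rice):
    """Grid-search the most meat-heavy affordable bundle that still hits
    the calorie target (meat preferred, calories a hard constraint)."""
    best = None
    steps = 1000
    max_meat = BUDGET / PRICE_MEAT
    for i in range(steps + 1):
        meat = max_meat * i / steps
        money_left = BUDGET - meat * PRICE_MEAT
        rice_needed = max(0.0, (CALORIE_TARGET - meat * CAL_MEAT) / CAL_RICE)
        if rice_needed * price_rice <= money_left + 1e-9:
            best = (rice_needed, meat)   # feasible; later (meatier) bundles overwrite
    return best

for price_rice in (0.5, 1.0):
    rice, meat = demand(price_rice)
    print(f"rice price {price_rice}: buys {rice:.1f} rice and {meat:.1f} meat")
```

When the rice price doubles, rice purchases go up, not down - the income effect of being made poorer overwhelms the substitution effect.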
Which brings us to Mr Peltzman. He famously argued that income-like effects can lead to puzzling results in a wide variety of settings, most notably risk-compensation (which became known as the Peltzman Effect). If you spend government money to make roads safer or mandate seatbelt use, people will have a lower chance of dying from a given type of driving (similar to the substitution effect). But there's an income effect too - the budget set of allowable risky driving behavior has increased. Peltzman argued that this can in some cases totally offset the gains, as people drive in a more risky manner on the safer roads to maintain the same overall level of risk.
The classic case of Peltzman-like effects that people do seem to instinctively grasp is self-defence knowledge. In theory, knowing a little karate has only improved one's ability to fight relative to knowing zero karate. But the problem is the income effect. The ability to defend oneself can either be consumed entirely as an increase in safety, or it can be spent by substituting towards talking $#!& to bullies. Thus the overall level of safety can go up or down as a result of being able to fight back. The popular conception is that people overestimate their fighting ability and 'spend' more than they actually had, leading to Giffen-like behavior at low levels of self-defence knowledge.
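The same offsetting logic is easy to write down for the driving case (a toy model with invented numbers; the stark 'constant target risk' assumption is what produces a complete offset - in general the offset is only partial):

```python
# Toy version of the risk-compensation logic (made-up numbers). A driver
# likes driving aggressively but dislikes fatality risk, where
# risk = p(crash | aggression) * p(death | crash). Mandating seatbelts lowers
# p(death | crash); a driver who targets a fixed level of risk responds by
# driving more aggressively, undoing the safety gain.

def chosen_aggression(death_given_crash, risk_tolerance=0.01):
    """Drive as aggressively as possible subject to keeping total fatality
    risk at the driver's tolerance (a stark 'target risk' assumption)."""
    # In this toy, p(crash) = 0.1 * aggression, so solve
    # 0.1 * a * death_given_crash = risk_tolerance for aggression a.
    return risk_tolerance / (0.1 * death_given_crash)

for label, p_death in [("no seatbelt", 0.20), ("seatbelt", 0.10)]:
    a = chosen_aggression(p_death)
    fatality_risk = 0.1 * a * p_death
    print(f"{label}: aggression {a:.2f}, fatality risk {fatality_risk:.3f}")
```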
And now it turns out that there's inadvertent Peltzman effects going on with internet privacy.
Several researchers with Tor have described how using the internet privacy software Tor results in your IP address permanently receiving much greater scrutiny from the NSA. Even searching for Tor online is enough to get you logged.
At high levels of security, this is still probably worth it if you value privacy. Tor is an incredibly powerful tool to avoid being tracked. Unfortunately there's still lots of other exploits they can use to target your computer, but Tor itself is pretty reliable.
Since the NSA doesn't like this, they are determined to raise the income effect stakes a lot. If you get slack and only use Tor sometimes, you have almost certainly increased the chances of your behavior being tracked and monitored. Before you had the blessing of anonymity. When you embark down the road of privacy, the NSA makes sure that goes away for good. Tor is a Basilisk - a single search for it is enough to get you permanently flagged. So if you're going to start down that road, it's got to be the full retard or nothing at all.
The reality is that maintaining anonymity is hard. Really hard. It is a form of tradecraft, as the spies put it. It requires obsessive attention to detail, and a willingness to forgo a number of aspects of the internet (flash video, for instance, as well as dealing with slow loading times). And unfortunately, the predicament is much like Mrs Thatcher's position vis-à-vis the IRA - the NSA only needs to get lucky once, whereas you need to get lucky every day.
The unfortunate reality is that for most people, no protection is probably safer than a little protection. And even then, the only reason that 'no protection' offers any protection is because the internet is simply too large for the NSA to be able to store everything that goes on there. On the other hand, they are able to store everything done by Tor users.
The one saving grace is that the NSA is not actually the NKVD. For the most part, the NSA is only interested in tracking terrorists, and passing the occasional Silk Road drug dealer onto the DEA. Not only that, they are reluctant to blow the details of the data collection process (any more than they already have) by having the details of it disclosed in court cases unimportant to the NSA's mission. So they're probably not going after you for buying that Adderall online, even though they could.
On the other hand, the Snowden disclosures have massively reduced the cost of the NSA using information at trials, since a lot of the details are now already known, so maybe that protection has decreased too.
Income effects are rarely counterintuitive once they're pointed out, but they have a tendency to be lurking in places that you weren't thinking hard about.
Unfortunately, none of them are good in this story.
Income effects are simple at a first pass - if I have more income I can buy more of a product. Most goods are normal goods, meaning that demand rises as income rises. Some goods are inferior goods, meaning that as incomes go up, people buy less of them (e.g. Walmart clothes), because they substitute to better alternatives. So far, so easy.
As microeconomists have known for a long time though, income effects can be induced by changes in the price of goods, rather than directly through income changes. If the price of rice increases, the first order effect is likely to be a substitution effect - rice is now expensive relative to wheat, so I buy more bread and less rice. But there is also an income effect: the real bundle of goods I can now purchase has shrunk, which is effectively a decrease in income.
As a result, the fact that income has decreased can induce other changes in demand which can partially or totally offset the original effect. In other words, the first pass effect is that rice consumption goes down (the substitution effect), but because I'm now poorer overall I have to cut my purchases of luxuries and buy more rice than I otherwise would. If the income effect is large enough to offset the substitution effect completely, the good is called a Giffen good - when the price of the good goes up, demand can actually increase. Robert Jensen and Nolan Miller carried out an experiment in China where they showed that for some really poor Chinese people, rice really is a Giffen good. When its price increases, they buy more of it, because they're now so poor it's the only way to get enough calories.
Which brings us to Mr Peltzman. He famously argued that income-like effects can lead to puzzling results in a wide variety of settings, most notably risk-compensation (which became known as the Peltzman Effect). If you spend government money to make roads safer or mandate seatbelt use, people will have a lower chance of dying from a given type of driving (similar to the substitution effect). But there's an income effect too - the budget set of allowable risky driving behavior has increased. Peltzman argued that this can in some cases totally offset the gains, as people drive in a more risky manner on the safer roads to maintain the same overall level of risk.
The classic case of Peltzman-like effects that people do seem to instinctively grasp is self-defence knowledge. In theory, knowing a little karate has only improved one's ability to fight relative to knowing zero karate. But the problem is the income effect. The ability to defend oneself can either be consumed entirely as an increase in safety, or it can be spent by substituting towards talking $#!& to bullies. Thus the overall level of safety can go up or down as a result of being able to fight back. The popular conception is that people overestimate their fighting ability and 'spend' more than they actually had, leading to Giffen-like behavior at low levels of self-defence knowledge.
And now it turns out that there's inadvertent Peltzman effects going on with internet privacy.
Several researchers with Tor have described how using the internet privacy software Tor results in your IP address receiving permanently much greater scrutiny from the NSA. Even searching for Tor online is enough to get you logged.
At high levels of security, this is still probably worth it if you value privacy. Tor is an incredibly powerful tool to avoid being tracked. Unfortunately there's still lots of other exploits they can use to target your computer, but Tor itself is pretty reliable.
Since the NSA doesn't like this, they are determined to raise the income effect stakes a lot. If you get slack and only use Tor sometimes, you have almost certainly increased the chances of your behavior being tracked and monitored. Before you had the blessing of anonymity. When you embark down the road of privacy, the NSA makes sure that goes away for good. Tor is a Basilisk - a single search for it is enough to get you permanently flagged. So if you're going to start down that road, it's got to be the full retard or nothing at all.
The reality is that maintaining anonymity is hard. Really hard. It is a form of tradecraft, as the spies put it. It requires obsessive attention to detail, a willingness to forgo a number of aspects of the internet (flash video, for instance), and a tolerance for slow loading times. And unfortunately, the predicament is quite similar to Mrs Thatcher's position vis-a-vis the IRA - the NSA only needs to get lucky once, whereas you need to get lucky every single day.
The unfortunate reality is that for most people, no protection is probably safer than a little protection. And even then, the only reason that 'no protection' offers any protection is because the internet is simply too large for the NSA to be able to store everything that goes on there. On the other hand, they are able to store everything done by Tor users.
The one saving grace is that the NSA is not actually the NKVD. For the most part, the NSA is only interested in tracking terrorists, and passing the occasional Silk Road drug dealer on to the DEA. Not only that, they are reluctant to blow the data collection process (any more than they already have) by having its details disclosed in court cases unimportant to the NSA's mission. So they're probably not going after you for buying that Adderall online, even though they could.
On the other hand, the Snowden disclosures have massively reduced the cost to the NSA of using its information at trials, since a lot of the details are now already known, so maybe that protection has decreased too.
Income effects are rarely counterintuitive once they're pointed out, but they have a tendency to be lurking in places that you weren't thinking hard about.
Unfortunately, none of them are good in this story.
Sunday, June 29, 2014
Eating Crow
Back in 2003, in the lead-up to the Iraq War, a younger Shylock Holmes was an ardent neoconservative. Democracy was, in my view at the time, both an inherent moral good and a practical instrumental good (though I probably wouldn't have expressed it in those terms). More importantly, I took the Krauthammer position that the time to bomb a country seeking to acquire nuclear weapons was before the weapons were completed, not afterwards. Once they have the nukes, it's rather more difficult to threaten them (see, for instance, North Korea). Which is fine, as far as it goes, and indeed a short, sharp war along these lines might not have been nearly so bad. It sure would have made Iran think twice. There was, of course, a big question of 'yeah, and then what do you plan to do after the place is bombed?', to which I would have had only vague notions about trying out consensual democracy as a cure for the ongoing slow-motion calamity that is the Middle East.
Around the same time, the country group The Dixie Chicks were performing at a concert in London when lead singer Natalie Maines decided to unburden herself of the following observation:
"Just so you know, we're on the good side with y'all. We do not want this war, this violence, and we're ashamed that the President of the United States is from Texas."
It has long been a bugbear of mine when artists needlessly inject their political views into situations that do not call for them. In an audience of thousands of people, it is inconceivable that all of them will share your political preoccupations. It seems needlessly rude and antagonistic to turn everything into a political issue, particularly when most people just came to see you sing.
Not only that, but I still feel that the moral righteousness of the left at the time was enormously overblown. I remember thinking at the time that the Dixie Chicks seemed like complete morons. The whole tenor of the left's argument appeared to be mainly thinly dressed-up pacifism and a knee-jerk dislike of whatever it was George Bush was doing. Regarding the former, if Saddam had in fact possessed weapons of mass destruction, would they have felt any differently about the ex-post outcome? I sure would have, but for most of the left, I honestly don't know if it would affect their assessment. Regarding the latter, I note that the urgency of getting out of Iraq among the left seemed to drop off a cliff as soon as Barack Obama was elected. I also note, however, that the same election result made the right a whole lot more willing to consider frankly the possibility that it was a corpse-strewn fool's errand to try to turn Baghdad into Geneva. Hey, no-one said thinking in a non-partisan way was easy.
And yet...
When it comes to predictions, reality is a very equal-opportunity master. You have your views on how the world will evolve, and you may feel clever, or educated or erudite. You may feel that the people who predict differently from you are worthless imbecilic fools. And indeed they may be. But when you say that X will happen, and someone else says that Y will happen, you will find out, at least ex post, who was right.
So with more than ten years of hindsight, here are some randomly chosen recent headlines about Iraq:
et cetera, et depressing cetera.
So it is time to ask the question the Moldbug asked about Zimbabwe - given what we know now, who was right? Putting aside haggling over the specific reasoning and argumentation, who had the better overall gist of the wisdom of the Iraq war?
The answer, alas (for both my ego and the people of Iraq), is a clean sweep to the Dixie Chicks. They were right, and I was wrong. Dead wrong.
The narrow lesson, which I took to heart, is a general skepticism of democracy, especially when applied to third world hellholes, as a cure of society's ills.
But the broader lesson, which it is much easier to forget, is that one should be less certain of one's models of the world. Reality is usually messier and more surprising than you think. Overconfidence springs eternal, notwithstanding (or perhaps because of) how clever you think you are.
Let pride be taught by this rebuke, as Mr Swift put it.
Labels: History, Middle East, Military, Politics, Predictions
Monday, June 9, 2014
The minimum requirements for serious conversation
In real life (certainly in this country, though not nearly as much in Australia), I've sometimes been accused of having no filter on what I say. This isn't true, of course, but the extent of my sociological observations goes further than that of most people here. America is a country where it is crucially important not to notice things, as Steve Sailer put it. If you notice, you absolutely shouldn't comment. If you comment, you really truly ruly shouldn't dare find any of it funny or ironic, or indeed anything other than deadly serious.
How tiresome.
But these are serious times, and joking with the world at large about the wrong things does not tend to get rewarded. One must pick one's audience, so to speak. This blog, for instance, is not that audience. Everything said here is said to everyone, for all time, and able to be quoted out of context and misconstrued for years to come.
But it is oppressive to never speak one's mind freely. Paul Graham recommended drawing a wall between one's thoughts and one's speech, the former being free, the latter being restricted to what is acceptable.
I dance a finer line. With people whose character I feel I can trust, I'll say what I think. Sometimes they're surprised, because this assessment isn't actually that correlated with how long I've known a person. Some people I know and consider dear friends never fall into this category. Some people I've known for a day or two do. Those, I think, are the ones who sometimes think I have no filter.
So what determines whether I think it's likely to be worthwhile to speak freely to someone or not?
As far as I can tell, there are three main classes of requirement.
The first is that you know, without me needing to explain it to you, in a deep and instinctive sense, the difference between the following words:
All
Most
The Average
The Median
The Modal
Some
A Few
Causes
Is Correlated with
The statement 'all Australians are obnoxious' is very different from 'the average Australian is obnoxious'. People that don't get this will transform the latter into the former, and thus read it as 'he is accusing me of being obnoxious because I am Australian'. Conversation with people who think like this is always a minefield, so it's better to stick to small-talk.
Related to the above, understanding basic causal inference is equally important. Umbrellas are correlated with traffic accidents but do not cause traffic accidents - rain causes both. Prisons affect crime and crime affects prisons - prisons fill up when crime increases, and the increase in prison populations reduces crime.
You don't need to use words like 'omitted variables' and 'simultaneity', but you do need to have a good feel for these different types of models of the world, and be able to think about how they might apply to some new situation.
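If you want to see the umbrella point in a dozen lines of Python, here is a made-up simulation where rain drives both umbrellas and accidents; all the probabilities are invented for illustration, and nothing here depends on the particular numbers.

```python
# Simulated confounding: rain causes both umbrella use and accidents, so the
# two are correlated even though neither causes the other. Numbers are made up.
import random

random.seed(0)
N = 100_000
rain     = [random.random() < 0.30 for _ in range(N)]
umbrella = [(random.random() < 0.80) if r else (random.random() < 0.05) for r in rain]
accident = [(random.random() < 0.10) if r else (random.random() < 0.02) for r in rain]

def rate(outcomes, condition):
    """Estimate P(outcome | condition) from the simulated sample."""
    hits = [o for o, c in zip(outcomes, condition) if c]
    return sum(hits) / len(hits)

print("P(accident | umbrella)    =", round(rate(accident, umbrella), 3))
print("P(accident | no umbrella) =", round(rate(accident, [not u for u in umbrella]), 3))
# Accidents come out roughly three times as likely when umbrellas are out, even
# though umbrellas cause nothing here by construction; condition on rain and
# the gap vanishes.
```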
These requirements mean that your words aren't apt to be misconstrued. If you happen to get lazy and utter something like 'Australians are obnoxious' rather than specifying a precise probabilistic and causal statement, the person will not immediately assume the most inflammatory possible interpretation.
The second requirement is that you consider truth a near-complete defense to any charges levelled against pure statements about the nature of the world (as opposed to statements of opinion). If the average Australian is indeed obnoxious, one should be free to say so. You do not change the territory by yelling at the world's cartographers. It is possible that Australians will become less obnoxious if we all agree to stop discussing the fact of their obnoxious behaviour. But I would not bet on it. If in doubt, truth should be a sufficient justification for any statement purporting to claim a fact about the world in general or a model of causality in the world.
There are limiting cases where some statements might be irresponsible, like spreading information on how to make nuclear weapons from household items. In my estimation, those are pretty rare, however (actually, your view on how many statements ought to be ruled impermissible on responsibility grounds is another way of phrasing the second requirement - you probably need a low filter here). There are also basic questions of politeness when it comes to not making unhelpful statements about a single person, particularly when made to that person. All of that applies. But outside of such personal interactions, there ought to be a strong presumption that truth is a sufficient justification for any statement.
This stops every argument descending into accusations about motives. The earth revolves around the sun, regardless of whether Galileo is saying so because of a devotion to scientific truth as he perceives it, or because Galileo is a contrarian rabble-rouser who likes to intellectually stick a finger in people's eyes, or because Galileo is intellectually committed to bringing down the Catholic Church. Truth is truth.
The third is that you don't take disagreement personally. If you think X, and someone else thinks Y, and X and Y are merely statements about how the world is, then we should be able to discuss this without the fact of my disagreeing with you causing you to get angry. If disagreement alone is enough to get you pissed off, then any discussion becomes a joint balancing of the strength and veracity of an argument against my estimate of your current mood and the likely impact of the next statement on said mood. Such discussions tend to get exhausting very quickly for me. If disagreement, even about cherished beliefs, is not a source of anger, then we can talk about things.
Of course, you never quite know at first whether these requirements are going to be met. You try to feel people out about them.
But my experience is that with people who fit in these categories, I don't actually need any particular filter on what I say, although sometimes my remarks sound outlandish given popular sentiments. Usually, such people have a sense of humor about jokes on whatever the subject is too. They are worthy conversation partners.
In any case, if I do speak to you frankly, it is a mark of esteem, that I think you fit into all of the categories above.
Monday, May 26, 2014
Lies, Damn Lies, and STD Risk Statistics, Part 2
Continued from Part 1.
If you've just joined us, we're giving a good fisking to the Mayo Clinic's worthless list of STD risk factors, namely:
Having unprotected sex.
Having sexual contact with multiple partners.
Abusing alcohol or using recreational drugs.
Injecting drugs.
Being an adolescent female.
The biggest proof that their advice is completely worthless comes from the full description of the first point, 'having unprotected sex'. For a start, they don't make even the most minimal distinction between vaginal, anal and oral intercourse. But even within that, the whole thing is basically a ridiculous scare campaign:
Vaginal or anal penetration by an infected partner who is not wearing a latex condom transmits some diseases with particular efficiency. Without a condom, a man who has gonorrhea has a 70 to 80 percent chance of infecting his female partner in a single act of vaginal intercourse. Improper or inconsistent use of condoms can also increase your risk. Oral sex is less risky but may still transmit infection without a latex condom or dental dam. Dental dams — thin, square pieces of rubber made with latex or silicone — prevent skin-to-skin contact.
This one I know is in the 'deliberately misleading to fool the public' category. You know why? Because they use the weasel words 'some diseases'. They then back it up with the gonorrhea example, where one-off unprotected vaginal transmission rates are high. But people don't generally stay up late at night freaking out about getting gonorrhea, do they? As a matter of fact, you don't hear about it much, because it can be treated with antibiotics. What people actually worry about the most is HIV. Why not tell them about that instead?
So what are the chances of HIV transmission from unprotected vaginal intercourse with someone who is HIV positive? This is such a classic that I want to put the answer (and the rest of the post, which gets even more awesome by the way, though you may not believe it's possible) below the jump. Suppose a man and a woman have unprotected vaginal intercourse once.
a) If the man is HIV positive, what is the chance the women contracts HIV?
b) If the woman is HIV positive, what is the chance the man contracts HIV?
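Without spoiling the answer below the jump, the arithmetic itself is worth seeing: here is a Python sketch that turns a per-act transmission probability into a cumulative risk over many acts. The per-act figures used are ballpark estimates of the kind commonly quoted for penile-vaginal sex with an HIV-positive partner, not the post's answer; swap in whatever numbers you trust.

```python
# Not the post's answer - just the arithmetic for turning a per-act
# transmission probability into a risk over repeated exposures. The per-act
# figures below are placeholder ballpark estimates, not authoritative values.

def cumulative_risk(per_act: float, n_acts: int) -> float:
    """P(at least one transmission across n independent exposures)."""
    return 1 - (1 - per_act) ** n_acts

for label, per_act in [("male-to-female", 8 / 10_000), ("female-to-male", 4 / 10_000)]:
    print(f"{label}: single act {per_act:.4f}, "
          f"100 acts {cumulative_risk(per_act, 100):.3f}")
```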